Search Results

Search found 27581 results on 1104 pages for 'execute command'.

Page 331/1104 | < Previous Page | 327 328 329 330 331 332 333 334 335 336 337 338  | Next Page >

  • Ubuntu 12.04.2 won't boot after bumblebee installation

    - by Andrej
    First of all, sorry for my English, it's not my first language. Here is what I have done: I had a working Ubuntu 12.04 with all updates and a working Bumblebee, so I could use the optirun command and battery life was better than without Bumblebee. Then I decided to reinstall both my systems, Windows 7 and Ubuntu. I reinstalled Windows 7 and everything worked as expected, then installed Ubuntu 12.04 on the other partition. All worked perfectly. Then I installed Bumblebee according to the procedure written here: https://wiki.ubuntu.com/Bumblebee, the same steps I used before. But now, after I install the drivers, do everything written in the procedure and reboot my notebook, the system won't boot; it is simply stuck at a black screen after briefly showing the start screen. I have reinstalled Ubuntu many times already and tried everything, but whenever I try to install the Nvidia drivers it won't boot after shutting down the notebook, and the only thing I can do is reinstall the system. I have a Lenovo ThinkPad Edge E530 with an Intel Core i5-3210M CPU; the graphics cards are Intel HD 4000 and Nvidia GeForce GT 630M. After a clean install without Bumblebee, the terminal command lspci | grep VGA shows: 00:02.0 VGA compatible controller: Intel Corporation Ivy Bridge Graphics Controller (rev 09) 01:00.0 VGA compatible controller: NVIDIA Corporation Device 0de9 (rev a1) Can you suggest a solution, or at least some links to similar topics?
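
    A possible recovery sketch (not from the original post; the package names are typical for 12.04 plus the Bumblebee PPA and may differ on your system): drop to a recovery console or TTY at the black screen and remove the driver stack instead of reinstalling Ubuntu:

      # remove Bumblebee and the Nvidia driver, restore a stock X server, then reboot
      sudo apt-get purge bumblebee bumblebee-nvidia nvidia-current
      sudo apt-get install --reinstall xserver-xorg-core
      sudo reboot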

    Read the article

  • Ctrl+Z and fg to append commands

    - by avilella
    I would like to know the behaviour of Ctrl+Z and fg in bash when appending commands to be executed after a running command has finished. For example, in the sequence of commands below, I would expect the console to display "1", then "2", then "3", then "4", but I only get the last command, echo 4, after sleep 30 finishes: avilella@magneto:~$ sleep 30 && echo 1 ^Z [1]+ Stopped sleep 30 avilella@magneto:~$ fg && sleep 5 && echo 2 sleep 30 ^Z [1]+ Stopped sleep 30 avilella@magneto:~$ fg && sleep 5 && echo 3 sleep 30 ^Z [1]+ Stopped sleep 30 avilella@magneto:~$ fg && sleep 5 && echo 4 sleep 30 4 Any ideas?
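
    An explanatory sketch (not from the original post): pressing Ctrl+Z stops the job and makes fg return a non-zero "stopped" status, so bash drops the rest of that && list; only the final chain ran because the job was finally allowed to finish. One way to append work to an already-running job is to wait on its job spec instead:

      sleep 30 &              # or press Ctrl+Z on the running job, then resume it with bg
      wait %1 && echo 1       # echo 1 runs only after job %1 has actually finished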

    Read the article

  • Error: Cannot find a valid baseurl for repo: updates in ffmpeg installation

    - by athomas14super
    Hi, I have a problem installing ffmpeg. I followed this URL: https://www.crucialp.com/resources/tutorials/server-administration/how-to-install-ffmpeg-ffmpeg-php-mplayer-mencoder-flv2tool-LAME-MP3-Encoder-libog.php Setting up repositories core 100% |=========================| 1.1 kB 00:00 rpmforge 100% |=========================| 1.1 kB 00:00 Error: Cannot find a valid baseurl for repo: updates [root@02e7709 src]# yum install subversion ruby ncurses-devel Loading "installonlyn" plugin Setting up Install Process Setting up repositories core 100% |=========================| 1.1 kB 00:00 rpmforge 100% |=========================| 1.1 kB 00:00 Error: Cannot find a valid baseurl for repo: updates [root@02e7709 src]# svn checkout svn://svn.mplayerhq.hu/ffmpeg/trunk ffmpeg -bash: svn: command not found [root@02e7709 src]# So svn is not found, and yum keeps throwing the error Error: Cannot find a valid baseurl for repo: updates. I am installing on Fedora Core 6, 64-bit.
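
    A hedged workaround sketch (not from the original post; the exact repo file name and archive path are assumptions): Fedora Core 6 is long past end of life, so its update mirrors only exist under the archive tree. Pointing the updates repo at a baseurl of roughly the form http://archives.fedoraproject.org/pub/archive/fedora/linux/core/updates/6/x86_64/ and disabling the dead mirrorlist usually clears this error:

      # comment out the mirrorlist line, add a baseurl in the updates repo, then retry
      sed -i 's/^mirrorlist=/#mirrorlist=/' /etc/yum.repos.d/fedora-updates.repo
      yum clean all && yum install subversion ruby ncurses-devel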

    Read the article

  • How do I make my USB Bluetooth dongle work in Ubuntu 11.04? (Can't init device hci0: Connection timed out (110)) [closed]

    - by MaikoID
    I have a USB Bluetooth dongle root@maiko-cce-lin:~# lsusb | grep Bluetooth Bus 001 Device 007: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode) that isn't working properly; it hardly ever works, and it stops working on my next reboot. What I've tried: it isn't software blocked root@maiko-cce-lin:~# rfkill list 0: phy0: Wireless LAN Soft blocked: no Hard blocked: no 1: hci0: Bluetooth Soft blocked: no Hard blocked: no My device is recognized by hciconfig root@maiko-cce-lin:~# hciconfig -a hci0: Type: BR/EDR Bus: USB BD Address: 00:1F:81:00:01:1C ACL MTU: 1021:4 SCO MTU: 180:1 DOWN RX bytes:330 acl:0 sco:0 events:8 errors:0 TX bytes:24 acl:0 sco:0 commands:30 errors:22 Features: 0xff 0x3e 0x09 0x76 0x80 0x01 0x00 0x80 Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3 Link policy: Link mode: SLAVE ACCEPT but I can't turn on my hci interface root@maiko-cce-lin:~# hciconfig hci up Can't init device hci0: Connection timed out (110) I don't understand why. The hcitool command doesn't show any device. root@maiko-cce-lin:~# hcitool dev Devices: I've also tried restarting my Bluetooth service with this command and running all the previous commands again, but without success. root@maiko-cce-lin:~# service bluetooth restart * Stopping bluetooth [ OK ] * Starting bluetooth [ OK ] root@maiko-cce-lin:~# The dongle works if I disconnect it from USB, wait a few seconds and connect it again, so there must be a better solution (one that doesn't involve physically removing the dongle!).
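
    A hedged sketch (not from the original post): reloading the kernel's USB Bluetooth driver re-enumerates the dongle much like physically unplugging it, which may save the trip to the USB port:

      sudo hciconfig hci0 down
      sudo rmmod btusb          # unload the USB Bluetooth driver
      sudo modprobe btusb       # reload it so the dongle is re-detected
      sudo hciconfig hci0 up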

    Read the article

  • Sendmail is refusing connection after configuring SMTP relay

    - by coder
    I'm setting up sendmail on my home computer to use with my webserver. I've set it to use the SMTP server provided by my hosting company. If I use the following command, it works: sendmail -Am -t -v, and then I enter the to and from emails. But if I try the following, it does not work: sendmail -v [email protected] < test.txt The TO email is the same as in the earlier command, but in this case I haven't specified a FROM e-mail, which I think is the problem. My guess is that it's sending the mail from user@localhost, causing the SMTP server to reject it. If so, how do I make it send from [email protected]?
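
    A minimal sketch (not from the original post; the addresses below are the placeholders from the question): sendmail's -f option sets the envelope sender, so the relay no longer sees the mail as coming from user@localhost:

      sendmail -v -f [email protected] [email protected] < test.txt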

    Read the article

  • Remote File Copy - Win Server 2008

    - by Scott
    I'd like to copy backup archives from a remote server to my client machine. In the past, I've installed an FTP server on the remote machine and directed local server backups to dump into that directory. I'd then FTP in from my client machine. Just wondering if there is a simpler way to do this using Win 7 (client) and Win Server 2008. Robocopy? RDC command line options? For example, I can easily remote desktop in and drag the files from the server to my local machine. If there is an easy command line way to do this, then I don't have to set up an FTP server, which would be ideal. Thanks.

    Read the article

  • Powershell and long-running external tools?

    - by leeand00
    I'm trying to compact an MS Access database using JetComp.exe from a PowerShell script. Here are the operative lines: # 4. Run JetComp LogWrite("Begin: Running JetComp") .\JETCOMP.EXE -src: $srcDB -dest: $dstDB | Out-Null #Run this command and wait for it to finish... IfErrorExit("Error Compacting Database") LogWrite("End: Running JetComp") The JETCOMP.EXE program seems to complete long before it is actually finished, and $dstDB ends up being smaller than the compact should even make it. Initially ($srcDB) it's about 1.8 GB, and by the time the command finishes it's about 300,000 KB (about 0.29 GB); that's a long way off from 1.8 GB, which when compacted manually ends up being about 1.6 GB. Is there some sort of timeout I don't know about in PowerShell scripts? P.S. I know that when running JETCOMP.EXE manually, the system often detects it as "not responding" even though it's actually getting the job done, and waiting long enough will allow it to complete.

    Read the article

  • Why won't the floppy mount?

    - by dboarman-FissureStudios
    The 1.44 MB floppy won't mount in my Nautilus file browser. When I try to mount it, it says there is no media in the drive. Yet, I can write to the floppy through the terminal using the 'cp' command. I can enter the command: mount -t ext2 /floppy and it mounts. I have also run a check, and the disk itself is 100% clean. So, why can't I get the Nautilus browser to open up the floppy? Is there a way to see the actual floppy from the terminal?
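
    A minimal sketch (not from the original post; /dev/fd0 is the usual device name for the first floppy drive): mount the device explicitly on a directory and browse it from the terminal:

      sudo mkdir -p /media/floppy
      sudo mount -t ext2 /dev/fd0 /media/floppy
      ls -l /media/floppy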

    Read the article

  • Scripts on UNC paths take very long to run

    - by Álvaro G. Vicario
    I have several scripts in UNC paths (from Windows batch files to PHP scripts). No matter how I run them (double click on explorer, my editor's run command menu or Windows command prompt) they take really long to start running (like 14 seconds). Once they get started they run normally. This doesn't happen if I run them from mapped drives. I'm using Windows XP Professional SP3 inside an Active Directory domain and files are hosted in a Windows Server box (not sure about the version, it's an HP dedicated file server with bundled OS). Why does it happen? Is there a way to speed up things while using UNC paths?

    Read the article

  • Why does "quickly share --ppa share" abort with a "can't create" error?

    - by desgua
    I cannot figure out what I am doing wrong. The package builds OK with quickly package, and I could submit it, but I cannot update my PPA. Here is what I got:

      desgua@desguai7:~/quickly/sbk$ quickly share --ppa sbk
      Get Launchpad Settings
      Launchpad connection is ok
      ..........An error has occurred when creating debian packaging
      ERROR: can't create or update ubuntu package
      ERROR: share command failed
      Aborting

    Edit: The name of my PPA was wrong, but even using ppa:desgua/sbk it still doesn't work:

      desgua@desguai7:~/quickly/sbk$ quickly share --ppa ppa:desgua/sbk
      Get Launchpad Settings
      Traceback (most recent call last):
        File "/usr/share/quickly/templates/ubuntu-application/share.py", line 101, in launchpad = launchpadaccess.initialize_lpi()
        File "/usr/lib/python2.7/dist-packages/quickly/launchpadaccess.py", line 91, in initialize_lpi allow_access_levels=["WRITE_PRIVATE"])
        File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 539, in login_with credential_save_failed, version)
        File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 359, in _authorize_token_and_login service_root, cache, timeout, proxy_info, version)
        File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 198, in __init__ credentials, service_root, cache, timeout, proxy_info, version)
        File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/resource.py", line 460, in __init__ self._wadl = self._browser.get_wadl_application(self._root_uri)
        File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 299, in get_wadl_application response, content = self._request(url, media_type=wadl_type)
        File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 242, in _request str(url), method=method, body=data, headers=headers)
        File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 211, in _request_and_retry url, method=method, body=body, headers=headers)
        File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1414, in request (response, new_content) = self._request(conn, authority, uri, request_uri, method, body, headers, redirections, cachekey)
        File "/usr/lib/python2.7/dist-packages/launchpadlib/launchpad.py", line 126, in _request LaunchpadOAuthAwareHttp, self)._request(*args)
        File "/usr/lib/python2.7/dist-packages/lazr/restfulclient/_browser.py", line 130, in _request redirections, cachekey)
        File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1196, in _request (response, content) = self._conn_request(conn, request_uri, method, body, headers)
        File "/usr/lib/python2.7/dist-packages/httplib2/__init__.py", line 1138, in _conn_request raise ServerNotFoundError("Unable to find the server at %s" % conn.host)
      httplib2.ServerNotFoundError: Unable to find the server at api.launchpad.net
      ERROR: share command failed
      Aborting

    Any ideas? How could I troubleshoot this error?
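
    A hedged troubleshooting sketch (not from the original post): the traceback bottoms out in "Unable to find the server at api.launchpad.net", which points at name resolution or a proxy rather than at quickly itself, so check those first:

      host api.launchpad.net     # does DNS resolve from this machine?
      env | grep -i proxy        # is an unexpected proxy configured in the shell?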

    Read the article

  • CentOS 6.5 proxy bypass/no_proxy not working

    - by Naruto Uzumaki
    I am running CentOS 6.5 on my desktop. I've set the Network Proxy using the network proxy application provided under Preferences. I've also set the following exceptions: localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16 But whenever I am using wget (I'm testing the proxy settings using wget), wget tries to connect to the proxy for private addresses, while wget localhost works fine and doesn't use the proxy. I also removed all the proxy settings and set the proxy in the shell: export http_proxy="<proxy_url>:<port>" export https_proxy="<proxy_url>:<port>" export no_proxy="localhost,127.0.0.0/8,172.16.0.0/12,192.168.0.0./16" It works when I use the command wget <external_url> or wget localhost, but fails when I use the command wget <private address from the $no_proxy variable>. I also tried setting the variables in Ubuntu 14.04 and faced the same issue. Regards,
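
    A hedged note in sketch form (not from the original post; the host names and address below are placeholders): wget matches no_proxy entries as host names or domain suffixes, not as CIDR blocks, so ranges such as 172.16.0.0/12 go to the proxy regardless; listing the concrete hosts or internal domains is one way around it:

      export no_proxy="localhost,127.0.0.1,.internal.example.com,192.168.1.50"   # hosts/suffixes, not CIDR
      wget http://192.168.1.50/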

    Read the article

  • "svn: Cannot negotiate authentication mechanism" for OSX CLI and WinXp TortoiseSVN, but linux CLI works

    - by dacracot
    I had a working subversion server which used the passwd file which stores passwords in clear text. My requirements changed so that passwords now need to be encrypted. I did everything according to the book to use SASL, or so I believe, but now only the linux command line can authenticate. My OSX users, which also use command line, and my WinXp users, which use TortoiseSVN get errors. Linux versions are 1.6.11. OSX versions are 1.6.17. And TortoiseSVN versions are 1.7.4. /opt/subversion/QRpage/conf/svnserve.conf: [general] anon-access = none auth-access = write realm = ABC [sasl] use-sasl = true min-encryption = 128 max-encryption = 256 /etc/sasl2/svn.conf: pwcheck_method: auxprop auxprop_plugin: sasldb sasldb_path: /etc/sasldb2 mech_list: DIGEST-MD5 Then I add new users via: saslpasswd2 -c -f /etc/sasldb2 -u ABC dacracot But for instance OSX users get this error trying to check out: $ svn co svn://svn.nowhere.org/QRpage svn: Cannot negotiate authentication mechanism
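
    A hedged diagnostic sketch (not from the original post; package and file names assume a RHEL-style server): "Cannot negotiate authentication mechanism" often means the DIGEST-MD5 plugin is missing where svnserve runs, or the sasldb entries are not in the expected realm, so check both on the server:

      rpm -qa | grep cyrus-sasl            # cyrus-sasl-md5 provides the DIGEST-MD5 plugin
      sasldblistusers2 -f /etc/sasldb2     # entries should look like dacracot@ABC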

    Read the article

  • Strange Misleading Error [XML-20108 / AC-10006] when doing the R12 Cloning

    - by [email protected]
    During a recent Multi Node to Single Node R12 clone, I encountered a strange error when doing the database portion of the clone. The command below, 'adclonectx.pl', creates the context file: perl adclonectx.pl contextfile=$ORACLE_HOME/appsutil/SOURCE_CONTEXT_FILE.xml template=$ORACLE_HOME/appsutil/template/adxdbctx.tmp pairsfile=$ORACLE_HOME/appsutil/clone/pairsfile.txt initialnode When running this command, it dumped the error below: file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected. AC-10006: Exception - org.xml.sax.SAXParseException: file:/tmp/tmpCtxClone.xml<Line 1, Column 1>: XML-20108: (Fatal Error) Start of root element expected. thrown while creating OAVars object for file: /tmp/tmpCtxClone.xml The new database context file has been created: /opt/oracle/product/11.1.0_IOFT/appsutil/IOFT_frws35ta.xml At first sight, I suspected that the issue was with the format of the source XML file, so I compared it with a working XML file; the result was clean. The portion of the error that struck me was: Thrown while creating OAVars object for file: /tmp//dummy.xml Cause: /tmp is 100% full. Fix: either remove the old files in the /tmp directory, OR export TEMP=/new/location where there is plenty of free space.
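
    A small verification sketch (not from the article; the replacement TEMP path is a hypothetical example): confirm the root cause and give the clone tools a roomier temporary area before rerunning adclonectx.pl:

      df -h /tmp                     # check how full /tmp really is
      export TEMP=/u01/oracle/tmp    # or simply clear out the old files under /tmp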

    Read the article

  • Framework 4 Features: User Propagation to the Database

    - by Anthony Shorten
    One of the features I mentioned in a previous entry was the ability for Oracle Utilities Application Framework V4 to automatically propagate the end user to the database connection. This bears more explanation. In past releases of the Oracle Utilities Application Framework, all database connections are pooled and shared within a channel of access. So, for example, the online connections on the Business Application Server share a common pool of connections, and the batch threads in a thread pool share a separate pool of connections. The connections are pooled for performance reasons (the most expensive part of a typical transaction is opening and closing connections, so we save time by having them ready beforehand). The idea is that when a business function needs some SQL to be executed, it takes a spare connection from the pool, executes the SQL and then returns the connection back to the pool for reuse. Unfortunately, supporting a pool that is started and ready before the transactions arrive means that you need to have a shared userid (as you don't know beforehand which users will need the connections). Therefore each connection uses the same database user to execute the SQL it needs. This is generally acceptable for executing transactions, but it does not allow the DBA or other tools to ascertain which end user is actually running the transaction. In Oracle Utilities Application Framework V4, we now set the CLIENT_IDENTIFIER to the end userid (not the Login Id) when the connection is taken from the pool and used, and reset it back to blank when it is returned to the pool. The CLIENT_IDENTIFIER is a feature that is present in the Oracle Database connection information. From a monitoring perspective, when a connection to the database is actively running SQL, the end user can now be determined by querying the CLIENT_IDENTIFIER on the session object within the database. This can be done in the DBA's favorite monitoring tool (even just some SQL on the v$session table is enough). This has other implications as well. Oracle sells a lot of other security add-ons for the database, and so do third parties. If a site wants additional levels of security or auditing in the database, then the CLIENT_IDENTIFIER, if supported, is now available to be recorded or used by those products to provide additional levels of security. This facility was one of the highly requested "nice to haves" that customers would ask us about, so we now allow it to be used to enable finer-grained monitoring and additional security facilities. Note: This facility is only available for customers using the Oracle Database versions of our products.
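
    A minimal illustration (not from the article; it assumes a DBA shell on the database server with SQL*Plus and OS authentication): once CLIENT_IDENTIFIER is set on the pooled connection, the end user shows up directly in v$session:

      # list sessions whose CLIENT_IDENTIFIER has been set for the current connection
      echo 'select sid, username, client_identifier from v$session where client_identifier is not null;' | sqlplus -s / as sysdba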

    Read the article

  • CentOS listen to everything on the wire

    - by Poni
    I know there's a native command on Linux that will output (to stdout) every "event" related to a certain network interface (be it eth0 etc.), just like there's tail -f <file> to listen for file changes. I just can't find it. I want to see all events, incoming packets, even dropped ones, at the lowest level possible, in every protocol (TCP, UDP etc.). I think Wireshark is a bit too big for this, as I need something very simple just to see the events; it's for testing. What's the command?
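
    A likely candidate, shown as a hedged sketch (not named in the original post): tcpdump ships with most distributions and prints every frame an interface sees, protocol-agnostic, roughly a "tail -f" for the wire:

      tcpdump -i eth0 -n -e -vv        # -n: no name lookups, -e: link-level headers, -vv: verbose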

    Read the article

  • WOL not working

    - by Maciej Swic
    I have a Marvell Yukon integrated NIC and I have installed the WOL package on my FreeBSD-based NAS. I'm trying to wake my PC using the command "wol M:A:C:A:D:D:R". The command line spits back that it is "waking", however nothing happens. I found no reference to WOL whatsoever in the BIOS, and I enabled Magic Packet WOL in Windows on that interface. I also double-checked the MAC address and that I entered it in the correct format for "wol". I'm on Windows 7. What next? =/
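
    A hedged sketch (not from the original post; the broadcast address and MAC below are placeholders): aim the magic packet explicitly at the subnet broadcast address and the conventional discard port, in case the NAS build of wol defaults to something the NIC ignores:

      wol -i 192.168.1.255 -p 9 00:11:22:33:44:55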

    Read the article

  • How do you change your Airport or Ethernet MAC address in Mac OS X 10.6?

    - by Dave Gallagher
    I have a MacBook Pro and would like to set a custom MAC address for either my Airport WiFi card, or Ethernet port. In older versions of Mac OS X, you could do it like this: $ sudo ifconfig en0 ether 00:11:22:33:44:55 // Ethernet $ sudo ifconfig en1 lladdr AA:BB:CC:DD:EE:FF // Airport For it to work on Airport, you'd have to power it on (e.g. $ sudo ifconfig en1 up), ensure it's not connected to any wireless network, and execute the command. I'm aware such a change won't propagate across reboots. Unfortunately, this doesn't work on Mac OS X 10.6.6 anymore. Apple appears to have removed the functionality (the command fails silently). Does anyone have any idea how to do it? Thanks for any help you can offer! :)
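
    A hedged sketch (not from the original post; the MAC below is a placeholder): the workaround commonly reported for 10.6 is to leave the Airport card powered on but explicitly disassociated before setting the address with the ether keyword:

      sudo /System/Library/PrivateFrameworks/Apple80211.framework/Versions/Current/Resources/airport -z   # disassociate from any network
      sudo ifconfig en1 ether 00:11:22:33:44:55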

    Read the article

  • OS X 10.6 Snow Leopard no longer mounting an external USB drive

    - by Brant Bobby
    I have a 1TB generic external hard drive containing a single HFS partition. I originally formatted this using Disk Utility and it worked fine. Now, for some reason, it's not auto-mounting when I start up. Using mount at the command line gives the following error: $ sudo mount /dev/disk1s2 /Volumes/Test /dev/disk1s2 on /Volumes/Test: Incorrect super block. ... but if I use the mount_hfs command it works fine, mounts, and is readable. $ mount_hfs /dev/disk1s2 /Volumes/Test/ fsck gives me an error about a bad super block: $ fsck /dev/disk1 ** /dev/rdisk1 (NO WRITE) BAD SUPER BLOCK: MAGIC NUMBER WRONG ... but fsck_hfs -fn /dev/disk1s2 doesn't find any problems and reports that the volume appears to be OK. In Disk Utility, the drive appears to have a single MS-DOS partition with a curious notice about how it appears to be partitioned for Boot Camp. I have the Boot Camp HFS driver installed in Windows 7, and that OS sees the drive/partition normally. What's wrong with my disk?
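
    A hedged sketch (not from the original post): diskutil drives the same machinery the Finder uses for auto-mounting, so it is a useful first stop before raw mount/fsck:

      diskutil list                  # confirm which slice holds the HFS volume
      diskutil verifyVolume disk1s2
      diskutil mount disk1s2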

    Read the article

  • Git Daemon on linux?

    - by bwawok
    Trying to set up a simple git daemon on a Linux server and talk to it from a Windows box. On the Linux server: 1. Make a folder /home/foo/bar 2. cd to /home/foo/bar and do a git --bare init here 3. Do a touch git-daemon-export-ok 4. cd to /home/foo 5. Run the command git-daemon --verbose --reuseaddr --base-path=/home/foo --enable=receive-pack On the Windows client with TortoiseGit: 1. Do git.exe clone --progress -v "git://servername/bar" "C:\source\myFolderName" (works) 2. Create file a.txt, add it to git, and commit (works) 3. Do a git.exe pull "origin" master and get fatal: Couldn't find remote ref master (makes sense, master isn't there yet) 4. Do a git.exe push "origin" master:master and TortoiseGit hangs forever without doing anything. I realize why I can't pull from master yet on the remote branch, but why can't I push my first commit into the remote repo? Step 4 really should work. I tried it both with TortoiseGit and the msysgit command line; in both cases it hangs forever. What am I missing? The server has no useful log.
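
    A hedged sketch (not from the original post): git daemon refuses pushes unless receive-pack is allowed, and enabling it per repository plus watching the daemon in the foreground can show where a push stalls:

      git --git-dir=/home/foo/bar config daemon.receivepack true
      git daemon --verbose --reuseaddr --base-path=/home/foo --export-all --enable=receive-pack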

    Read the article

  • How to ensure images are all loaded before I reference them in my HTML canvas [closed]

    - by mark stephens
    I want to draw some images on an HTML canvas with context.drawImage(Im1 ,205,18,184,38); In order to make sure the image loads, I need to put in code like this, but then I cannot draw other things with it: var Im1 = new Image(); Im1.src="rechnung11014page1/img/1/Im1.png"; Im1.onload = function() { context.drawImage(Im1 ,205,18,184,38); } Is there a way to load all the images and then execute a block of code using several images?

    Read the article

  • Encryption of OS X to Windows SMB traffic and password

    - by Brian
    I connected to a Windows Server 2008 R2 shared folder from a Mac OS X Mountain Lion computer over the Internet using this command: mount -t smbfs //user@server/path/to/share local_folder Is traffic encrypted by default? What settings do I look at (if any) to know whether it was encrypted? If it wasn't encrypted, what's the easiest way to encrypt it? Was the password I typed at the command line encrypted? Update: sysadmin1138 has addressed the password question. Does anyone know how I can tell if the traffic itself is being encrypted?
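
    A hedged sketch (not from the original post; the interface and host names are placeholders): one practical check is to capture the SMB session from the Mac and see whether file contents travel in the clear (OS X talks SMB over TCP port 445):

      sudo tcpdump -i en0 -s 0 -A host fileserver.example.com and port 445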

    Read the article
