Search Results

Search found 879 results on 36 pages for 'karthick rm'.

Page 11/36 | < Previous Page | 7 8 9 10 11 12 13 14 15 16 17 18  | Next Page >

  • Makefile - Dependency generation

    - by Profetylen
    I am trying to create a makefile that automatically compiles and links my .cpp files into an executable via .o files. What I can't get working is automated (or even manual) dependency generation. When I uncomment the commented code below, nothing is recompiled when I run make build. All I get is "make: Nothing to be done for 'build'.", even if x.h (or any .h file) has changed. I've been trying to learn from this question: Makefile, header dependencies (dmckee's answer, especially). Why isn't this makefile working? Clarification: I can compile everything, but when I modify any header file, the .cpp files that depend on it aren't recompiled. So if I, for instance, compile my entire source, then change a #define in a header file and run make build again, I get "Nothing to be done for 'build'." (with either commented chunk of the code below uncommented).

        CC=gcc
        CFLAGS=-O2 -Wall
        LDFLAGS=-lSDL -lstdc++
        SOURCES=$(wildcard *.cpp)
        OBJECTS=$(patsubst %.cpp, obj/%.o,$(SOURCES))
        TARGET=bin/test.bin

        # Nothing happens when I uncomment the following. (automated attempt)
        #depend: .depend
        #
        #.depend: $(SOURCES)
        #	rm -f ./.depend
        #	$(CC) $(CFLAGS) -MM $^ >> ./.depend;
        #
        #include .depend

        # And nothing happens when I uncomment the following.
        # x.cpp and x.h are files in my project. (manual attempt)
        #x.o: x.cpp x.h

        clean:
        	rm -f $(TARGET)
        	rm -f $(OBJECTS)

        run: build
        	./$(TARGET)

        debug: build
        	nm $(TARGET)
        	gdb $(TARGET)

        build: $(TARGET)

        $(TARGET): $(OBJECTS)
        	@mkdir -p $(@D)
        	$(CC) $(LDFLAGS) $(OBJECTS) -o $@

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) $< -o $@

        include $(DEPENDENCIES)
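
    One hedged sketch of automatic dependency generation (not from the question): have GCC emit a .d file next to each object with -MMD -MP, then include whatever .d files exist, so editing a header dirties every .o that includes it. The paths match the question's obj/ layout; everything else is an assumption:

        # Sketch: auto-generated header dependencies via GCC's -MMD -MP
        DEPS=$(OBJECTS:.o=.d)

        obj/%.o: %.cpp
        	@mkdir -p $(@D)
        	$(CC) -c $(CFLAGS) -MMD -MP $< -o $@

        # '-include' silently ignores the missing .d files on a first build
        -include $(DEPS)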

    Read the article

  • git post-receive hook throws "command not found" error but seems to run properly and no errors when run manually

    - by Ben
    I have a post-receive hook that runs on a central git repository set up with gitolite to trigger a git pull on a staging server. It seems to work properly, but throws a "command not found" error when it runs. I am trying to track down the source of the error, but have not had any luck. Running the same commands manually does not produce an error. The error changes depending on what was done in the commit being pushed to the central repository. For instance, if a 'git rm' was committed and pushed, the error message is "remote: hooks/post-receive: line 16: Removed: command not found"; if a 'git add' was committed and pushed, it is "remote: hooks/post-receive: line 16: Merge: command not found". In either case the 'git pull' run on the staging server works correctly despite the error message. Here is the post-receive script:

        #!/bin/bash
        #
        # This script is triggered by a push to the local git repository. It will
        # ssh into a remote server and perform a git pull.
        #
        # The SSH_USER must be able to log into the remote server with a
        # passphrase-less SSH key *AND* be able to do a git pull without a passphrase.
        #
        # The command to actually perform the pull request on the remote server comes
        # from the ~/.ssh/authorized_keys file on the REMOTE_HOST and is triggered
        # by the ssh login.

        SSH_USER="remoteuser"
        REMOTE_HOST="staging.server.com"

        `ssh $SSH_USER@$REMOTE_HOST` # This is line 16

        echo "Done!"

    The command that does the git pull on the staging server is in the ssh user's ~/.ssh/authorized_keys file and is:

        command="cd /var/www/staging_site; git pull",no-port-forwarding,no-X11-forwarding,no-agent-forwarding, ssh-rsa AAAAB3NzaC1yc2EAAAABIwAA... (the rest of the public key)

    This is the actual output from removing a file from my local repo, committing it locally, and pushing it to the central git repo:

        ben@tamarack:~/thejibe/testing/web$ git rm ./testing
        rm 'testing'
        ben@tamarack:~/thejibe/testing/web$ git commit -a -m "Remove testing file"
        [master bb96e13] Remove testing file
         1 files changed, 0 insertions(+), 5 deletions(-)
         delete mode 100644 testing
        ben@tamarack:~/thejibe/testing/web$ git push
        Counting objects: 3, done.
        Delta compression using up to 2 threads.
        Compressing objects: 100% (2/2), done.
        Writing objects: 100% (2/2), 221 bytes, done.
        Total 2 (delta 1), reused 0 (delta 0)
        remote: From [email protected]:testing
        remote:    aa72ad9..bb96e13  master -> origin/master
        remote: hooks/post-receive: line 16: Removed: command not found  # The error msg
        remote: Done!
        To [email protected]:testing
           aa72ad9..bb96e13  master -> master
        ben@tamarack:~/thejibe/testing/web$

    As you can see, the post-receive script gets to the echo "Done!" line, and when I look on the staging server the git pull has been run successfully, but there's still that nagging error message. Any suggestions on where to look for the source of the error message would be greatly appreciated. I'm tempted to redirect stderr to /dev/null but would prefer to know what the problem is.
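
    For what it's worth, the backticks on line 16 are command substitution: the shell captures the output of the remote git pull (lines whose first word is "Removed", "Merge", etc.) and then tries to execute that output as a command, which is exactly the error shown. A minimal sketch of the fix, under that reading:

        # Run ssh directly; don't execute its output via backticks
        ssh "$SSH_USER@$REMOTE_HOST"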

    Read the article

  • Wildcards not being substituted

    - by user21463
        #!/bin/bash
        loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
        rm -f /mnt/fujifilmA100
        ln -s "$loc" /mnt/fujifilmA100

    For some reason the glob * doesn't get expanded to the only possible value, and $loc is left as the literal /home/chris/.gvfs/*/DCIM/100_FUJI. Does anyone have an idea why? Please note: if glob expansion fails, the pattern is not substituted. I ran the commands:

        chris@comp2008:~$ loc=`echo ~/.gvfs/*/DCIM/100_FUJI`
        chris@comp2008:~$ echo $loc
        /home/chris/.gvfs/gphoto2 mount on usb%3A001,008/DCIM/100_FUJI

    So we can see the expansion should work. I have now switched to using:

        loc=`find ~/.gvfs -name 100_FUJI`

    I am just curious why it doesn't work as is. Debugging output using sh -x:

        echo /home/chris/.gvfs/*/DCIM/100_FUJI
        loc=/home/chris/.gvfs/*/DCIM/100_FUJI
        rm -f /mnt/fujifilmA100
        ln -s /home/chris/.gvfs/*/DCIM/100_FUJI /mnt/fujifilmA100
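
    A hedged aside: capturing a glob via echo in backticks hides failures, because an unmatched pattern is passed through literally. A sketch that fails loudly instead (bash-specific; the error text is illustrative):

        shopt -s nullglob                  # unmatched globs expand to nothing
        matches=(~/.gvfs/*/DCIM/100_FUJI)
        if [ ${#matches[@]} -eq 0 ]; then
            echo "no match under ~/.gvfs - is the camera mounted for this user?" >&2
            exit 1
        fi
        loc=${matches[0]}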

    Read the article

  • How to detect APC UPS battery usage and run a script when on battery

    - by Andy Arismendi
    I have a couple of APC UPSes:

    Smart-UPS RT 6000 RM XL
    Smart-UPS RT 5000 RM XL

    Unfortunately the power in my office likes to go out (out of my control), and hence the equipment powered by these UPSes shuts down. They power a VMware infrastructure environment (VMware Lab Manager), and what I'd like to do is detect when one is on battery (say, has been for x amount of time or has x percentage left) and run a script on that event. What software do I need to detect an on-battery event and have it run a script? Thanks!
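
    One widely used option (an assumption on my part, not from the question) is apcupsd: it monitors the UPS and runs hook scripts via apccontrol when events fire, and the "x percent / x minutes" thresholds map to the BATTERYLEVEL and MINUTES directives in apcupsd.conf. A sketch of an on-battery hook; the VM shutdown script name is hypothetical:

        #!/bin/sh
        # /etc/apcupsd/onbattery - run by apcupsd when the UPS loses mains power
        logger "UPS on battery: starting Lab Manager shutdown"
        /usr/local/bin/shutdown-lab-vms.sh   # hypothetical site-specific script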

    Read the article

  • Is /dev in linux virtual?

    - by user973917
    Today at work a client ran rm -rf /dev and ended up deleting two files in /dev/shm, which broke his site. From what I had learned previously, /dev is not virtual, but a fellow technician suggested rebooting the server because /dev is virtual like /proc. Sure enough, I rebooted the server and the files the client had rm -rf'd were back. So, my question is: is /dev virtual? Is it the same kind of virtual as /proc? Is there more documentation on this? How can I restore the /dev files without a server reboot?
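
    A quick, hedged way to check on any given box is to look at what filesystem backs /dev; on modern Linux it is typically devtmpfs (populated by the kernel and udev at boot), whereas /proc is procfs:

        mount | grep 'on /dev '
        # e.g. "udev on /dev type devtmpfs (rw,...)" means /dev is kernel-managed
        # and rebuilt at boot; on such systems udev can usually recreate missing
        # device nodes without a reboot:
        udevadm trigger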

    Read the article

  • mysql: job failed to start. mysqld.sock is missing

    - by Freefri
    How can I fix this and start mysql-server? After /etc/init.d/mysql start or service mysql start I get the message "start: Job failed to start". And after running mysqld directly I get this:

        121123 11:33:33 [ERROR] Can't find messagefile '/usr/share/mysql/errmsg.sys'
        121123 11:33:33 [Note] Plugin 'FEDERATED' is disabled.
        mysqld: Unknown error 1146
        121123 11:33:33 [ERROR] Can't open the mysql.plugin table. Please run mysql_upgrade to create it.
        121123 11:33:33 InnoDB: The InnoDB memory heap is disabled
        121123 11:33:33 InnoDB: Mutexes and rw_locks use GCC atomic builtins
        121123 11:33:33 InnoDB: Compressed tables use zlib 1.2.3.4
        121123 11:33:33 InnoDB: Initializing buffer pool, size = 128.0M
        121123 11:33:33 InnoDB: Completed initialization of buffer pool
        121123 11:33:33 InnoDB: highest supported file format is Barracuda.
        121123 11:33:33 InnoDB: Waiting for the background threads to start
        121123 11:33:34 InnoDB: 1.1.8 started; log sequence number 1595675
        121123 11:33:34 [ERROR] Aborting
        121123 11:33:34 InnoDB: Starting shutdown...
        121123 11:33:35 InnoDB: Shutdown completed; log sequence number 1595675
        121123 11:33:35 [Note]

    I tried to do what mysqld tells me to do and ran mysql_upgrade:

        Looking for 'mysql' as: mysql
        Looking for 'mysqlcheck' as: mysqlcheck
        Running 'mysqlcheck' with connection arguments: '--port=3306' '--socket=/var/run/mysqld/mysqld.sock'
        mysqlcheck: Got error: 2002: Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) when trying to connect
        FATAL ERROR: Upgrade failed

    And yes, /var/run/mysqld is empty; mysqld.sock is missing. This is my file /etc/mysql/my.cnf:

        # cat /etc/mysql/my.cnf | grep sock
        # Remember to edit /etc/mysql/debian.cnf when changing the socket location.
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock
        socket = /var/run/mysqld/mysqld.sock

    Then I tried to reinstall mysql from scratch:

        apt-get purge mysql-client mysql-common mysql-server
        rm -R /var/lib/mysql
        rm -R /etc/mysql
        rm -R /var/run/mysqld
        userdel mysql
        apt-get install mysql-server mysql-client

    Then, after typing my root password for mysql, I get this error:

        Unable to set password for the MySQL "root" user

        An error occurred while setting the password for the MySQL administrative
        user. This may have happened because the account already has a password, or
        because of a communication problem with the MySQL server.

        You should check the account's password after the package installation.

        Please read the /usr/share/doc/mysql-server-5.5/README.Debian file for more
        information.

    And again I can't start mysql, getting the same messages.
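
    A hedged guess at a next step: the missing mysql.plugin table after wiping /var/lib/mysql suggests the system tables were never rebuilt, and on MySQL 5.5 they can be recreated with mysql_install_db before starting the server, assuming the packages themselves (which provide errmsg.sys) are intact. Paths are the Debian/Ubuntu defaults:

        sudo mysql_install_db --user=mysql --datadir=/var/lib/mysql
        sudo service mysql start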

    Read the article

  • "php: command not found" after changing PHP system files in OS X

    - by Aurelien Porte
    I wanted to install Symfony on Mac OS X Lion. Apparently, as MAMP was already installed on my computer, there was a problem with the "timezone" field in the php.ini file. I can't remember the error exactly, but basically Symfony's installation required a timezone like "Europe/Paris" and MAMP had apparently changed that part. It's very vague, but I've seen on the web that other people had the same issue. So I tried one of the solutions I found (without success), and now: it didn't work; I can no longer use the php command ("-bash: php: command not found"); and I can't remember the exact commands I ran to go back. Here are some potentially relevant commands I found in my history that correspond with the beginning of my problem, in this order:

        sudo mv /usr/bin/php /usr/bin/php-old
        sudo ln -s /Applications/MAMP/bin/php5/bin/php /usr/bin/php
        rm /usr/bin/php-old
        sudo cp php.ini.default /etc/php.ini
        rm php.ini        # I don't remember anymore which directory I was in
        sudo mv /usr/bin/php-old /usr/bin/php
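
    A hedged observation: the history shows /usr/bin/php-old was deleted before the final mv, so that last command had nothing to restore, and the symlink created in step 2 was presumably clobbered along the way. If the MAMP binary still exists, recreating the symlink may be enough (the MAMP path is the one from the question):

        ls /Applications/MAMP/bin/php5/bin/php    # confirm the binary survives
        sudo ln -s /Applications/MAMP/bin/php5/bin/php /usr/bin/php
        php -v                                    # verify the command resolves again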

    Read the article

  • Bash Script help required

    - by Sunil J
    I am trying to get this bash script that I found on a forum to work. I copied it into a text editor, saved it as script.sh, chmod 700, and tried to run it.

        rootdir="/usr/share/malware"
        day=`date +%Y%m%d`

        url=`echo "wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:\": :g' |\
        awk '{print \"http://lists.clean-mx.com/pipermail/viruswatch/$day/\"$3}'"|sh`

        filename=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'`

        links -dump $url$filename | awk '/Up/'|grep "TR\|exe" | awk '{print $2,$8,$10,$11,$12"\n"}' > $rootdir/>$filename

        dirname=`wget -qO - http://lists.clean-mx.com/pipermail/viruswatch/$day/thread.html |\
        awk '/\[Virus/'|tail -n 1|sed 's:": :g' |awk '{print $3}'|sed 's:.html::g'`

        rm -rf $rootdir/$dirname
        mkdir $rootdir/$dirname
        cd $rootdir
        grep "exe$" $filename |awk '{print "wget \""$5"\""}' | sh
        ls *.exe | xargs md5 >> checksums
        mv *.exe $dirname
        rm -r $rootdir/*exe*
        mv checksums $rootdir/$dirname
        mv $filename $rootdir/$dirname

    I get the following messages:

        script.sh: line 11: /usr/share/malware/: Is a directory
        script.sh: line 11: links: command not found
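
    Two hedged readings of those errors: "links: command not found" just means the links text browser isn't installed, and "Is a directory" comes from the stray ">" in "> $rootdir/>$filename" on line 11, which tries to redirect to the bare directory /usr/share/malware/ before redirecting to $filename:

        # Install the 'links' browser (Debian/Ubuntu package name):
        sudo apt-get install links
        # ...and drop the extra '>' so line 11 redirects to a file, not a directory:
        ... | awk '{print $2,$8,$10,$11,$12"\n"}' > $rootdir/$filename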

    Read the article

  • Can't delete a directory on external drive (OS X)

    - by Martin Tóth
    I have a brand new Transcend StoreJet 25M3 (external HDD) mounted to a MacBook (Leopard 10.5.8) at /Volumes/Transcend. I copied some data onto it from my old Windows (XP) machine, and now, after cleaning some stuff up, I wanted to delete some directories, but this is what happened:

        $ rmdir My\ Pictures/
        rmdir: My Pictures/: Operation not permitted

    Using Finder just asks for a password but does not delete the directory (the "moved to Trash" sound is played). I thought it was some permission "thing", but:

        $ ls -l
        drwxrwxrwx  1 martin staff 32768  5 jan 16:11 My Pictures/

        $ sudo rm -rf My\ Pictures
        rm: My Pictures: Operation not permitted

    I re-mounted and rebooted (thinking there was some file lock), but that did not help. What might have happened here? How do I delete it?
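
    A hedged thing to check: on OS X, "Operation not permitted" despite wide-open mode bits is often a BSD file flag (such as uchg, the "locked" flag) carried by files copied from elsewhere, which is separate from permissions:

        ls -ldO "My Pictures"               # capital O lists BSD flags on OS X
        sudo chflags -R nouchg "My Pictures"
        sudo rm -rf "My Pictures"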

    Read the article

  • Error when running debuild on package source

    - by Chris Wilson
    I'm attempting to build the squeak-vm source but am getting an error every time I do so. The output is:

        dpkg-buildpackage -rfakeroot -D -us -uc
        dpkg-buildpackage: export CFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export CPPFLAGS from dpkg-buildflags (origin: vendor):
        dpkg-buildpackage: export CXXFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export FFLAGS from dpkg-buildflags (origin: vendor): -g -O2
        dpkg-buildpackage: export LDFLAGS from dpkg-buildflags (origin: vendor): -Wl,-Bsymbolic-functions
        dpkg-buildpackage: source package squeak-vm
        dpkg-buildpackage: source version 1:4.0.3.2202-2
        dpkg-buildpackage: source changed by José L. Redrejo Rodríguez <[email protected]>
        dpkg-source --before-build squeak-vm-4.0.3.2202
        dpkg-buildpackage: host architecture i386
        fakeroot debian/rules clean
        dh_testdir
        dh_testroot
        rm -f build-stamp configure-stamp
        rm -f unix/cmake/config.sub unix/cmake/config.guess
        /usr/bin/make -f debian/rules unpatch
        make[1]: Entering directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
        QUILT_PATCHES=debian/patches \
                quilt --quiltrc /dev/null pop -a -R || test $? = 2
        Patch linex.patch does not remove cleanly (refresh it or enforce with -f)
        make[1]: *** [unpatch] Error 1
        make[1]: Leaving directory `/home/notgary/Projects/squeak/squeak-vm-4.0.3.2202'
        make: *** [clean] Error 2
        dpkg-buildpackage: error: fakeroot debian/rules clean gave error exit status 2
        debuild: fatal error at line 1337: dpkg-buildpackage -rfakeroot -D -us -uc failed
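
    The build dies in the clean target because quilt cannot reverse linex.patch (the tree no longer matches the patched state). A hedged way to get unstuck, taking the error's own "enforce with -f" suggestion:

        cd squeak-vm-4.0.3.2202
        QUILT_PATCHES=debian/patches quilt pop -a -f   # force-unapply the patch stack
        debuild -us -uc                                # then retry the build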

    Read the article

  • When running a shell script, how can you protect it from overwriting or truncating files?

    - by Joseph Garvin
    If, while an application is running, one of the shared libraries it uses is written to or truncated, the application will crash. Moving the file or removing it wholesale with 'rm' will not cause a crash, because the OS (Solaris in this case, but I assume this is true on Linux and other *nix as well) is smart enough not to delete the inode associated with the file while any process has it open. I have a shell script that performs installation of shared libraries. Sometimes it may be used to reinstall versions of shared libraries that are already installed, without an uninstall first. Because applications may be using the already installed shared libraries, it's important that the script is smart enough to rm the files or move them out of the way (e.g. to a 'deleted' folder that cron could empty at a time when we know no applications will be running) before installing the new ones, so that they're not overwritten or truncated. Unfortunately, recently an application crashed just after an install. Coincidence? It's difficult to tell. The real solution here is to switch over to a more robust installation method than an old gigantic shell script, but it'd be nice to have some extra protection until the switch is made. Is there any way to wrap a shell script to protect it from overwriting or truncating files (and ideally failing loudly), while still allowing them to be moved or rm'd? Standard UNIX file permissions won't do the trick because you can't distinguish moving/removing from overwriting/truncating. Aliases could work, but I'm not sure what entirety of commands would need to be aliased. I imagine something like truss/strace, except before each action it checks against a filter whether to actually do it. I don't need a perfect solution that would work even against an intentionally malicious script. Ideas I have so far (see the sketch after this list):

    Alias cp to GNU cp (not the default since I'm on Solaris) and use the --remove-destination option.
    Alias install to GNU install and use the --backup option. It might be smart enough to move the existing file to the backup file name rather than making a copy, thus preserving the inode.
    "set noclobber" in ~/.bashrc so that I/O redirection won't overwrite files.
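
    A small illustration of the noclobber idea, assuming a bash/ksh-style shell: with it set, plain redirection refuses to clobber, and only the explicit >| form writes over an existing file, while moves remain unaffected:

        set -o noclobber
        echo new > libfoo.so     # fails: "cannot overwrite existing file"
        echo new >| libfoo.so    # deliberate override still allowed
        mv libfoo.so deleted/    # moves are unaffected; the old inode survives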

    Read the article

  • Ubuntu Desktop does not load

    - by Niklas
    If I log in on my Ubuntu 14.04, I get a broken desktop (screenshot omitted). This weird behavior appeared after I executed sudo apt-get update && sudo apt-get upgrade and restarted my computer; I don't know why. I have tried the following (nothing seems to work so far).

    Fix any broken packages:

        sudo apt-get update
        sudo apt-get autoclean
        sudo apt-get clean
        sudo apt-get autoremove

    Locate any broken packages and reinstall them:

        sudo apt-get install debsums
        sudo apt-get clean
        sudo debsums_init
        sudo debsums -cs
        sudo apt-get install --reinstall $(sudo dpkg -S $(sudo debsums -c) | cut -d : -f 1 | sort -u)

    Removing some compiz files:

        rm -r ~/.cache/compizconfig-1
        rm -r ~/.compiz

    Purging NVIDIA and installing nvidia-prime:

        sudo apt-get install --reinstall ubuntu-desktop
        sudo apt-get install unity
        sudo apt-get purge nvidia* bumblebee*
        sudo apt-get install nvidia-prime
        sudo shutdown -r now

    CompizConfig Settings Manager (then, back in the UI, enabling the Unity plugin):

        sudo apt-get install compizconfig-settings-manager
        export DISPLAY=:0
        ccsm

    Unity replace, which stalled for a while and did nothing afterwards:

        unity --replace

    Some dconf resets:

        dconf reset -f /org/compiz/
        unity --reset-icons &disown

    Actually dconf did not work, and I got this error:

        error: Cannot autolaunch D-Bus without X11 $DISPLAY

    Can anybody help me with that? This is my hardware (hope it helps in any way):

    Intel® Core™ i7-3770
    ASUS GTX660TI-DC2-OG-2GD5 (NVIDIA driver is/was installed)
    ASUS P8Z77-V LX
    Corsair DIMM 8 GB DDR3-1600 Kit
    Samsung 830 series 2.5" 256 GB (Windows is installed here)
    Seagate ST31000524AS 1 TB (3/4 are reserved for files; 1/4 is for Ubuntu, 16 GB swap included)
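
    A hedged note on the dconf error above: dconf needs to reach the session's D-Bus, so when run from a text console it has to be told which display to target, mirroring the export already used for ccsm:

        export DISPLAY=:0
        dconf reset -f /org/compiz/
        unity --replace &disown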

    Read the article

  • How to install custom c library?

    - by arijit
    I just wanted to add a C library to Ubuntu that was created by Harvard University for the CS50 course. They provided the following instructions for installing the library.

    Debian, Ubuntu

    First become root, as with:

        sudo su -

    Then install the CS50 Library as follows:

        apt-get install gcc
        wget http://mirror.cs50.net/library/c/cs50-library-c-3.1.zip
        unzip cs50-library-c-3.1.zip
        rm -f cs50-library-c-3.1.zip
        cd cs50-library-c-3.1
        gcc -c -ggdb -std=c99 cs50.c -o cs50.o
        ar rcs libcs50.a cs50.o
        chmod 0644 cs50.h libcs50.a
        mkdir -p /usr/local/include
        chmod 0755 /usr/local/include
        mv -f cs50.h /usr/local/include
        mkdir -p /usr/local/lib
        chmod 0755 /usr/local/lib
        mv -f libcs50.a /usr/local/lib
        cd ..
        rm -rf cs50-library-c-3.1

    I did exactly as directed, but the compiler reported "undefined reference" to a function — the function was GetString. So I searched for a solution and found one: use the -l switch. Now when I compile I use something like:

        gcc -o hello hello.c -lcs50

    (I don't remember the exact command.) However, I cannot use the make command, which is easier to use. I understand that there is some problem with linking the library. What is a good solution to this problem?
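
    A hedged fix for the make side: make's built-in rule for "make hello" links with whatever is in LDLIBS, which is empty by default and knows nothing about -lcs50. Passing it explicitly makes the implicit rule work without writing a Makefile:

        make hello LDLIBS=-lcs50
        # roughly equivalent to: cc hello.c -lcs50 -o hello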

    Read the article

  • OpenSSL without prompt

    - by JP19
    Hi, I am using the following code to generate keys:

        apt-get -qq -y install openssl
        mkdir -p /etc/apache2/ssl
        openssl genrsa -des3 -out server.key 1024
        openssl req -new -key server.key -out server.csr
        cp server.key server.key.org
        openssl rsa -in server.key.org -out server.key
        openssl x509 -req -days 12000 -in server.csr -signkey server.key -out server.crt
        mv server.crt /etc/apache2/ssl/cert.pem
        mv server.key /etc/apache2/ssl/cert.key
        rm -f server.key.org
        rm -f server.csr

    How can I skip the passphrase prompting? Thanks, JP
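
    A hedged sketch of a fully non-interactive version: dropping -des3 creates an unencrypted key (so there is no passphrase prompt and no decrypt step is needed), and -subj answers the CSR questions up front. The subject values are placeholders:

        openssl genrsa -out server.key 1024
        openssl req -new -key server.key -out server.csr \
            -subj "/C=US/ST=State/L=City/O=Example/CN=www.example.com"
        openssl x509 -req -days 12000 -in server.csr -signkey server.key -out server.crt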

    Read the article

  • Need to run a .sh as root on boot or login

    - by Graymayre
    Still new to Linux and running Ubuntu 12.10. I have a wireless stick (AE2500) with known issues that have been partially solved using ndiswrapper. However, to use it I must run the same scripts every time I reboot, effectively uninstalling and reinstalling the driver. I made a .sh file to run each time to make this easy, but I must do the sudo login every time. There are three things I am looking for, and although not all are necessary to solve this particular problem, I would still like to know them all for learning purposes:

    run scripts or file.sh on boot (as well as other programs)
    run scripts or file.sh automatically with root privileges
    make the install permanent, so as not to have to go through the process every time

    Any additional information that can help me regarding this that I did not think to ask (including streamlining my commands), or general knowledge, would be greatly appreciated. Following are the contents of the file. I pretty much made it just as I would have entered it:

        cd ~/ndiswrapper-1.58rc1
        sudo modprobe -rf ndiswrapper
        sudo rm /etc/modprobe.d/ndiswrapper.conf
        sudo rm -r /etc/ndiswrapper/*
        sudo depmod -a
        sudo make uninstall
        sudo make
        sudo make install
        sudo ndiswrapper -i bcmwlhigh5.inf
        ndiswrapper -l
        sudo modprobe ndiswrapper
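
    One hedged way to cover the first two points on 12.10 is /etc/rc.local, which runs as root at the end of boot, so a script called from it needs no sudo prefixes inside (the script path below is a placeholder for wherever the .sh actually lives):

        #!/bin/sh -e
        #
        # /etc/rc.local - executed as root after all other init scripts
        /home/youruser/reinstall-ae2500.sh
        exit 0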

    Read the article

  • Accidentally deleted /opt/local/bin without backup. Any help?

    - by Aaron
    Hi all, I'm on Mac OS X 10.5.8. I was recently uninstalling mysql5 from /opt/local/bin. I typed:

        rm -rf /opt/local/bin mysql*

    instead of:

        rm -rf /opt/local/bin/mysql*

    This deleted my entire /opt/local/bin directory, which puts me in a bit of a bind. Is there any way to recover these files? If not, I have a friend who is using a similar set of programs; would it be possible to use the contents of his folder? If I end up needing to re-install everything in this folder, what is the best way to go about it? Thanks in advance!
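
    A hedged suggestion, assuming /opt/local is a MacPorts prefix (its default location): the port tool can rebuild its own binaries, so forcing a rebuild of everything installed may restore /opt/local/bin without borrowing files from anyone (verify the exact flags with port help upgrade):

        port installed > ports-list.txt          # record what was installed
        sudo port -nf upgrade --force installed  # rebuild/reactivate all ports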

    Read the article

  • Using rsync with link-dest from HFS to NTFS

    - by Tom
    Hi, I'm having a problem with rsync. I'm on a Mac and I'd like to sync my everyday changes from my HFS+ partition to my NTFS-formatted networked drive. Pretty simple, and everything goes well, except that it re-syncs every file each time. Here's my script:

        #! /bin/sh

        snapshot_dir=/Volumes/USB_Storage/Backups
        snapshot_id=`date +%Y%m%d%H%M`

        /usr/bin/rsync -a \
          --verbose \
          --delete --delete-excluded \
          --human-readable --progress \
          --one-file-system \
          --partial \
          --modify-window=1 \
          --exclude-from=.backup_excludes \
          --link-dest ../current \
          /Users/tommybergeron/Desktop/Brainpad \
          $snapshot_dir/in-progress

        cd $snapshot_dir
        rm -rf $snapshot_id
        mv in-progress $snapshot_id
        rm -f current
        ln -s $snapshot_id $snapshot_dir/current

    Could someone help me out, please? I've been searching for like two hours and I'm still clueless. Thanks so much.
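
    A hedged guess at the cause: -a implies preserving permissions, owner, and group (-p -o -g), none of which NTFS can faithfully store, so every file compares as "changed" on the next run and gets re-copied instead of hard-linked via --link-dest. A sketch that keeps recursion, symlinks, and times but drops the untranslatable metadata:

        /usr/bin/rsync -rlt \
          --modify-window=1 \
          --delete --delete-excluded \
          --link-dest ../current \
          /Users/tommybergeron/Desktop/Brainpad \
          $snapshot_dir/in-progress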

    Read the article

  • Why can't I install from software center?

    - by user64720
    There was a problem upgrading to Firefox 13. This error kept returning:

        /var/cache/apt/archives/firefox_13.0+build1-0ubuntu0.12.04.1_i386.deb
        W: Waited for dpkg --assert-multi-arch but was not there - dpkgGo (10: There are no "child" processes)

    Now it seems there is some problem with dpkg, and I can't install anything from the Software Center. I already tried to clean previous packages with sudo rm /var/lib/apt/lists/* -vf and then sudo apt-get update; it didn't work. When running sudo dpkg --configure -a, I get this:

        dpkg: problems with dependencies prevent the configuration of firefox-globalmenu:
         firefox-globalmenu depends on firefox (= 13.0+build1-0ubuntu0.12.04.1); however:
          The package is not installed.
        dpkg: error while processing firefox-globalmenu (--configure):
         problems with dependencies - leaving unconfigured
        Errors were encountered while processing:
         firefox-globalmenu

    What should I do to fix this?

    EDIT: I don't have the necessary expertise to understand why what I did worked or what was causing the conflict, but anyway: since there was a problem with firefox-globalmenu, I went to the Synaptic package manager, removed this particular package and reinstalled it. After that, I was able to install Firefox from Synaptic and also any other applications from the Software Center. However, there was still a problem; when running sudo apt-get update, the following kept returning:

        Failed to get gzip:/var/lib/apt/lists/partial/archive.ubuntu.com_ubuntu_dists_precise_main_binary-i386_Packages  Verification code hash doesn't match
        E: Some archives index failed at being downloaded. They have been ignored, or older copies are used instead.

    So I typed sudo rm /var/lib/apt/lists/* -vf in the terminal and then sudo apt-get update again, and everything is fine now. I did this before an answer was posted; anyway, I agree the problem was that particular package and its removal, so I'll mark the answer below as accepted.

    Read the article

  • How does one delete a directory filled with files and other subdirectories permanently, bypassing the trash, from the command line in OS X?

    - by Jon
    So my command-line skills are a little rusty, and I'm having trouble remembering the differences between the meanings of flags in different distros and OSes. I also don't really remember all my technical lingo, so manpages seem really unclear. Basically, I'm on Mac OS X and want to delete a directory along with all of its contents. What I'm mainly concerned about, I suppose, is that it'll delete literally ALL of the references within the directory, including ../ and ../<everything else, including ../'s own ../>, and then just totally screw up my entire system. Which of these do I want to run?

        $ rm -R dir-name/

    or

        $ rm -r
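
    For what it's worth, a hedged note: in BSD rm (which is what OS X ships), -r and -R are synonyms, and recursion stays inside the named directory; the ../ entry is never followed upward:

        rm -R dir-name/   # removes dir-name and everything below it
        rm -r dir-name/   # identical behavior; parent directories are untouched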

    Read the article

  • Elastix, how to MOVE files from one server to another server?

    - by yudayyy
    In my office I have to schedule moving files from one computer to another (both running Elastix). My idea is to use cron, scp, and rm to do this. So here is the script I use:

        scp -r /home/data/* [email protected]:/home/data1 && rm -r /home/data/*

    That script does the copy but does not remove the source files. I already read this question: How to _MOVE_ files with scp? The problem is, the computer doesn't have an internet connection, so I cannot install rsync on my Elastix machine:

        yum install rsync
        Loaded plugins: fastestmirror
        Loading mirror speeds from cached hostfile

    and then it freezes. Any idea how to do this?
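
    A hedged workaround with plain scp and rm: copy and delete per file, so one failure keeps its own source (and says so in cron's mail) instead of silently skipping the whole rm. The remote user/host below is a placeholder, since the original address is obscured:

        for f in /home/data/*; do
            scp -r "$f" user@otherhost:/home/data1 && rm -rf "$f" \
                || echo "copy failed, keeping: $f" >&2
        done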

    Read the article

  • Why does deleting from the command line take significantly less time than from a GUI?

    - by Jordan Plahn
    So this is probably the dumbest question you'll read today, but it's something I just wondered about as I was deleting a dozen or so images from my computer. With a quick rm -rf command on the directory's contents, all the images were gone in a snap. When I drag the same dozen or so images to a trash can/recycle bin, it sometimes takes 10 seconds or more. Now I'm sure some of it comes from the overhead of the GUI and such, and some of it may be the fact that the file still "exists" in some form if it's put into the recycle bin, but is there anything else that accounts for such a huge time disparity? Are "rm" and "delete" just such fundamentally different commands that I'm trying to compare apples and oranges? Enlighten me, please!

    Read the article

  • How to remove a directory which looks corrupted

    - by hap497
    Hi, I am using Ubuntu 9.10, and for one directory, ls shows '?' for its user/ownership and every other attribute. How can I remove it?

        -rw-r--r-- 1 hap497 hap497 1822 2010-01-28 22:48 IntSizeHash.h
        d????????? ? ?      ?         ?                ? .libs/
        -rw-r--r-- 1 hap497 hap497  194 2010-02-25 12:12 libwebkit_1_0_la-BitmapImage.lo

    I have tried:

        $ sudo rm -Rf .libs
        rm: cannot remove `.libs': Input/output error

    Thank you for any pointers.
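
    A hedged pointer: all-question-mark stat fields plus an Input/output error on rm usually indicate filesystem corruption rather than a permissions problem, so the usual next step is fsck on the unmounted filesystem (/dev/sdXN is a placeholder for the actual device, found via mount or df):

        sudo umount /dev/sdXN
        sudo fsck -f /dev/sdXN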

    Read the article

  • Software Center does not load

    - by eim
    I'm having problems opening my Software Center: it just shuts off after loading for a few seconds, and I can't even get to its main page. I did try to follow these commands, but to no avail:

        sudo apt-get purge software-center
        sudo apt-get update
        sudo apt-get install software-center

    Instead, I get an error after entering the first command:

        eim@eim-VAIO:~$ sudo apt-get purge software-cente
        Reading package lists... Error!
        E: Encountered a section with no Package: header
        E: Problem with MergeList /var/lib/apt/lists/security.ubuntu.com_ubuntu_dists_precise-security_universe_i18n_Translation-en
        E: The package lists or status file could not be parsed or opened.

    I tried doing this as well (nothing happened):

        cd ~/.cache; rm -r software-center

    And this: adding /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1 to the Startup Applications. Error message:

        eim@eim-VAIO:~$ /usr/lib/policykit-1-gnome/polkit-gnome-authentication-agent-1
        Gtk-Message: Not loading module "atk-bridge": The functionality is provided by GTK natively. Please try to not load it.
        ** (polkit-gnome-authentication-agent-1:3563): WARNING **: Unable to register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject
        Cannot register authentication agent: GDBus.Error:org.freedesktop.PolicyKit1.Error.Failed: An authentication agent already exists for the given subject

    I think I've done all the possible fixes suggested in my research, but I can't seem to get this to work. Can someone please help?

    NOTE: Okay... I just found the solution to my problem. I'll post the answer here since I can't answer my own question yet. Open a terminal:

        sudo rm /var/lib/apt/lists/* -vf
        sudo apt-get update

    Now I can open my Software Center! I found the answer here: How do I fix a "Problem with MergeList" error when trying to do an update?

    Read the article

  • init.d service died

    - by jerluc
    Adapting some code from a Linux forum, I've added a service script to /etc/init.d on my Ubuntu Natty server to start/stop/restart node.js. It literally was working the first day I made it, but then today, after viewing my website this morning, the server threw a 404, and upon further inspection the node.js process was gone. So I went to start the service again, only this time node.js didn't start at all, and ever since I haven't been able to get my service script working. Below is the entire script:

        #!/bin/sh
        #
        # Node Server Startup
        #
        case "$1" in
        start)
                echo -n "Starting node: "
                daemon node /usr/local/www/server.js
                echo
                touch /var/lock/subsys/node
                ;;
        stop)
                echo -n "Shutting down node: "
                killall node
                echo
                rm -f /var/lock/subsys/node
                rm -f /var/run/node.pid
                ;;
        status)
                status node
                ;;
        restart)
                $0 stop
                $0 start
                ;;
        reload)
                echo -n "Reloading node: "
                killall node -HUP
                echo
                ;;
        *)
                echo "Usage: $0 {start|stop|restart|reload|status}"
                exit 1
        esac
        exit 0

    Thanks for any help!
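
    A hedged observation: daemon and status are shell functions from Red Hat's /etc/init.d/functions and don't exist on Ubuntu, so the start branch can fail with "daemon: command not found" while stop/restart appear to work. A sketch of an Ubuntu-style start using start-stop-daemon (the node binary path is an assumption):

        start-stop-daemon --start --background --make-pidfile \
            --pidfile /var/run/node.pid \
            --exec /usr/local/bin/node -- /usr/local/www/server.js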

    Read the article
