  • Installing MySQL 5.1 on OS X 10.7 Lion

    - by xisal
    I am trying to install MySQL 5.1 on Lion. Even after I remove all files associated with MySQL from my machine, the DMG installer still tells me that I have a newer version installed. Has anyone successfully installed MySQL 5.1 on Lion? I found a solution using Homebrew.

    Completely remove MySQL from your system (just in case):

        sudo rm /usr/local/mysql
        sudo rm -rf /usr/local/mysql*
        sudo rm -rf /Library/StartupItems/MySQLCOM
        sudo rm -rf /Library/PreferencePanes/My*

    Edit /etc/hostconfig (e.g. with vim) and remove the line MYSQLCOM=-YES-, then:

        rm -rf ~/Library/PreferencePanes/My*
        sudo rm -rf /Library/Receipts/mysql*
        sudo rm -rf /Library/Receipts/MySQL*
        sudo rm -rf /var/db/receipts/com.mysql.*

    Source: http://stackoverflow.com/questions/1436425/how-do-you-uninstall-mysql-from-mac-os-x

    Install Homebrew:

        /usr/bin/ruby -e "$(curl -fsSL https://raw.github.com/gist/323731)"

    Source: https://github.com/mxcl/homebrew/wiki/installation

    Install MySQL 5.1 via brew:

        brew install mysql51

    If that doesn't work, do this:

        brew install https://raw.github.com/adamv/homebrew-alt/master/versions/mysql51.rb

    Source: http://stackoverflow.com/questions/4359131/brew-install-mysql-on-mac-os/6399627#6399627

    Make MySQL work. Create the mysql.sock file:

        touch /tmp/mysql.sock

    Install the MySQL default tables:

        /usr/local/Cellar/mysql51/5.1.58/bin/mysql_install_db

    ...or your path.
    Source: http://stackoverflow.com/questions/4788381/getting-cant-connect-through-socket-tmp-mysql-when-installing-mysql-on-ma/5140849#5140849
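
    After the tables are installed, a minimal sketch for bringing the server up and sanity-checking it; the Cellar path is the one from my install (adjust to yours) and the password is a placeholder:

        # start the server in the background
        /usr/local/Cellar/mysql51/5.1.58/bin/mysqld_safe &

        # set a root password, then confirm the server answers
        /usr/local/Cellar/mysql51/5.1.58/bin/mysqladmin -u root password 'new-password'
        /usr/local/Cellar/mysql51/5.1.58/bin/mysql -u root -p -e 'SELECT VERSION();'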

  • copSSH and cygwin - Can't use windows style paths

    - by DrFredEdison
    I set up copSSH on one of my Windows servers, and within the copSSH bash shell I can't seem to use Windows-style paths to remove and copy files. If I try, I get the following:

        $ /bin/cp -r C:/Domains/_temp/collage_push/* C:/Domains/collage/
        cygwin warning:
          MS-DOS style path detected: C:/Domains/_temp/collage_push/
          Preferred POSIX equivalent is: /cygdrive/c/Domains/_temp/collage_push/
          CYGWIN environment variable option "nodosfilewarning" turns off this warning.
          Consult the user's guide for more details about POSIX paths:
            http://cygwin.com/cygwin-ug-net/using.html#using-pathnames

    I have created a Windows environment variable CYGWIN set to nodosfilewarning. It has no effect. I added export CYGWIN=nodosfilewarning to my .bashrc, and echo $CYGWIN in my SSH session confirms it is indeed getting set; again, it has no effect. Finally, I noted that when not doing my own export, CYGWIN contains "nontsec binmode" (no quotes), so I tried export CYGWIN="nodosfilewarning nontsec binmode" in my .bashrc, and still no dice. Older versions of copSSH didn't have this issue. How can I actually override this error? I have a lot of scripts that already use Windows-style paths, and I'd rather not change them if possible.
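
    One workaround I've been considering, sketched here in case it helps: wrap the copies in cygpath so scripts can keep feeding in Windows-style paths (cp and cygpath are standard Cygwin tools; the paths are just my example):

        # convert the DOS-style paths to their POSIX equivalents before copying
        src=$(cygpath -u 'C:/Domains/_temp/collage_push')
        dst=$(cygpath -u 'C:/Domains/collage')
        /bin/cp -r "$src"/* "$dst"/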

  • Apache2 config variable is not defined

    - by Kurt Bourbaki
    I installed apache2 on Ubuntu 13.10. If I try to restart it using sudo /etc/init.d/apache2 restart, I get this message:

        AH00558: apache2: Could not reliably determine the server's fully qualified domain name, using 127.0.1.1. Set the 'ServerName' directive globally to suppress this message

    I read that I should edit my httpd.conf file, but since I can't find it in the /etc/apache2/ folder, I tried to locate it using this command:

        /usr/sbin/apache2 -V

    The output I get is this:

        [Fri Nov 29 17:35:43.942472 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOCK_DIR} is not defined
        [Fri Nov 29 17:35:43.942560 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_PID_FILE} is not defined
        [Fri Nov 29 17:35:43.942602 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_USER} is not defined
        [Fri Nov 29 17:35:43.942613 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_RUN_GROUP} is not defined
        [Fri Nov 29 17:35:43.942627 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.947913 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948051 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        [Fri Nov 29 17:35:43.948075 2013] [core:warn] [pid 14655] AH00111: Config variable ${APACHE_LOG_DIR} is not defined
        AH00526: Syntax error on line 74 of /etc/apache2/apache2.conf:
        Invalid Mutex directory in argument file:${APACHE_LOCK_DIR}

    Line 74 of /etc/apache2/apache2.conf is this:

        Mutex file:${APACHE_LOCK_DIR} default

    I had a look at my /etc/apache2/envvars file, but I don't know what to do with it. What should I do?
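
    For reference, on stock Ubuntu those APACHE_* variables are defined in /etc/apache2/envvars, which the init script sources before starting the binary; running the binary directly skips that step. A minimal sketch:

        # load the variables the init script would normally provide,
        # then invoke the binary directly
        source /etc/apache2/envvars
        /usr/sbin/apache2 -V

        # or let the wrapper source them for you
        apache2ctl -V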

  • MacOSX: remove write-protect flag from file in Terminal

    - by Albert
    Hi, I have a file on a FAT32 volume which is shown as write-protected in Finder (so I cannot move it). Removing that write-protect flag in the file's information dialog works just fine. However, I have many more such files, so I want to do it via Terminal. I already tried chmod +w, but that didn't work; ls -la shows the permissions are already fine ("-rwxrwxrwx 1 az az", where az is my user account). Then I thought this might be stored in some xattr properties, but xattr -l didn't give me any entry. Then I thought this might be some ACL setting (whereby I thought those would be stored as xattrs, but let's try it anyway), and some Google searching returned something with chmod -a or chmod -i or so. All these tries only give me "chmod: No ACL currently associated with file" or "chmod: Failed to set ACL on file ...: Operation not permitted". But I definitely have no write access to the file, because I cannot move it or make any other change to it (in Terminal). Removing the write-protect flag in Finder solves that.
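
    In case anyone lands here with the same problem, a sketch of the direction I'm now looking at: Finder's "Locked" checkbox usually corresponds to the BSD user-immutable flag rather than to permissions or ACLs, so chflags (not chmod) would be the tool. The volume path below is just an example:

        # show BSD flags; a locked file lists "uchg"
        ls -lO /Volumes/MYFATDISK/path/to/file

        # clear the user-immutable flag on one file, or recursively
        chflags nouchg /Volumes/MYFATDISK/path/to/file
        chflags -R nouchg /Volumes/MYFATDISK/somedir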

  • On a failing hard drive, I am able to view data but unable to copy it - why?

    - by Tom
    I have a 2.5" external hard drive that is failing. It's not making the expected 'clicking' noise that most hard drives and I am able to view the data, but I am unable to actually retrieve the data. I attempted to use SpinRite in order to access the data on the drive, but it didn't like the external drive. When I view the drive's property page, the drive shows that it's used space is at 100% and that it has 0 bytes available; however, the progress indicator under the drive icon in Windows Explorer shows that it's roughly 50% full (which is correct). When I attempt to run Windows' "Error Checking" tool and attempt to "scan for an attempt recovery of bad sectors," the tool begins to run then immediately closes with no error message. I am able to browse the contents of the drive using Windows Explorer. When I begin to try copying any given single file, the copy process begins, an indicator starts, and then the copy fails with no real error message. The Disk Management page in Computer Management under Control Panel also shows this drive has being 'Healthy.' I dropped the drive off at a data recovery store and they said that "The data seems to be intact, but an internal failure is preventing any information from being retrieved." They offered to provide me references to a data recovery specialist. I've also attempted to run CHKDSK on the drive (with and without arguments) but it returns the following error: The type of the filesystem is RAW. CHKDSK is not available for RAW drives. Before going the route of more expensive data recovery, I'm wondering if these symptoms sound familiar to anyone? Other questions... I'm willing to continue trying tools such as TestDisk and/or PhotoRec (as the majority of the data that I'd like to salvage are photos) but how long I should expect either tool to run given approximately 400GB of data? I'm also comfortable using Linux so I welcome any suggestions for utilities or tools and strategies with which you've had success.

  • Gittornado with Nginx fails to push and pull

    - by Josh Buell
    I'm making a simple website to host git repositories, much like GitHub. I'm using Gittornado to handle git Smart HTTP requests, and it works perfectly locally; I can clone, push, pull, etc. But when I put it behind Nginx, git commands stop working, giving no errors except:

        fatal: The remote end hung up unexpectedly

    I know that it's Nginx causing the trouble, because if I open the port that tornado is running on and try my git commands through that (i.e. "git pull \http://mysite.com:8000/myrepository master" instead of "git pull \http://mysite.com/myrepository master" [backslashes added because Server Fault says I have too many links]), everything works as expected. The Nginx access and error logs don't seem to say anything interesting, so I'm reasonably sure it has something to do with the way Nginx is compressing or chunking the requests/responses, causing git to think there's been an unexpected hangup, but I'm not sure what to do to fix it, since this is my first time with Nginx. My Nginx configuration file is basically a clone of the one found here; I've tried commenting out various likely-seeming options to see if they were causing the problem, but none of them fixed it, so I assume there's some default behavior I need to suppress, I'm just not sure which. Any thoughts on how to fix this? Since it works when not going through Nginx, I'm considering just redirecting git requests to the tornado port itself, but this feels like a hack rather than a clean solution...
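
    In case it helps anyone diagnose this, here's a minimal sketch of the proxy block I'm experimenting with, with the usual suspects disabled explicitly; the upstream port 8000 matches my tornado instance, and the directives marked as guesses are exactly that:

        location / {
            # pass everything straight to the tornado instance
            proxy_pass http://127.0.0.1:8000;
            proxy_redirect off;
            proxy_set_header Host $host;

            # guesses: stop nginx from rewriting the body on either leg
            gzip off;
            proxy_buffering off;
            client_max_body_size 0;   # pushes can be large
        }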

  • Sign multiple domains with single Domain Key (dk-filter)

    - by Lashae
    Motivation: the private shopping website GILT sends periodic update emails from giltgroupe.bounce.ed10.net, yet all of the mails are signed with the domain keys of giltgroupe.com:

        mailed-by giltgroupe.bounce.ed10.net
        signed-by giltgroupe.com

    My story: I can't manage to sign x.com with y.com's domain key using dk-filter under Debian Lenny with postfix. If I init the dk-filter service with the following arguments:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com,y.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then the dk-filter service signs with domain x.com (d=x.com). If I change the daemon arguments as follows:

        DAEMON_OPTS="$DAEMON_OPTS -d x.com -c nofws -k -i /var/dk-filter/internal_hosts -s /etc/dk-keys.conf"

    then emails sent from y.com are not signed at all. The dk-keys.conf file is as follows:

        *:/var/dk-filter/y.com/mail

    I managed to do the same thing with DKIM, and it works perfectly; DK, however, doesn't seem to work. I don't have any problem signing y.com's emails with y.com's key and x.com's emails with x.com's key, which indicates there is no configuration problem. Do you have any experience/advice on how to sign emails from multiple domains with one specific, chosen domain's key?

  • Why won't 2GB of ram across 3 of 4 slots work on my motherboard (max 2GB)?

    - by Andrew
    My desktop is an old home-built machine circa 200[5-6] running Ubuntu 11.10 (though this is not relevant, because I'm reading available RAM from the BIOS loading screen), with an ASUS P5GPL motherboard, not X or X-SE; it has four slots. I'm mainly a laptop person, but keep this around for running a server if needed, backing up to, seeding Ubuntu to people from, etc. It has four (DDR) RAM slots, two black and two blue, in the order black-blue-black-blue (I will call them D, C, B, and A, respectively) with some space in the middle. The blue ones are the closest to the processor.

    I used to have two 512MB chips in the two blue slots. I just got a 1GB chip and plugged it into one of the black slots; my system didn't recognize it. I messed around and discovered that it will not recognize chips in many positions, and I couldn't get it to recognize all three of these chips at the same time. In particular, if I put the 512MB chips in A and B it would only use one, but AC, AD, BD, and CD worked. I didn't try BC, I believe. Only some of these continue to work when I switch the 1GB chip into one of those positions. Can I have some advice as to how to position these chips to get all 2GB used? How about if I get another 1GB chip: where should I put the two? And what about the RAM maximum Crucial quotes? Can I go above 2GB if I get another 1GB chip? Right now, I have a 512MB chip in A and the 1GB chip in C.

    EDIT: I read some other posts and tried dmidecode in Ubuntu to clarify the max memory question (that wasn't a major part anyway). It says my max memory module size is 1024M (OK) and my max memory size is 4096M, which doesn't agree with Crucial OR the ASUS web site (maybe it will only work while in Linux and the BIOS won't OK it?).
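
    For reference, a dmidecode invocation that pulls out the relevant per-slot fields looks like this (needs root; output trimmed by the grep):

        sudo dmidecode -t memory | grep -E 'Maximum Capacity|Locator|Size'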

  • mod_jk problem: Tomcat is probably not started or is listening on the wrong port

    - by Konrad
    Hi, I am running an application on Tomcat 6.0.26. There is Apache in front of the web server, talking to it over mod_jk. Every few hours, when I try to access the application, the browser simply spins and no content is retrieved. No error is reported in the Tomcat logs, but I found errors like these in the mod_jk log:

        [Sun Jul 04 21:19:13 2010][error] ajp_service::jk_ajp_common.c (1758): Error connecting to tomcat. Tomcat is probably not started or is listening on the wrong port. worker=***** failed
        [Sun Jul 04 21:19:13 2010][info] jk_handler::mod_jk.c (1985): Service error=0 for worker==*****
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][error] ajp_get_reply::jk_ajp_common.c (1503): Tomcat is down or refused connection. No response has been sent to the client (yet)
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 45
        [Sun Jul 04 21:19:13 2010][info] ajp_connection_tcp_get_message::jk_ajp_common.c (955): Tomcat has forced a connection close for socket 46
        [Sun Jul 04 21:19:13 2010][info] ajp_service::jk_ajp_common.c (1721): Receiving from tomcat failed, recoverable operation attempt=0

    My worker is configured in the following way:

        worker.admanagonode.port=8009
        worker.admanagonode.host=*****.com
        worker.admanagonode.type=ajp13
        worker.admanagonode.ping_mode=A
        worker.admanagonode.socket_timeout=60
        worker.admanagonode.prepost_timeout=10000
        worker.admanagonode.connect_timeout=10000
        worker.admanagonode.connection_pool_size=200
        worker.admanagonode.connection_pool_timeout=300
        worker.admanagonode.retries=20
        worker.admanagonode.socket_keepalive=1
        worker.admanagonode.cachesize=10
        worker.admanagonode.cache_timeout=600

    Tomcat has the same port number in its Connector configuration:

        <Connector port="8009" protocol="AJP/1.3" redirectPort="8443" address="*********" />

    Does anyone have any idea what I am missing? What can cause such problems? Cheers, Konrad

  • HP LaserJet 2550 has a carousel motor error

    - by Arlen Beiler
    I have a LaserJet 2550, and it worked pretty well for a long time (except for some slowness a while back, spooling I think), but just recently it suddenly quit working. We moved this summer but left it at our other place, and recently, when my Dad went over there to try to print something out, it didn't work. When you turn it on, you hear the fan give a false start (basically a quick pulse), and the carousel goes through its usual routine. Then it starts up in earnest, like it's getting ready to print something. All of a sudden it just stops. Everything stops, and the three lower lights are steady. When I push the Go button, the Go light (bottom of the three) turns off, but the other two stay on. I looked it up on the HP website, and it says it is a carousel motor problem. I called HP, but they said it is out of warranty. I've opened the cover and held the switch with a screwdriver so I could watch it, and it goes through its routine as I described (it doesn't seem to make a difference whether the imaging drum is in or not); then, when it stops, the carousel kind of seems to jump back a little bit. I hope this all makes sense (I know you like details), and hopefully you also know what to do to fix it. Thanks.

  • Symbolic link not allowed or link target not accessible

    - by TK Kocheran
    I can't seem to get a symlink working in my Apache VirtualHost, no matter what I try, and I see the following error in the error log:

        Symbolic link not allowed or link target not accessible: /var/www/carddesigner

    I can browse the actual symlink from Linux with no problems whatsoever:

        $ ls -l /var/www | grep "carddesigner"
        lrwxrwxrwx 1 rfkrocktk rfkrocktk 64 2011-02-28 16:52 carddesigner -> /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/

    Additionally, I've made sure that my VirtualHost allows the FollowSymLinks option. Here is /etc/apache2/sites-enabled/000-localhost:

        <VirtualHost 127.0.0.1:80>
            ServerAdmin ##########
            DocumentRoot /var/www
            <Directory />
                Options FollowSymLinks
                AllowOverride None
            </Directory>
            <Directory /var/www/>
                Options Indexes FollowSymLinks MultiViews
                AllowOverride None
                Order allow,deny
                allow from all
            </Directory>
            ScriptAlias /cgi-bin/ /usr/lib/cgi-bin/
            <Directory "/usr/lib/cgi-bin">
                AllowOverride None
                Options +ExecCGI -MultiViews +SymLinksIfOwnerMatch
                Order allow,deny
                Deny from all
            </Directory>
            ErrorLog /var/log/apache2/error.log
            # Possible values include: debug, info, notice, warn, error, crit,
            # alert, emerg.
            LogLevel debug
            CustomLog /var/log/apache2/access.log combined
            Alias /doc/ "/usr/share/doc/"
            <Directory "/usr/share/doc/">
                Options Indexes MultiViews FollowSymLinks
                AllowOverride None
                Order deny,allow
                Deny from all
                Allow from 127.0.0.0/255.0.0.0 ::1/128
            </Directory>
            RewriteEngine On
            RewriteLog "/var/log/apache2/mod_rewrite.log"
            RewriteLogLevel 9
        </VirtualHost>

    I can't seem to find any other configuration files that override this and/or prevent symlinks from being loaded. Any ideas? Here are my permissions on the actual referenced files:

        $ ls -l ~/Documents/Projects/Work/carddesigner/build/main
        total 12
        drwxrwxrwx 5 rfkrocktk rfkrocktk 4096 2011-02-28 16:11 advanced
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 core
        drwxrwxrwx 2 rfkrocktk rfkrocktk 4096 2011-02-28 16:10 simple

    Seems like the permissions are good to go, right?
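
    One thing worth checking, sketched below as a guess: the "link target not accessible" variant of this error can come from the web server user lacking execute (search) permission on some directory along the target path, even when the final directories are wide open; home directories are often 0700 or 0750. Assuming the server runs as www-data:

        # test traversal as the web server user
        sudo -u www-data ls /home/rfkrocktk/Documents/Projects/Work/carddesigner/build/main/

        # if that fails, grant traversal on the parent directories
        chmod o+x /home/rfkrocktk /home/rfkrocktk/Documents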

  • Toshiba Qosmio: battery stuck at 60%, does not charge, PC can't power up or remain on without AC

    - by Fellknight
    Just like the title says; now let me try to give some more detail about the symptoms. The battery is stuck at 60 percent (68% at the moment of this writing). When hovering over the battery icon in Windows 7 Home Premium x64, it reads "68% available (plugged in, charging)"; there's no x or any sign the OS is reporting an error. No matter how long it is left connected to the AC adapter, the battery doesn't charge; it seems, however, to continue to discharge at its normal rate when disconnected from the laptop (about 1% every 2 weeks). Now, this last symptom is the one I find strangest: it "seems" the laptop somehow isn't recognizing the battery, because even with its remaining charge of 60%(ish), the laptop won't power up, and won't remain on if disconnected from its AC adapter (if it's on and is unplugged, it will immediately turn off). Even with the battery attached correctly in its right place, it is as if I were running the laptop with no battery at all. Toshiba's utilities haven't detected anything strange (or anything at all, for that matter) with the battery or the hardware. The laptop, when in use, is connected 90% of the time to a Belkin surge protector (like my 1TB EHD). The protector is working correctly (green light on) and the 1TB HD is too, so it's very unlikely a power surge damaged it. Thanks in advance.

  • How do I fix a corrupt calendar cache?

    - by Blacklight Shining
    I was tailing /var/log/system.log and noticed a sudden wall of text. Looking closer, I saw it was an error CalendarAgent got while trying to save something:

        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: CoreData: error: (11) Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed'
        Nov 18 11:42:45 rainbow-dash.local CalendarAgent[12321]: Core Data: annotation: -executeRequest: encountered exception = Fatal error. The database at /Users/blackl/Library/Calendars/Calendar Cache is corrupted. SQLite error code:11, 'database disk image is malformed' with userInfo = {
            NSFilePath = "/Users/blackl/Library/Calendars/Calendar Cache";
            NSSQLiteErrorDomain = 11;
        }

    Those 2 messages are repeated several times, followed by:

        Nov 18 11:42:49 rainbow-dash.local CalendarAgent[12321]: [com.apple.calendar.store.log.subscription] [WARNING: CalSubscriptionSession :: persistError :: save failed]

    This entire sequence is repeated many times throughout the log. file said the file in question was a SQLite 3.x database, so I did a bit of searching and came up with a way to check those:

        blackl% cp -i ~/Library/Calendars/Calendar\ Cache /tmp
        blackl% sqlite3 /tmp/Calendar\ Cache
        SQLite version 3.7.12 2012-04-03 19:43:07
        Enter ".help" for instructions
        Enter SQL statements terminated with a ";"
        sqlite> pragma integrity_check ;
        *** in database main ***
        Main freelist: Bad ptr map entry key=863 expected=(2,0) got=(5,21)
        On page 21 at right child: 2nd reference to page 863

    This is followed by a few dozen lines like these:

        rowid <number> missing from index <name>

    and then:

        wrong # of entries in index <name>

    I'm at a bit of a loss as to what to do now; I couldn't find anything on how to fix the errors that I found. Also, it would probably be a good idea to disable CalendarAgent so it doesn't try to use the database while it's being fixed (that's why I copied it to /tmp before running sqlite3 on it). How do I disable CalendarAgent and fix its cache?
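
    A sketch of the standard SQLite salvage route I'm considering, in case someone can confirm it's safe here; the launchd plist path is my guess at where CalendarAgent lives, while the dump/reload part is stock sqlite3:

        # stop the agent so nothing touches the database (plist path is a guess)
        launchctl unload /System/Library/LaunchAgents/com.apple.CalendarAgent.plist

        # dump whatever is readable and rebuild a fresh database from it
        cd ~/Library/Calendars
        sqlite3 "Calendar Cache" .dump > /tmp/cache-dump.sql
        mv "Calendar Cache" "Calendar Cache.corrupt"
        sqlite3 "Calendar Cache" < /tmp/cache-dump.sql

        # restart the agent
        launchctl load /System/Library/LaunchAgents/com.apple.CalendarAgent.plist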

  • Make Nginx fail when SSL certificate not present, instead of hopping to only available certificate

    - by Oli
    I've got a bunch of websites on a server, all hosted through nginx. One site has a certificate, the others do not. Here's an example of two sites, using (fairly accurate) representations of the real configuration:

        server {
            listen 80;
            server_name ssl.example.com;
            return 301 https://ssl.example.com$request_uri;
        }

        server {
            listen 443 ssl;
            server_name ssl.example.com;
        }

        server {
            listen 80;
            server_name nossl.example.com;
        }

    SSL works great on ssl.example.com. If I visit http://nossl.example.com, that works great too, but if I try to visit https://nossl.example.com (note the SSL), I get ugly warnings about the certificate being for ssl.example.com. By the sounds of it, because ssl.example.com is the only site listening on port 443, all requests are being sent to it, regardless of domain name. Is there anything I can do to make sure an nginx server directive only responds to domains it's responsible for?
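
    One direction I've been experimenting with, sketched here: add an explicit catch-all default_server for port 443 that simply drops the connection, so the certificated vhost only ever sees its own name. nginx has to complete the TLS handshake before it can read the requested name, so the catch-all still needs some certificate; the self-signed paths below are placeholders, and return 444 is nginx's close-without-response code:

        server {
            listen 443 ssl default_server;
            server_name _;

            # any cert will do here; clients reaching this block would get a
            # name-mismatch warning anyway (self-signed paths are placeholders)
            ssl_certificate     /etc/nginx/ssl/dummy.crt;
            ssl_certificate_key /etc/nginx/ssl/dummy.key;

            return 444;   # close the connection without a response
        }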

  • "ImportError: No module named flask" - Trouble with nginx + uWSGI + Flask in a virtualenv setup

    - by vjk2005
    I got nginx + uWSGI running on localhost inside a virtualenv with a simple hello-world program, but I get this error when I replace the hello world with a simple Flask app:

        File "./wsgi_configuration_module.py", line 1, in <module>
          from flask import Flask
        ImportError: No module named flask
        unable to load app
        mountpoint

    Here's the Flask app (wsgi_configuration_module.py):

        from flask import Flask

        application = Flask(__name__)

        @application.route("/")
        def hello():
            return "hello world"

        if __name__ == "__main__":
            application.run()

    uWSGI config (app_conf.xml):

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <no-site>true</no-site>
        </uwsgi>

    nginx config:

        server {
            listen 80;
            server_name localhost;

            access_log /srv/www/labs/logs/access.log;
            error_log /srv/www/labs/logs/error.log;

            location / {
                include uwsgi_params;
                uwsgi_pass 127.0.0.1:9001;
            }

            location /static {
                root /srv/www/labs/public_html/static/;
                index index.html index.htm;
            }
        }

    The virtualenv is stored in ~/virtual_env, with Python 2.7 + nginx + uWSGI + Flask installed in a virtualenv called basic. Things I've tried to solve this: set the --home (-H) option to my virtualenv folder ~/virtual_env while running uWSGI. Other info: I have the same setup working outside of a virtualenv. Things go wrong only when I try to replicate the setup inside of a virtualenv. Where have I gone wrong?
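
    For anyone with the same symptom, a sketch of the two things I'd double-check next (both are guesses on my part): point the home option at the specific environment rather than its parent folder, and drop no-site, since site-packages is exactly where the virtualenv keeps Flask:

        <uwsgi>
            <socket>127.0.0.1:9001</socket>
            <chdir>/srv/www/labs/application</chdir>
            <pythonpath>/srv/www</pythonpath>
            <module>wsgi_configuration_module</module>
            <callable>application</callable>
            <!-- the env itself, not its parent ~/virtual_env (path is my guess) -->
            <home>/home/vjk2005/virtual_env/basic</home>
            <!-- no-site removed: it stops uWSGI from reading site-packages -->
        </uwsgi>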

  • How to set up Git on remote instance using keys from local machine?

    - by Lucas
    I have a setup where I can ssh into my remote server (i.e. a Google Compute instance) from my local machine. I used to be able to clone, push, and pull from a repository on my remote instance without adding any keys to my remote instance, nor adding any new keys to my repository online (just the public key from my local machine). I believe the remote instance was using the keys from my local machine to authenticate my git pushes and pulls. However, the system broke when I reinstalled the OS on my local machine. Now, when I try to connect to the GitHub server from my remote instance, I get the following. Cannot clone:

        [lucas@ecoinstance]~/node$ git clone git@github.com:lucasExample/test.git test
        Cloning into 'test'...
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Cannot push:

        [lucas@ecoinstance]~/node/nodetest1$ git status
        # On branch master
        # Your branch is ahead of 'origin/master' by 1 commit.
        #
        # nothing to commit (working directory clean)
        [lucas@ecoinstance]~/node/nodetest1$ git push
        Permission denied (publickey).
        fatal: The remote end hung up unexpectedly

    Additional info:

        [lucas@ecoinstance]~/node/nodetest1$ ssh-add -l
        Could not open a connection to your authentication agent.
        [lucas@ecoinstance]~/.ssh$ ls
        authorized_keys  known_hosts

    As you can see, I have no keys on my remote instance. I have never had keys on the remote, and it would push and pull just fine until I reinstalled my local OS. I can still clone, push, and pull on my local machine; it is just my remote machine that cannot get authentication. My local OS is Ubuntu 14.04 and my remote OS is Debian Wheezy. Any suggestions would be great. I am not sure how to search for this concept of authenticating from a remote instance via my local machine, so any references are appreciated as well.
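
    I suspect the concept I couldn't name is SSH agent forwarding, and a minimal sketch of re-enabling it after an OS reinstall would look like this on the local machine (the host alias and key path are examples):

        # local machine: load the key into the agent, then forward it
        ssh-add ~/.ssh/id_rsa
        ssh -A lucas@ecoinstance

        # or make it persistent in ~/.ssh/config on the local machine:
        #   Host ecoinstance
        #       ForwardAgent yes

        # on the remote, ssh-add -l should then list the local key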

  • Segmentation fault on login to mysql

    - by numberwhun
    Hello everyone! I recently did a fresh install of Ubuntu on my laptop (HP dv7, AMD dual core with 4 gigs RAM). I am working on installing my development environment and tools, and one of the first things I worked on was getting MySQL installed. The following was my configure statement with options:

        ./configure --prefix=/usr/local/mysql --with-big-tables \
            --with-unix-socket-path=/usr/local/mysql/tmp/mysql.sock \
            --with-named-curses-libs=/lib/libncurses.so.5.7

    After I did make; make install, I did the post-configuration, such as setting the root password and installing the mysqld daemon in its rightful place. My issue is when I try to log in to mysql to start using it; the following shows what happens:

        $ mysql -u root -p
        Enter password:
        Welcome to the MySQL monitor.  Commands end with ; or \g.
        Your MySQL connection id is 1
        Server version: 5.1.42 Source distribution

        Segmentation fault

    I have searched Google extensively, I have searched through the MySQL bugs database, and I have yet to find anything that matches my issue. Here are the contents of my my.cnf file, in case you want to see them:

        $ cat /etc/my.cnf
        [mysqld]
        basedir=/usr/local/mysql
        datadir=/usr/local/mysql
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysql.server]
        user=mysql
        #basedir=/var/lib

        [client]
        socket=/usr/local/mysql/tmp/mysql.sock

        [mysqld_safe]
        err-log=/usr/local/mysql/logs/mysqld.log
        pid-file=/var/run/mysqld/mysqld.pid

    I am really hoping that someone here can tell me what has gone wrong with my installation, as I would really love to know. I welcome and look forward to all responses. Thank you in advance! Best regards, Jeff
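
    Since the banner prints before the crash, it looks like the interactive client itself is faulting. A sketch of how I'd try to narrow it down with a backtrace (standard gdb; the binary path follows my --prefix):

        gdb --args /usr/local/mysql/bin/mysql -u root -p
        (gdb) run
        # ...enter the password, wait for the SIGSEGV...
        (gdb) bt

    I half-suspect my --with-named-curses-libs=/lib/libncurses.so.5.7 option, since the crash happens in the line-editing client, but I can't confirm that yet.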

  • Connecting to RDS database from EC2 instance using bind9 CNAME alias

    - by mptre
    I'm trying to get internal DNS up and running on an EC2 instance. The main goal is to be able to define CNAME aliases for other AWS services. For example: instead of using the RDS endpoint, which might change over time, an alias mysql.company.int can be used. I'm using bind9, and here are my config files.

    /etc/bind/named.conf.local:

        zone "company.int" {
                type master;
                file "/etc/bind/db.company.int";
        };

    /etc/bind/db.company.int:

        ; $TTL 3600
        @       IN      SOA     company.int. company.localhost. (
                                20120617        ; Serial
                                 604800         ; Refresh
                                  86400         ; Retry
                                2419200         ; Expire
                                 604800 )       ; Negative Cache TTL
        ;
        @       IN      NS      company.int.
        @       IN      A       127.0.0.1
        @       IN      AAAA    ::1

        ; CNAME
        mysql   IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.

    The dig command assures me my alias is working as expected:

        $ dig mysql.company.int
        ...
        ;; ANSWER SECTION:
        mysql.company.int.      3600    IN      CNAME   xxxx.eu-west-1.rds.amazonaws.com.
        xxxx.eu-west-1.rds.amazonaws.com. 60 IN CNAME   ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com.
        ec2-yyy-yy-yy-yyy.eu-west-1.compute.amazonaws.com. 589575 IN A zzz.zz.zz.zzz
        ...

    As far as I can understand, a reverse zone isn't needed for a simple CNAME alias. However, when I try to connect to MySQL using my newly created alias, the operation gives me a timeout:

        $ mysql -uuser -ppassword -hmysql.company.int
        ERROR 2003 (HY000): Can't connect to MySQL server on 'mysql.company.int' (110)

    Any ideas? Thanks in advance!
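
    Two quick checks I'd run to separate DNS trouble from network trouble, sketched below; error 110 is a plain timeout, which can equally mean a blocked port, and xxxx is the same placeholder as above:

        # 1. does the RDS endpoint work when addressed directly?
        mysql -uuser -ppassword -hxxxx.eu-west-1.rds.amazonaws.com

        # 2. is port 3306 reachable through the alias at all?
        nc -zv mysql.company.int 3306

    If the direct connection times out too, the problem is likely the RDS security group rather than bind.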

  • Dual booting Linux/Win7, Grub refuses to load Win7

    - by JohnB
    I decided to give Linux Mint a try (Ubuntu's interface annoys me), so I installed it with the intention of dual-booting with Windows 7. Installation went fine, but now I can only boot into Linux Mint. Grub lists two Windows 7 menu options, but selecting either of them causes an "unknown file system" error and dumps me into a Grub recovery prompt. There, I have to manually reset the root and prefix options, as they reset to hd0,msdos6 when they should be hd0,msdos5. I ran Boot Repair twice, once to fix grub errors, once to rebuild the MBR, but it didn't fix anything. Here is the log: http://paste.ubuntu.com/1029675/

    fdisk output:

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1   *        2048      206847      102400    7  HPFS/NTFS/exFAT
        /dev/sda2          206848  1486249145   743021149    7  HPFS/NTFS/exFAT
        /dev/sda3      1486249982  1953523711   233636865    5  Extended
        /dev/sda5      1486249984  1945141247   229445632   83  Linux
        /dev/sda6      1945143296  1953523711     4190208   82  Linux swap / Solaris

    grub.cfg:

        ### BEGIN /etc/grub.d/30_os-prober ###
        menuentry "Windows 7 (loader) (on /dev/sda1)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos1)'
            search --no-floppy --fs-uuid --set=root 86184D18184D091F
            chainloader +1
        }
        menuentry "Windows 7 (loader) (on /dev/sda2)" --class windows --class os {
            insmod part_msdos
            insmod ntfs
            set root='(hd0,msdos2)'
            search --no-floppy --fs-uuid --set=root 56D84F84D84F60FB
            chainloader +1
        }
        ### END /etc/grub.d/30_os-prober ###

    I have found a few similar troubleshooting guides so far, but no amount of updating/configuring Grub has been successful. The last resort is, I suppose, to use the W7 recovery disc and start over. Thanks in advance!

    Linux Mint 13 Maya, 64-bit
    Windows 7 Home Edition, 64-bit
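
    A sketch of the next thing I plan to try, on the theory that the prefix baked into the MBR is stale (these are the standard Grub reinstall commands, run from the booted Mint system):

        # reinstall Grub to the MBR so root/prefix point at the right partition,
        # then regenerate the menu
        sudo grub-install /dev/sda
        sudo update-grub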

  • Nginx Server Block Port 8081 Path to Root Folder

    - by Pamela
    I'm trying to password-protect all of port 8081 on my Nginx server. The only thing this port is used for is phpMyAdmin. When I navigate to https://www.example.com:8081, I successfully get the default Nginx welcome page. However, when I try navigating to the phpMyAdmin directory, https://www.example.com:8081/phpmyadmin, I get a "404 Not Found" page. Permissions on my htpasswd file are set to 644. Here is the code for my server block:

        server {
            listen 8081;
            server_name example.com www.example.com;
            root /usr/share/phpmyadmin;
            auth_basic "Restricted Area";
            auth_basic_user_file htpasswd;
        }

    I have also tried entirely commenting out the root directive:

        #root /usr/share/phpmyadmin;

    However, it doesn't make any difference. Is my problem confined to using the incorrect root path? If so, how can I find the root path for phpMyAdmin? If it makes any difference, I'm using Ubuntu 14.04.1 LTS with Nginx 1.4.6 and ISPConfig 3.0.5.4p3.
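
    A sketch of my current theory (unconfirmed): with root /usr/share/phpmyadmin, a request for /phpmyadmin maps to /usr/share/phpmyadmin/phpmyadmin, which doesn't exist; and even with the mapping fixed, nginx won't execute the PHP files without a fastcgi handler. Something along these lines, assuming php5-fpm on its stock Ubuntu socket:

        server {
            listen 8081;
            server_name example.com www.example.com;

            # serve phpMyAdmin at the site root, so /index.php resolves
            root /usr/share/phpmyadmin;
            index index.php;

            auth_basic "Restricted Area";
            auth_basic_user_file /etc/nginx/htpasswd;

            # hand .php files to PHP-FPM (socket path is the Ubuntu 14.04 default)
            location ~ \.php$ {
                include fastcgi_params;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                fastcgi_pass unix:/var/run/php5-fpm.sock;
            }
        }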

  • Follow cursor location from kile to evince.

    - by D Connors
    I know the title is probably not very clear, so I'll try to be as clear as possible here. I'm running Xubuntu on my netbook, and I'm using Kile for my LaTeX editing. Since Kile is native to KDE, I had to manually set it to open PDFs and DVIs in Evince instead of Okular. Now, the last time I played around with LaTeX, I was using TeXnicCenter on Windows, and it had a very neat feature. Whenever I hit "QuickBuild", not only would it open the output .dvi file, but it would also show me exactly the piece of text I was editing. That is, if I were editing line 13 on the 7th page of the document, then when I compiled the .tex file, the DVI viewer would automatically take me to line 13 on the 7th page, so I wouldn't have to scroll all the way down to it every time I compiled. I'm guessing this is a pretty standard feature, and Kile probably supports it. But since I don't know what it's called, I'm trying to be as clear as I can about what I mean. The problem is that this feature is not working for me right now, and I'm guessing it's either because Evince does not support it, or because I have to configure it manually. Which one is it? And how do I configure it manually, if that's the case?
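
    From what I've gathered so far, the feature is called "forward search", and on modern toolchains it rides on SyncTeX data emitted at compile time. A minimal sketch of the compile side (the viewer side depends on Evince's version, which is exactly the part I'm unsure about):

        # build with SyncTeX so a viewer can map source lines to document positions
        pdflatex -synctex=1 document.tex

        # this produces document.synctex.gz, the line-to-page mapping that a
        # SyncTeX-aware viewer reads for forward search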

  • Samba between Ubuntu server 10.10 and Windows Vista, Windows 7

    - by chepukha
    Hi all, I have a Linux box running Ubuntu server 10.10. I have installed Samba on this box and want to share files with my laptops, which run Windows Vista Home and Windows 7 Home. I have been struggling with the setup for almost a month but couldn't get it right. If I try to access the share folder from Windows Vista, I get the message "Windows cannot access \\server_ip_address. Error code: 0x80070035. The network path was not found." If I access it from Windows 7, then after entering a password to log in, I can see the list of shared folders on the Linux box, but if I click on a shared folder, I get the same error message as above. Running tail /var/log/samba/log.windows7-pc, I got the following message:

        [2011/03/16 00:17:41.427238,  0] smbd/service.c:988(make_connection_snum)
          canonicalize_connect_path failed for service sharemedia, path /root/sharemedia

    Here is my setting in smb.conf:

        [global]
        share modes = yes
        netbios name = Samba
        workgroup = WORKGROUP
        wins support = yes
        encrypt passwords = true

        [sharemedia]
        comment = Testing sharing using Samba
        path = /root/sharemedia/
        public = yes
        valid users = samba_usr_name
        ; make sure all files have sensible permissions
        create mask = 0660
        force create mask = 0660
        directory mask = 2770
        force directory mask = 2770
        directory security mask = 0000
        ; normal share parameters
        read only = no
        browseable = yes
        writable = yes
        guest ok = no
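
    A guess I'd want to rule out, sketched below: the log line says smbd cannot canonicalize /root/sharemedia, and /root is normally mode 0700, so the share path may simply be untraversable for the connecting user. Moving the share somewhere world-traversable would confirm it:

        # move the share out of root's home and fix ownership
        sudo mkdir -p /srv/sharemedia
        sudo cp -r /root/sharemedia/* /srv/sharemedia/
        sudo chown -R samba_usr_name:samba_usr_name /srv/sharemedia
        sudo chmod -R 2770 /srv/sharemedia

        # then point smb.conf at the new path and restart Samba
        #   path = /srv/sharemedia
        sudo service samba restart   # or smbd, depending on release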

  • NetBackup prefers "Scratch" tapes over dedicated tapes

    - by wfaulk
    I have a NetBackup 6.0MP7 installation running on Windows Server 2003. It functions as the only Master Server and Media Server. I swap a full set of tapes in and out every week, but leave a set of tapes with their Volume Pool set to "Scratch" in all the time. The weekly tape sets then get rotated back in after a period of time. Largely, this works fine. I seldom actually need the scratch tapes, but every once in a while a backup will run over what I have dedicated to the task.

    However, one week's set of tapes consistently gets declined in favor of the scratch pool. The backup policies are the same for every week, they all have "Policy Volume Pool" set to "NetBackup", and all of the tapes for every week (besides the scratch tapes) have had their pools assigned as "NetBackup", definitely including the week that always gets ignored. That said, it doesn't ignore all of the NetBackup pool tapes for that week. It does usually write to two or three of them, but it writes to something like 20 of the scratch tapes. (I haven't thought to look to see if it's always the same two or three tapes.) And this problem never seems to occur for any other week. It doesn't load the tapes and then reject them; it never seems to try to use them at all. They are not flagged as frozen. They are all active and unassigned when I swap them in.

    The tapes are in a Quantum PX510 tape library. The NetBackup server is attached to the library/robot via Fibre Channel going through an HP-branded Brocade switch. I'm not an expert on NetBackup at all. I don't really even know where to look. Any advice on logs to look at, logging to enable, or really anything at all would be appreciated. I'll keep an eye on the question and update it if anyone needs any more info to help.

  • Launching Installer Via Powershell and WinRM and Nothing Happens

    - by Nick DeMayo
    I'm currently working on a PowerShell script to run some Microsoft hotfix installers remotely on several Windows Server 2008 R2 servers that I manage. Basically, the script copies all the appropriate files up to the server and then runs the installer via Invoke-Command, like so:

        function InstallCU {
            Write-Host "Installing June 2013 CU..."
            Invoke-Command -ComputerName $ServerName -ScriptBlock {
                Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" -ArgumentList "/passive"
            }
        }

    If I run the Start-Process command locally on the server, the installer runs properly. However, when trying to run it remotely, nothing happens (actually, I can see the installer start up in Task Manager, but it closes a couple of seconds later and doesn't run). I've attempted giving Invoke-Command -Credential, I've turned off UAC on the server, and I've ensured that my WinRM settings (running 'winrm quickconfig' and setting TrustedHosts to *) are correct. I've also tried having the Invoke-Command script run a local PowerShell script to run the installer, and changing the argument from '/passive' to 'quiet' (in case it can't remotely launch something that has a UI), but again, no dice. Is there anything else I can try, or am I just not going to be able to do this?
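
    A variant I'd try first, sketched below: have the remote script block wait on the installer and hand back its exit code, since a fire-and-forget Start-Process returns immediately and hides the failure (the flags are standard Start-Process parameters; whether the exit code explains anything is my guess):

        Invoke-Command -ComputerName $ServerName -ScriptBlock {
            # -Wait keeps the remote session alive until the installer finishes;
            # -PassThru exposes the process object so we can read the exit code
            $p = Start-Process "c:\aaa\prjcusp2\ubersrvprj2010-kb2817530-fullfile-x64-glb.exe" `
                -ArgumentList "/passive" -Wait -PassThru
            Write-Host "Installer exit code: $($p.ExitCode)"
        }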

  • Upgrading PEAR from 1.9.0 to 1.9.1 fails

    - by Skelton
    Hi all, I want to install PHPUnit (for PHP 5.3) with MAMP 1.9, and for that I need to upgrade PEAR to version 1.9.1. The currently installed version is 1.9.0. When I try the upgrade, I get the following:

        sudo pear channel-update pear.php.net
        sudo pear upgrade pear
        Could not get contents of package "/Applications/MAMP/bin/php5.3/bin/pear". Invalid tgz file.
        upgrade failed

    When I force the upgrade, it still doesn't work:

        sudo pear upgrade --force PEAR
        downloading PEAR-1.9.1.tgz ...
        Starting to download PEAR-1.9.1.tgz (293,587 bytes)
        .............................................................done: 293,587 bytes
        upgrade ok: channel://pear.php.net/PEAR-1.9.1
        PEAR: Optional feature webinstaller available (PEAR's web-based installer)
        PEAR: Optional feature gtkinstaller available (PEAR's PHP-GTK-based installer)
        PEAR: Optional feature gtk2installer available (PEAR's PHP-GTK2-based installer)
        PEAR: To install optional features use "pear install pear/PEAR#featurename"

        sudo pear -V
        PEAR Version: 1.9.0

    As bindbn suggested, I also tried:

        sudo pear install --offline /Users/tom/Downloads/PEAR-1.9.1.tgz
        Ignoring installed package pear/PEAR
        Nothing to install

        sudo pear upgrade --force --alldeps PEAR
        downloading PEAR-1.9.1.tgz ...
        Starting to download PEAR-1.9.1.tgz (293,587 bytes)
        .............................................................done: 293,587 bytes
        upgrade ok: channel://pear.php.net/PEAR-1.9.1
        PEAR: Optional feature webinstaller available (PEAR's web-based installer)
        PEAR: Optional feature gtkinstaller available (PEAR's PHP-GTK-based installer)
        PEAR: Optional feature gtk2installer available (PEAR's PHP-GTK2-based installer)
        PEAR: To install optional features use "pear install pear/PEAR#featurename"

        pear -V
        PEAR Version: 1.9.0

    I hope someone can figure this out! Thanks!
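
    One thing worth ruling out, sketched below (my own guess): with MAMP there are often two pear binaries on the machine, MAMP's and the system's, so the shell may be running a different one than the one that got upgraded:

        # which pear does the shell actually run?
        which pear

        # ask MAMP's copy directly
        /Applications/MAMP/bin/php5.3/bin/pear -V

        # clear bash's cached command lookup and re-check
        hash -r
        pear -V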
