Search Results

Search found 20946 results on 838 pages for 'at command'.


  • Samba Server Make Multiple User Permissions Profiles

    - by Scriptonaut
    I have a Samba file server running, and I was wondering how I could make multiple user accounts that have different permissions. For example, at the moment I have a user, smbusr, but when I SSH to the share, I can read, write, execute, and even navigate out of the Samba directory and do stuff on the actual computer. This is bad because I want to be able to give out my IP so friends/family can use the server, but I don't want them to be able to do just anything. I want to lock the user into the Samba share directory (and all of its subdirectories). Eventually I would like several profiles, such as smbusr_R, smbusr_RW, smbguest_R, and smbguest_RW. I also have a second question related to this: is SSH the best method to connect from other Unix machines? What about VPN? Or simply mounting like this:

        mount -t ext3 -o user=username //ipaddr/share /mnt/mountpoint

    Is that mounting command the same thing as a VPN? This is really confusing me. Thanks for the help, guys; let me know if you need to see any files or need any more information.
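    A minimal sketch of one common approach, assuming OpenSSH 4.9 or later on the server: give guests an SFTP-only account locked into the share via a Match block in sshd_config. The group name and path here are illustrative assumptions, not taken from the poster's setup, and the chroot target must be root-owned:

        # /etc/ssh/sshd_config -- group and path are assumed
        Match Group smbguests
            ChrootDirectory /srv/samba/share
            ForceCommand internal-sftp
            AllowTcpForwarding no
            X11Forwarding no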

    Read the article

  • Ubuntu Server running VNC

    - by xwapilot
    I have access to four computers: 1 Ubuntu Server desktop (version 10.04), 1 Mac Mini (Snow Leopard), 1 Windows desktop (Windows 7), and 1 Windows laptop (Windows Vista). The first three will always be on the home network. My goal is to SSH from the laptop into the server and be able, through VNC (or another remote desktop program), to control the Windows and Mac computers. The point of this would be slightly heightened network security over using VNC to directly access the Mac or Windows desktop. I have successfully used SSH to connect to the server, but have not been able to get the remote desktop connection working. I would appreciate help doing so. Here's what I've done so far. As per the instructions at http://www.stuartellis.eu/articles/vnc-on-linux/ I installed the following: vnc4server (the main VNC server software), vnc-java (enables access from web browsers with Java support), and xvnc4viewer (a basic VNC viewer). I then set up a password using the vncpasswd command. To attempt to connect to the Mac, I followed directions I found in a thread at superuser.com: I went to "System Preferences > Sharing" and enabled "Screen Sharing". Subsequently, I tried entering the following commands on Ubuntu:

        vncviewer mac_ip_address::5904
        vncviewer mac_ip_address:0
        vncviewer mac_ip_address:1

    They all returned the following:

        VNC Viewer Free Edition 4.1.1 for X - built Apr 9 2010 18:41:55
        Copyright (C) 2002-2005 RealVNC Ltd.
        See http://www.realvnc.com for information on VNC.
        vncviewer: unable to open display ""

    I'm sure I'm missing something important, but I'm not sure what it is. Do I need to have a GUI installed, or did that come with the VNC packages I installed?
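    One hedged pointer: vncviewer is an X client, so "unable to open display" means there is no X display for it to draw on, which is exactly what happens when it runs inside a plain SSH session. Tunneling from the laptop through the server to the Mac's built-in Screen Sharing (VNC on port 5900) sidesteps that entirely; host names below are placeholders:

        # run on the laptop: forward a local port through the Ubuntu server
        # to the Mac's Screen Sharing service
        ssh -L 5901:mac_ip_address:5900 user@ubuntu_server
        # then point a VNC viewer on the laptop at localhost:5901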

    Read the article

  • how to get Geo::Coder::Many with cpan?

    - by mnemonic
    Ubuntu is installed for development of a Perl project.

        aptitude search Geo-Coder
        i   libgeo-coder-googlev3-perl  - Perl module providing access to Google Map

    Aptitude does not list Geo::Coder::Many, and cpan cannot build it:

        sudo cpan Geo::Coder::Many

    Then:

        CPAN: Storable loaded ok (v2.27)
        Going to read '/home/jh/.cpan/Metadata'
          Database was generated on Wed, 16 Oct 2013 06:17:04 GMT
        Running install for module 'Geo::Coder::Many'
        Running make for K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        CPAN: Digest::SHA loaded ok (v5.61)
        CPAN: Compress::Zlib loaded ok (v2.033)
        Checksum for /home/jh/.cpan/sources/authors/id/K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz ok
        CPAN: File::Temp loaded ok (v0.22)
        CPAN: Parse::CPAN::Meta loaded ok (v1.4401)
        CPAN: CPAN::Meta loaded ok (v2.110440)
        CPAN: Module::CoreList loaded ok (v2.49_02)
        CPAN: Module::Build loaded ok (v0.38)
        CPAN.pm: Going to build K/KA/KAORU/Geo-Coder-Many-0.42.tar.gz
        Can't locate Geo/Coder/Many/Google.pm in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        Can't locate Geo/Coder/Many/Google in @INC (@INC contains: /etc/perl /usr/local/lib/perl/5.14.2 /usr/local/share/perl/5.14.2 /usr/lib/perl5 /usr/share/perl5 /usr/lib/perl/5.14 /usr/share/perl/5.14 /usr/local/lib/site_perl .) at /usr/share/perl/5.14/Module/Load.pm line 27.
        BEGIN failed--compilation aborted at Build.PL line 54.
        Warning: No success on command[/usr/bin/perl Build.PL --installdirs site]
        CPAN: YAML loaded ok (v0.77)
        KAORU/Geo-Coder-Many-0.42.tar.gz
        /usr/bin/perl Build.PL --installdirs site -- NOT OK
        Running Build test
          Make had some problems, won't test
        Running Build install
          Make had some problems, won't install
        Could not read metadata file. Falling back to other methods to determine prerequisites

    Any suggestions on how to resolve this issue?
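    A hedged guess at a workaround (the backend module names are assumptions, not taken from the output above): Geo::Coder::Many wraps individual geocoder backends, and its Build.PL probes for them, so installing a backend first and retrying sometimes gets past this failure; forcing the install is a last resort:

        sudo cpan Geo::Coder::Google      # or Geo::Coder::Googlev3; assumed backend
        sudo cpan Geo::Coder::Many
        # last resort if Build.PL still aborts: force past the failing check
        sudo cpan -f Geo::Coder::Many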

    Read the article

  • How can I install Satchmo?

    - by Jonathan Hayward
    I am trying to install Satchmo 0.9 on an Ubuntu 9.10 guest, following the instructions at http://bitbucket.org/chris1610/satchmo/downloads/Satchmo.pdf. I run into difficulties at 2.1.2:

        pip install -r http://bitbucket.org/chris1610/satchmo/raw/tip/scripts/requirements.txt
        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    The first command fails because of a compile error while trying to build PIL. So I ran "aptitude install python-imaging", locally copied the first line's requirements.txt, and removed the line that was unsuccessfully trying to build PIL. The first line then completes without error, as does the second. The next step tells me to change directory to the /path/to/new/store and run:

        python clonesatchmo.py

    A little bit of trouble here; I am told that clonesatchmo.py will be in /bin by now, and it isn't there, but I put some Satchmo stuff under /usr/local, created a symlink in /bin, and ran:

        python /bin/clonesatchmo.py

    This gives:

        jonathan@ubuntu:~/store$ python /bin/clonesatchmo.py
        Creating the Satchmo Application
        Traceback (most recent call last):
          File "/bin/clonesatchmo.py", line 108, in <module>
            create_satchmo_site(opts.site_name)
          File "/bin/clonesatchmo.py", line 47, in create_satchmo_site
            import satchmo_skeleton
        ImportError: No module named satchmo_skeleton

    A find run after apparently checking out the repository reveals that there is no file with a name like satchmo*skeleton* on my system. I thought that bash might be prone to take part of the second pip invocation's URL as the beginning of a comment, so I tried both:

        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9\#egg=satchmo
        pip install -e hg+http://bitbucket.org/chris1610/satchmo/@v0.9#egg=satchmo

    Neither way of doing it seems to take care of the import error mentioned above. How can I get a Satchmo installation under Ubuntu, or at least enough of a Satchmo installation that I am able to start with a skeleton of a store and then flesh it out the way I want? Thanks, Jonathan
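    A hedged guess, not from the Satchmo docs: clonesatchmo.py imports satchmo_skeleton from the satchmo source tree that the editable install created, so pointing PYTHONPATH at that checkout before re-running may get past the ImportError. The src path below is an assumption about where pip put the editable clone:

        # assumed location of the editable checkout created by 'pip install -e'
        export PYTHONPATH="$PYTHONPATH:$HOME/store/src/satchmo"
        python /bin/clonesatchmo.py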

    Read the article

  • Cygwin - Repo with Separate Git/Working Dir Doesn't Work

    - by Kyle Lacy
    Since I've switched to OS X and Vim, I've found it easiest to manage all of my 'dotfiles' (all of my configuration files and miscellaneous scripts) with Git. Having already set up my dotfiles in a repo following this tutorial, I figured it would also be easy enough to migrate all of my settings into my Cygwin setup on my Windows partition. Already having the repo set up on GitHub, I simply cloned the repo and moved all of the files over to my home directory, making it a mirror of my OS X home directory. Unfortunately, I cannot seem to use the actual repo any further within Cygwin. The setup is unusual compared to most git repos, in that the working directory and the git directory are in different locations. Specifically, the working directory is $HOME (/Users/kyle on OS X, /home/kyle in Cygwin), and the git repo is $HOME/.dotfiles.git. So, if I wanted to get the status of the repo, for example, I would type the following command (which I alias to reduce typing, of course):

        git --work-tree=$HOME --git-dir=$HOME/.dotfiles.git status -uno

    While this works fine on OS X, it refuses to work within Cygwin. Regardless of whether or not I use my alias, or whether or not I substitute $HOME by hand, I get the following git error:

        fatal: Not a git repository: /home/Kyle/dotfiles/.git/modules/.build/git

    I don't understand where this error comes from, but the path /home/Kyle/dotfiles was the original location of the git repo when I initially cloned it. Additionally, it's important to note that the repo relies heavily on submodules. If specifics are necessary, the repo in question can be found on GitHub. The commands I ran to set up the repo in Cygwin can also be found in the Readme file.
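    A hedged sketch of one thing to try: the fatal error names the old clone path, which suggests a submodule's .git file still records an absolute gitdir from the original checkout. Re-syncing and re-initializing the submodules from the new location may rewrite those stale paths, depending on the git version (flags as in the poster's alias):

        git --work-tree="$HOME" --git-dir="$HOME/.dotfiles.git" submodule sync
        git --work-tree="$HOME" --git-dir="$HOME/.dotfiles.git" submodule update --init --recursive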

    Read the article

  • Reverse SSH Tunnel

    - by chris
    I am trying to forward web traffic from a remote server to my local machine in order to test out some API integrations (Tropo, PayPal, etc.). Basically, I'm trying to set up something similar to what tunnlr.com provides. I've initiated the SSH tunnel with the command:

        $ ssh -nNT -R :7777:localhost:5000 user@server

    Then I can see that the server is now listening on port 7777:

        user@server:$ netstat -ant | grep 7777
        tcp        0      0 127.0.0.1:7777      0.0.0.0:*      LISTEN
        tcp6       0      0 ::1:7777            :::*           LISTEN

        user@server:$ curl localhost:7777
        Hello from local machine

    So that works fine; the curl request is actually served from the local machine. Now, how do I enable server.com:8888 to be routed through that tunnel? I've tried using nginx like so:

        upstream tunnel {
            server 0.0.0.0:7777;
        }

        server {
            listen 8888;
            server_name server.com;

            location / {
                access_log /var/log/nginx/tunnel-access.log;
                error_log /var/log/nginx/tunnel-error.log;
                proxy_pass http://tunnel;
                proxy_set_header Host $host;
                proxy_set_header X-Real-IP $remote_addr;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
                proxy_redirect off;
            }
        }

    From the nginx error log I see:

        [error] 11389#0: *1 connect() failed (111: Connection refused)

    I've been looking at trying to use iptables, but haven't made any progress. iptables seems like a more elegant solution than running nginx just for tunneling. Any help is greatly appreciated. Thanks!
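    A note on what the netstat output shows: sshd binds remote forwards to the loopback interface by default (hence 127.0.0.1:7777), which is why curl works on the server while nginx's upstream of 0.0.0.0:7777 is refused. A sketch of two fixes, either of which should suffice:

        # fix 1: point nginx at the address the tunnel actually listens on
        upstream tunnel {
            server 127.0.0.1:7777;
        }

        # fix 2: allow the client to bind the forward on all interfaces;
        # in /etc/ssh/sshd_config on the server:
        GatewayPorts clientspecified
        # then re-open the tunnel as:
        ssh -nNT -R 0.0.0.0:7777:localhost:5000 user@server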

    Read the article

  • How to permanently "renice" a process on Mac OS X (or iOS, etc)?

    - by mralexgray
    I use a nice (free) process manager called ATMonitor for Mac OS X that has a lot of cool hidden features... one of which is being able to click on a running process and set the "renice" from +20 (lowest priority) to -20 (highest priority). The best part: it sticks between restarts. So if you want XYZ to get full attention all the time, you set it once and it's done. I want to do the same thing (renice a process) on an iPad running a particular daemon, and I don't know how to set a renice permanently. I can do it once, and it works fine, but the setting is lost on a reboot. I read somewhere: "Now, as for permanently resetting the priority of a process, this can't be done directly. You can fake it, however, with a shell script that starts the app and then immediately renices it. Give that script a '.command' extension and it will be double-clickable in the GUI. Not very elegant, but it gets the job done." But as it says, that's not very elegant, and I don't think this is how ATMonitor does it. I found this thread: http://superuser.com and they gave a way to do it as a launch argument, but no apparent way to save it as a persistent value, for instance if the program wasn't going to be started by launchd. How do I set a permanent renice level, per executable binary, independent of its PID, and of when, how, or why it was launched?
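    If the daemon is (or can be) started by launchd, its plist accepts a Nice key, which reapplies the priority on every launch and so survives reboots. A minimal sketch, assuming a jailbroken device where /Library/LaunchDaemons is writable; the label and program path are placeholders:

        <?xml version="1.0" encoding="UTF-8"?>
        <!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN"
          "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
        <plist version="1.0">
        <dict>
            <key>Label</key>           <string>com.example.mydaemon</string>
            <key>ProgramArguments</key>
            <array><string>/usr/local/bin/mydaemon</string></array>
            <key>RunAtLoad</key>       <true/>
            <key>Nice</key>            <integer>-10</integer>
        </dict>
        </plist>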

    Read the article

  • centos postfix send email problem

    - by Catalin
    I have a big problem with postfix. I can receive mail in Webmin and Outlook, but I can't send (only locally, user to user). Dovecot is working just fine. Sendmail is disabled. Please help me.

        postfix -n
        postfix: invalid option -- n
        postfix: fatal: usage: postfix [-c config_dir] [-Dv] command

        [root@xprivatecams usr]# postconf -n
        alias_database = hash:/etc/aliases
        alias_maps = hash:/etc/aliases
        broken_sasl_auth_clients = yes
        command_directory = /usr/sbin
        config_directory = /etc/postfix
        daemon_directory = /usr/libexec/postfix
        debug_peer_level = 2
        home_mailbox = Maildir/
        html_directory = no
        inet_interfaces = all
        mail_owner = postfix
        mailbox_command =
        mailq_path = /usr/bin/mailq.postfix
        manpage_directory = /usr/share/man
        milter_default_action = accept
        milter_protocol = 2
        mydestination = $myhostname, localhost.$mydomain, localhost
        myhostname = xprivatecams.com
        mynetworks = 94.177.41.0/24, 127.0.0.0/8
        newaliases_path = /usr/bin/newaliases.postfix
        non_smtpd_milters = inet:localhost:20207
        queue_directory = /var/spool/postfix
        readme_directory = /usr/share/doc/postfix-2.3.3/README_FILES
        sample_directory = /usr/share/doc/postfix-2.3.3/samples
        sendmail_path = /usr/sbin/sendmail.postfix
        setgid_group = postdrop
        smtp_tls_note_starttls_offer = yes
        smtp_use_tls = yes
        smtpd_milters = inet:localhost:20207
        smtpd_recipient_restrictions = permit_sasl_authenticated,permit_mynetworks,reject_unauth_destination
        smtpd_sasl_auth_enable = yes
        smtpd_sasl_authenticated_header = yes
        smtpd_sasl_local_domain =
        smtpd_sasl_security_options = noanonymous
        smtpd_tls_CAfile = /etc/postfix/ssl/cacert.pem
        smtpd_tls_auth_only = no
        smtpd_tls_cert_file = /etc/postfix/ssl/smtpd.crt
        smtpd_tls_key_file = /etc/postfix/ssl/smtpd.key
        smtpd_tls_loglevel = 1
        smtpd_tls_received_header = yes
        smtpd_tls_session_cache_timeout = 3600s
        smtpd_use_tls = yes
        tls_random_source = dev:/dev/urandom
        unknown_local_recipient_reject_code = 550

        Jan 18 00:46:17 xprivatecams postfix/postfix-script: starting the Postfix mail system
        Jan 18 00:46:17 xprivatecams postfix/master[15545]: daemon started -- version 2.3.3, configuration /etc/postfix
        Jan 18 00:48:00 xprivatecams postfix/pickup[15546]: EDE7EA8001B: uid=0 from=<[email protected]>
        Jan 18 00:48:00 xprivatecams postfix/cleanup[15817]: EDE7EA8001B: message-id=<[email protected]>
        Jan 18 00:48:00 xprivatecams opendkim[2776]: EDE7EA8001B: DKIM-Signature header added
        Jan 18 00:48:01 xprivatecams postfix/qmgr[15547]: EDE7EA8001B: from=<[email protected]>, size=615, nrcpt=1 (queue active)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: connect to mail.flabell.com[72.47.224.75]: Connection timed out (port 25)
        Jan 18 00:48:31 xprivatecams postfix/smtp[15820]: EDE7EA8001B: to=<[email protected]>, relay=none, delay=30, delays=0.08/0.03/30/0, dsn=4.4.1, status=deferred (connect to mail.flabell.com[72.47.224.75]: Connection timed out)

        telnet 94.177.41.70 25
        Trying 94.177.41.70...
        Connected to xprivatecams.com (94.177.41.70).
        Escape character is '^]'.
        220 xprivatecams.com ESMTP Postfix
        ehlo me
        250-xprivatecams.com
        250-PIPELINING
        250-SIZE 10240000
        250-VRFY
        250-ETRN
        250-STARTTLS
        250-AUTH LOGIN PLAIN
        250-AUTH=LOGIN PLAIN
        250-ENHANCEDSTATUSCODES
        250-8BITMIME
        250 DSN
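    The log already shows the real failure: the outbound connection to the destination's port 25 times out, which usually means the hosting provider or ISP filters direct SMTP. A quick test, plus the usual workaround of relaying through the provider's smarthost (the relay hostname below is a placeholder, not a known value):

        # from the server: if this hangs too, outbound port 25 is filtered upstream
        telnet mail.flabell.com 25

        # /etc/postfix/main.cf -- relay via your provider's smarthost instead
        relayhost = [smtp.your-provider.example]:587
        smtp_sasl_auth_enable = yes
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_sasl_security_options = noanonymous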

    Read the article

  • Can compressing Program Files save space *and* give a significant boost to SSD performance?

    - by Christopher Galpin
    Considering solid-state disk space is still an expensive resource, compressing large folders has appeal. Thanks to VirtualStore, could Program Files be a case where it might even improve performance?

    Discovery. In particular I have been reading: "SSD and NTFS Compression Speed Increase?", "Does NTFS compression slow SSD/flash performance?", and "Will somebody benchmark whole disk compression (HD,SSD) please?" (may have to scroll up). The first link is particularly dreamy, but maybe heads a little too far into the clouds. The third link has this sexy semi-log graph (logarithmic scale!). Quote (with notes): "Using highly compressible data (IOmeter), you get at most a 30x performance increase [for reads], and at least a 49x performance DECREASE [for writes]." Assuming I interpreted and clarified that sentence correctly, this single user's benchmark has me incredibly interested. Although write performance tanks wretchedly, read performance still soars. It gave me an idea.

    Idea: VirtualStore. It so happens that, thanks to sanity-saving security features introduced in Windows Vista, write access to certain folders such as Program Files is virtualized for non-administrator processes. This means that, in normal (non-elevated) usage, a program or game's attempt to write data to its install location in Program Files (which is perhaps a poor location) is redirected to %UserProfile%\AppData\Local\VirtualStore, somewhere entirely different. Thus, to my understanding, writes to Program Files should primarily occur only when installing an application. This makes compressing it not only a huge source of space gain, but also a potential candidate for performance gain.

    Testing. The beginning of this post has me a bit timid: it suggests that benchmarking NTFS compression on a whole drive is difficult because turning it off "doesn't decompress the objects". However, it seems to me the compact command is perfectly capable of doing so for both drives and individual folders. Could it be only marking them for decompression the next time the OS reads from them? I need to find the answer before I begin my own testing.
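    For reference, compact can both compress and decompress a tree in place, so a before/after benchmark need not rely on the compression attribute alone. A sketch using the standard flags (/C compress, /U uncompress, /S recurse into the given directory, /I continue past errors):

        compact /C /S:"C:\Program Files" /I
        compact /U /S:"C:\Program Files" /I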

    Read the article

  • Why RSA SSH authentication only works after console log-in?

    - by smorhaim
    I set up RSA authentication on one of my Ubuntu servers; however, after every restart I can't log in via SSH with RSA. In order to log in with SSH I need to first log in via the console, and then RSA starts working. Why??? Below are my sshd config file as well as output from the ssh -vv command before console log-in and after.

    Before console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7ff8d8c242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7ff8d8c24cf0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/id_rsaadmin
        debug2: we sent a publickey packet, wait for reply
        debug1: Authentications that can continue: publickey
        debug2: we did not send a packet, disable method
        debug1: No more authentication methods to try.
        Permission denied (publickey).

    After console log-in:

        debug1: SSH2_MSG_SERVICE_ACCEPT received
        debug2: key: /Users/smorhaim/.ssh/smorhaim (0x7f91c14242c0)
        debug2: key: /Users/smorhaim/.ssh/id_rsaadmin (0x7f91c1424ae0)
        debug1: Authentications that can continue: publickey
        debug1: Next authentication method: publickey
        debug1: Offering RSA public key: /Users/smorhaim/.ssh/smorhaim
        debug2: we sent a publickey packet, wait for reply
        debug1: Server accepts key: pkalg ssh-rsa blen 279
        debug2: input_userauth_pk_ok: fp b1:d5:90:43:be:43:52:a9:7f:05:c7:04:86:57:b3:ff
        debug1: Authentication succeeded (publickey).
        Authenticated to 10.10.30.151 ([10.10.30.151]:22).

    sshd config:

        Port 22
        Protocol 2
        ListenAddress 10.10.30.151
        UsePrivilegeSeparation yes
        SyslogFacility AUTHPRIV
        PermitRootLogin no
        PasswordAuthentication no
        ChallengeResponseAuthentication no
        UsePAM yes
        X11Forwarding yes
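    Keys that only work after a console login are the classic signature of an encrypted (ecryptfs) home directory: sshd cannot read ~/.ssh/authorized_keys until the first login mounts the home. A minimal sketch of the usual fix, moving the keys outside the home; the username and paths are assumptions:

        # /etc/ssh/sshd_config -- store keys outside the (possibly encrypted) home
        AuthorizedKeysFile /etc/ssh/authorized_keys/%u

        # then, as root on the server:
        mkdir -p /etc/ssh/authorized_keys
        cp /home/smorhaim/.ssh/authorized_keys /etc/ssh/authorized_keys/smorhaim
        chmod 644 /etc/ssh/authorized_keys/smorhaim
        service ssh restart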

    Read the article

  • Downloading a file from the internet with '&' in URL using wget

    - by matt_tm
    Hi, I'm trying to download a file from a URL that looks like this:

        http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf

    Within the browser, this link prompts me to download a file called x.pdf irrespective of what DEF is (but 'x.pdf' is the right content). However, using wget, I get the following:

        >wget.exe http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
        --2011-01-06 07:52:05--  http://pdf.example.com/filehandle.ashx?p1=ABC
        Resolving pdf.example.com... 99.99.99.99
        Connecting to pdf.example.com|99.99.99.99|:80... connected.
        HTTP request sent, awaiting response... 500 Internal Server Error
        2011-01-06 07:52:08 ERROR 500: Internal Server Error.
        'p2' is not recognized as an internal or external command, operable program or batch file.

    This is on a Windows Vista system.

    Edit 1:

        >wget.exe "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"
        SYSTEM_WGETRC = c:/progra~1/wget/etc/wgetrc
        syswgetrc = C:\Program Files\GnuWin32/etc/wgetrc
        --2011-02-06 10:18:31--  http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf
        Resolving pdf.example.com... 99.99.99.99
        Connecting to pdf.example.com|99.99.99.99|:80... connected.
        HTTP request sent, awaiting response... 200 OK
        Length: 4568 (4.5K) [image/JPEG]
        Saving to: `filehandle.ashx@p1=ABC&p2=DEF.pdf'
        100%[======================================>] 4,568       --.-K/s   in 0.1s
        2011-02-06 10:18:33 (30.0 KB/s) - `filehandle.ashx@p1=ABC&p2=DEF.pdf' saved [4568/4568]
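    As the edit shows, quoting the URL stops the shell from splitting it at the &. To also reproduce the browser's x.pdf filename rather than the query-string-derived one, wget's -O flag names the output file explicitly:

        wget -O x.pdf "http://pdf.example.com/filehandle.ashx?p1=ABC&p2=DEF.pdf"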

    Read the article

  • Drupal install and permissions

    - by Richard
    So I'm really stuck on this issue. An install process is complaining about write permissions on settings.php and sites/default/files/. However, I've temporarily set these files to read/write (chmod 777) and changed the owner/group to "apache", as shown below:

        -bash-4.1$ ls -hal
        total 28K
        drwxrwxrwx. 3 richard richard 4.0K Aug 23 15:03 .
        drwxr-xr-x. 4 richard richard 4.0K Aug 18 14:20 ..
        -rwxrwxrwx. 1 apache  apache  9.3K Mar 23 16:34 default.settings.php
        drwxrwxrwx. 2 apache  apache  4.0K Aug 23 15:03 files
        -rwxrwxrwx. 1 apache  apache     0 Aug 23 15:03 settings.php

    However, the install is still complaining about write permissions. I followed steps one and two of INSTALL.txt, but no luck.

    Update: To explore the situation further, I created sites/default/richard.php with the following code:

        <?php
        error_reporting(E_ALL);
        ini_set('display_errors', '1');
        mkdir('files');
        print("<hr> User is ");
        passthru("whoami");
        passthru("pwd");
        ?>

    Run from the command line (as user "richard"), no problem: the folder is created and everything is a go. Run from the web, I get the following:

        Warning: mkdir(): Permission denied in /var/www/html/sites/default/richard.php on line 9
        User is apache
        /var/www/html/sites/default

    Update 2: Safe mode appears to be off:

        -bash-4.1$ cat /etc/php.ini | grep safe | grep mode | grep -v \;
        safe_mode = Off
        safe_mode_gid = Off
        safe_mode_include_dir =
        safe_mode_exec_dir =
        safe_mode_allowed_env_vars = PHP_
        safe_mode_protected_env_vars = LD_LIBRARY_PATH
        sql.safe_mode = Off
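    Permission denied despite 777 and an apache owner is the classic SELinux signature on CentOS. A hedged sketch for checking and relabeling; the rw type name is from newer targeted policies, and older CentOS 5 policies may use httpd_sys_script_rw_t instead:

        getenforce                 # "Enforcing" supports the SELinux theory
        chcon -R -t httpd_sys_rw_content_t /var/www/html/sites/default
        # quick reversible test of the hypothesis (reverts on reboot):
        setenforce 0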

    Read the article

  • ip-up does not trigger when using built-in cisco vpn on mac osx lion

    - by Yasser Sobhdel
    I am using the Cisco VPN client over Lion and I want to make ip-up and ip-down work. There is no sign of any action taken when I connect or disconnect this VPN connection, and I really doubt whether the syntax has changed or whether this kind of connection even triggers ip-up. Logically it must be set over ppp, but when using the following pages and the instructions in them, there is no sign of any output in the log file:

        http://www.macfreek.nl/mindmaster/Modify_PPTP_Routing_Table
        http://www.aidanfindlater.com/use-vpn-for-specific-sites-on-mac-os-x

    Looking for errors (there is no sign of any) using the following page, I couldn't find the /var/log/ppp/vpnd.log log file:

        http://hints.macworld.com/article.php?story=20060616150640529

    The files have also been given full permissions (0755, a+x, or even 777) using the following command:

        sudo chmod a+x /etc/ppp/ip-up

    Any clue on how to debug this would be appreciated. I am totally confused: netstat -rn -f inet doesn't show the routes, and even when the routes are added manually, closing the VPN connection does not run ip-down, so the routes must be deleted manually.

    Read the article

  • ZFS - destroying deduplicated zvol or data set stalls the server. How to recover?

    - by ewwhite
    I'm using NexentaStor on a secondary storage server running on an HP ProLiant DL180 G6 with 12 Midline (7200 RPM) SAS drives. The system has an E5620 CPU and 8GB RAM. There is no ZIL or L2ARC device. Last week, I created a 750GB sparse zvol with dedup and compression enabled to share via iSCSI to a VMware ESX host. I then created a Windows 2008 file server image and copied ~300GB of user data to the VM. Once happy with the system, I moved the virtual machine to an NFS store on the same pool. Once up and running with my VMs on the NFS datastore, I decided to remove the original 750GB zvol. Doing so stalled the system. Access to the Nexenta web interface and NMC halted. I was eventually able to get to a raw shell. Most OS operations were fine, but the system was hanging on the zfs destroy -r vol1/filesystem command. Ugly. I found the following two OpenSolaris bugzilla entries and now understand that the machine will be bricked for an unknown period of time. It's been 14 hours, so I need a plan to be able to regain access to the server:

        http://bugs.opensolaris.org/bugdatabase/view_bug.do?bug_id=6924390
        http://bugs.opensolaris.org/bugdatabase/view_bug.do;jsessionid=593704962bcbe0743d82aa339988?bug_id=6924824

    In the future, I'll probably take the advice given in one of the bugzilla workarounds: "Do not use dedupe, and do not attempt to destroy zvols that had dedupe enabled."

    Update: I had to force the system to power off. Upon reboot, the system stalls at "Importing zfs filesystems". It's been that way for 2 hours now.

    Read the article

  • Insufficient storage available to create shadow copy

    - by Bob.at.SBS
    I have used the "Windows 7 File Recovery" tool under Windows 8 to create system image backups to an external USB hard drive. I built a new Windows 8.1 machine, and I want to create my first system image backup of that machine to the same USB hard drive. The "Windows 7 File Recovery" tool is gone in Windows 8.1, but wbAdmin is alive and well:

        wbAdmin start backup -backupTarget:\\?\Volume{2a2b...994f} -allCritical -quiet

    fails with this text displayed:

        wbadmin 1.0 - Backup command-line tool
        (C) Copyright 2013 Microsoft Corporation. All rights reserved.

        Retrieving volume information...
        This will back up (EFI System Partition),(C:),Recovery (300.00 MB) to \\?\Volume{2a2b1255-3a86-11e3-be86-b8ca3a83994f}.
        The backup operation to F: is starting.
        Creating a shadow copy of the volumes specified for backup...
        Summary of the backup operation:
        The backup operation stopped before completing.
        The backup operation stopped before completing.
        Detailed error: ERROR - A Volume Shadow Copy Service operation error has occurred: (0x8004231f)
        Insufficient storage available to create either the shadow copy storage file or other shadow copy data.

    The EFI System Partition is 100 MB. The Recovery Partition is 300 MB. The C partition is 1.72 TB, NTFS, 218 GB used, 1.51 TB free. The destination drive is 1.81 TB, NTFS, 678 GB used, 1.15 TB free.

    I've fiddled with vssadmin resize shadowstorage, with no change in the error. vssadmin list shadowstorage displays:

        Shadow Copy Storage association
           For volume: (C:)\\?\Volume{37a0...263}\
           Shadow Copy Storage volume: (C:)\\?\Volume{37a0...263}\
           Used Shadow Copy Storage space: 2.39 GB (0%)
           Allocated Shadow Copy Storage space: 2.81 GB (0%)
           Maximum Shadow Copy Storage space: 531 GB (30%)

        Shadow Copy Storage association
           For volume: (F:)\\?\Volume{2a2...94f}\
           Shadow Copy Storage volume: (F:)\\?\Volume{2a2...94f}\
           Used Shadow Copy Storage space: 334 GB (17%)
           Allocated Shadow Copy Storage space: 337 GB (18%)
           Maximum Shadow Copy Storage space: UNBOUNDED (922154758%)

    (Yeah, the "percent calculation" for UNBOUNDED is seriously bogus.) I've run SFC /verifyonly and it seems happy. I've verified that the "Volume Shadow Copy" service starts when I start the backup operation. Any suggestions?
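    One thing worth ruling out, since the UNBOUNDED association on F: looks suspect: re-bound the shadow storage on the target drive explicitly. A sketch using standard vssadmin syntax; the size below is illustrative, not a recommendation:

        vssadmin resize shadowstorage /for=F: /on=F: /maxsize=400GB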

    Read the article

  • LDAP change user pass on client

    - by Sean
    I am trying to allow LDAP users to change their password on client machines. I have tried PAM every which way I can think of, in /etc/ldap.conf and /etc/pam_ldap.conf as well. At this point I'm stuck.

    Client: Ubuntu 11.04
    Server: Debian 6.0

    The current output is this:

        sobrien4@T-E700F-1:~$ passwd
        passwd: Authentication service cannot retrieve authentication info
        passwd: password unchanged

    /var/log/auth.log gives this during the command:

        May  9 10:49:06 T-E700F-1 passwd[18515]: pam_unix(passwd:chauthtok): user "sobrien4" does not exist in /etc/passwd
        May  9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server
        May  9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: reconnecting to LDAP server...
        May  9 10:49:06 T-E700F-1 passwd[18515]: pam_ldap: ldap_simple_bind Can't contact LDAP server

    getent passwd | grep sobrien4 (keeping it short since I'm testing with that account; it actually outputs all LDAP users):

        sobrien4:Ffm1oHzwnLz0U:10000:12001:Sean O'Brien:/home/sobrien4:/bin/bash

    getent group shows all LDAP groups.

    /etc/pam.d/common-password (note this is just the most current; I have tried a lot of different options):

        password required pam_cracklib.so retry=3 minlen=8 difok=3
        password [success=1 default=ignore] pam_unix.so use_authtok md5
        password required pam_ldap.so use_authtok
        password required pam_permit.so

    I popped open Wireshark as well; the server and client are talking. I have password changing working on the server, i.e. on the machine that runs slapd I can log in with the LDAP user and change the passwords. I tried copying the working configs from the server initially, and no dice. I also tried cloning it, changing just the IP and host, and no go. My guess is that the client is not authorized, by IP or hostname, to change a pass. Pertaining to the slapd conf, I saw this in a guide and tried it:

        access to attrs=loginShell,gecos
            by dn="cn=admin,dc=cengineering,dc=etb" write
            by self write
            by * read

        access to *
            by dn="cn=admin,dc=cengineering,dc=etb" write
            by self write
            by * read

    So LDAP seems to be working okay; I just can't change the password.
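    Two things worth checking, sketched under assumptions (the URI is a guess derived from the admin DN above): the "Can't contact LDAP server" lines point at the client's pam_ldap config rather than the PAM stack, and password changes against slapd usually want the password-modify extended operation plus an explicit userPassword ACL:

        # /etc/ldap.conf (and/or /etc/pam_ldap.conf) -- URI is an assumption
        uri ldap://ldap.cengineering.etb/
        base dc=cengineering,dc=etb
        pam_password exop    # RFC 3062 extended op; lets slapd hash the password

        # slapd.conf: userPassword needs its own rule, before the catch-all
        access to attrs=userPassword
            by self write
            by anonymous auth
            by dn="cn=admin,dc=cengineering,dc=etb" write
            by * none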

    Read the article

  • Recovering a VHD after resizing it using VBoxManage

    - by tjrobinson
    I am using VirtualBox 4.1.18 and had a virtual machine running Windows 8 RC with a single VHD, which was initially sized at 25GB (too small!). After installing the OS and some applications I ran out of disk space so shut down the guest and then used this command to resize the VHD to 80GB:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe modifyhd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" --resize 81920
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd"
        UUID:                 03fb26e7-d8bb-49b5-8cc2-1dc350750e64
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24954 MBytes
        Type:                 normal (base)
        Storage format:       VHD
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd

    So far so good. However, when I started the guest up again I got the dreaded:

        Fatal: No bootable medium found! system halted

    If I boot into GParted it shows a single 80GB drive as "unallocated". The option to scan for and attempt to repair a filesystem doesn't find anything. I also tried cloning the VHD into a VDI file, just in case that magically fixed it:

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe clonehd "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vhd" "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi" --format VDI
        0%...10%...20%...30%...40%...50%...60%...70%...80%...90%...100%
        Clone hard disk created in format 'VDI'. UUID: baf0c2c4-362f-4f6c-846a-37bb1ffc027b

        C:\Program Files\Oracle\VirtualBox> .\VBoxManage.exe showhdinfo "D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi"
        UUID:                 baf0c2c4-362f-4f6c-846a-37bb1ffc027b
        Accessible:           yes
        Logical size:         81920 MBytes
        Current size on disk: 24798 MBytes
        Type:                 normal (base)
        Storage format:       VDI
        Format variant:       dynamic default
        In use by VMs:        Windows 8 RC (UUID: a6e6aa57-2d3a-421b-8042-7aae566e3e0b)
        Location:             D:\VirtualBox VMs\Windows 8 RC\Windows 8 RC.vdi

    Is there anything else I could try to recover the drive? No, I don't have a backup :( My host OS is Windows 7 64-bit.

    Read the article

  • Redmine subversion won't ignore certificate error even if told

    - by Pekka
    I have set up a copy of Redmine through the Bitnami Redmine Stack and am having trouble accessing a remote SVN repository through https. The trouble seems to be related to the fact that I don't have a signed certificate, and the certificate provided doesn't match the host name (I am accessing the same server through a number of host names). I am new to Ruby, Mongrel, Rails and Redmine. Following the advice in this forum thread, I changed the path Redmine uses to invoke the svn client in \apps\redmine\lib\redmine\scm\adapters\subversion_adapter.rb from

        SVN_BIN = "svn"

    to

        SVN_BIN = "svn --trust-server-cert --non-interactive --config-dir c:/user/temp"

    I was hoping that the --trust-server-cert option would fix the certificate problem. However, I am still getting the following error message in mongrel.log:

        svn: OPTIONS of 'https://server.xyz:8443/svn/reponame': Server certificate verification failed: certificate issued for a different hostname, issuer is not trusted (https://server.xyz:8443)

    Does anybody know what to do about this? Additional info: I re-started the mongrel service after each change. I am sure the configuration change has taken effect, because subversion has created a full configuration directory in c:\user\temp. I can access the remote repository using command line svn with no problem. The remote repository runs on a Windows box with VisualSVN.
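    Worth knowing: --trust-server-cert only overrides the "unknown CA" failure, not a hostname mismatch, which would explain why the flag changes nothing here. One workaround is to accept the certificate permanently into the same config directory Redmine uses, by running an interactive command once and answering (p)ermanently at the prompt:

        svn ls https://server.xyz:8443/svn/reponame --config-dir c:/user/temp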

    Read the article

  • MySQL remote access not working - Port Close?

    - by dave.zap
    I am not able to get a remote connection established to MySQL. From my PC I am able to telnet to 3306 on the existing server, but when I try the same with the new server it hangs for a few minutes and then returns:

        # mysql -utest3 -h [server ip] -p
        Enter password:
        ERROR 2003 (HY000): Can't connect to MySQL server on '[server ip]' (110)

    Here is some output from the server:

        # nmap -sT -O localhost -p 3306
        ...
        PORT     STATE  SERVICE
        3306/tcp closed mysql
        ...

        # netstat -anp | grep mysql
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  2  [ ACC ]  STREAM  LISTENING  12286  6349/mysqld  /DATA/mysql/mysql.sock

        # netstat -anp | grep 3306
        tcp   0  0 [server ip]:3306  0.0.0.0:*  LISTEN  6349/mysqld
        unix  3  [ ]  STREAM  CONNECTED  3306  1411/audispd

        # lsof -i TCP:3306
        COMMAND  PID   USER   FD   TYPE  DEVICE  SIZE/OFF  NODE  NAME
        mysqld   6349  mysql  10u  IPv4  12285   0t0       TCP   [domain]:mysql (LISTEN)

    I am running OS CentOS release 5.8 (Final) and mysql 5.5.28 (Remi).

    Note: Internal connections to mysql work fine. I have disabled iptables, the box has no other firewall, and it runs Apache on port 80 and SSH with no problem. I had followed this tutorial: http://www.cyberciti.biz/tips/how-do-i-enable-remote-access-to-mysql-database-server.html

    I have bound the IP address in my.cnf:

        user=mysql
        bind-address = [sever ip]
        port=3306

    I even started over by deleting the mysql folder in my datastore and running:

        mysql_install_db --datadir=/DATA/mysql --force

    Then I recreated all the users as per the manual (http://dev.mysql.com/doc/refman/5.5/en/adding-users.html). I have created one test user:

        CREATE USER 'test'@'%' IDENTIFIED BY '[password]';
        GRANT ALL PRIVILEGES ON *.* TO 'test'@'%' WITH GRANT OPTION;
        FLUSH PRIVILEGES;

    So all I can see is that the port is not really open. Where else might I look? Thanks.
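    One reading of the evidence: nmap against localhost probes 127.0.0.1, but mysqld is bound only to the public IP, so "closed" there is expected and proves little. Probing the bound address from both sides separates a local problem from an upstream filter:

        # on the server itself: should show 3306/tcp open
        nmap -sT -p 3306 [server ip]
        # from the remote pc: "filtered" here (rather than "closed") points
        # at an intermediate firewall, e.g. the hosting provider's
        nmap -sT -Pn -p 3306 [server ip]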

    Read the article

  • Adding tables to a herd in bucardo

    - by Joseph the Dreamer
    Forgive my ignorance, I am a JS programmer given the task to do DB replication using bucardo. I understand the concept of how bucardo works, but setting it up is a bit confusing. The set-up is:

        Lubuntu Linux
        Two databases test_master and test_slave, using PostgreSQL
        Each DB has a table named test, containing 2 columns: id (PK) and test (int)
        I use pgAdmin3

    I have already added them to bucardo's list of databases and added all tables:

        Table: public.test  DB: test_slave   PK: id (int4)
        Table: public.test  DB: test_master  PK: id (int4)

    As you see, due to the fact that the DBs are identical, even the schema names are identical. So when I do:

        bucardo_ctl add herd sample_herd public.test

    Ok, so it got added to the herd. But this command gets confused which database public.test comes from. So when I add a sync:

        $ bucardo_ctl add sync sample_sync source=sample_herd targetdb=test_slave type=fullcopy
        Failed to add sync: DBD::Pg::st execute failed: ERROR:  Source and target databases cannot be the same: test_slave at line 118. at line 30.
        CONTEXT:  PL/Perl function "validate_sync" at /usr/bin/bucardo_ctl line 3362.

    What does it mean that source and target cannot be the same? If it got confused as to which public.test to use as source, how do I differentiate?

    Read the article

  • rr.com appended to URL and I don't know why

    - by Steph
    I've been having some pretty bad and intermittent internet connectivity issues. I keep getting timeouts on my browsers, or what appear to be timeouts. In Chrome it's generally error 21; FF just times out. And so on. While this is happening, I can go into the command line and ping or traceroute the same domain and it works fine. I use my cellphone on the same network and it's fine connecting to the same domain. And when it fails, all domains are down in all browsers: Chrome, FF, etc. I also noticed that when I try to connect to 192.168.1.1, Chrome says "did you mean www.192.168.1.1rr.com", but when I enter https://192.168.1.1 it's fine. It only seems to do this in Chrome. This is making me think I have a virus. I did some researching about something called Road Runner, but I can't find any related traces. I also ran a full virus scan using NOD32 (ESET) and found nothing. Any suggestion or help would be greatly appreciated. The intermittent loss of total internet access is really annoying, and I'm worried about why Chrome is trying to append the rr.com domain. I suspect I'm dealing with two different issues, but you never know. Also, for DNS I'm using Google's servers, the famous 8.8.8.8 and 8.8.4.4.
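    The rr.com suffix suggests the machine (or the router, via DHCP) carries a DNS search suffix of rr.com, common on Road Runner/Time Warner connections: when a lookup fails, the suffix gets appended and the ISP's wildcard DNS answers. Quick checks from a command prompt:

        ipconfig /all | findstr /i "suffix"
        :: inspect and clear the resolver cache while the problem is happening
        ipconfig /displaydns | more
        ipconfig /flushdns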

    Read the article

  • emacs, colors in term-mode

    - by valya
    Hello, I use Emacs and I run bash with the M-x term command. There is a problem: colors in the *terminal* buffer aren't the same as in Gnome Terminal, and they are worse (do you need a screen shot?). How can I fix this? This is pretty annoying :-) Thank you!

    Linux Mint 9
    Emacs 23.1.1 x86_64

        __________________
        /home/valentin/Work/buzzoola/buzzoola/test/vagrant
        [.../vagrant]$ echo $TERM
        eterm-color

        __________________
        /home/valentin/Work/buzzoola/buzzoola/test/vagrant
        [.../vagrant]$ echo $LS_COLORS
        rs=0:di=01;34:ln=01;36:hl=44;37:pi=40;33:so=01;35:do=01;35:bd=40;33;01:cd=40;33;01:or=40;31;01:su=37;41:sg=30;43:ca=30;41:tw=30;42:ow=34;42:st=37;44:ex=01;32:*.tar=01;31:*.tgz=01;31:*.arj=01;31:*.taz=01;31:*.lzh=01;31:*.lzma=01;31:*.zip=01;31:*.z=01;31:*.Z=01;31:*.dz=01;31:*.gz=01;31:*.bz2=01;31:*.bz=01;31:*.tbz2=01;31:*.tz=01;31:*.deb=01;31:*.rpm=01;31:*.jar=01;31:*.rar=01;31:*.ace=01;31:*.zoo=01;31:*.cpio=01;31:*.7z=01;31:*.rz=01;31:*.jpg=01;35:*.jpeg=01;35:*.gif=01;35:*.bmp=01;35:*.pbm=01;35:*.pgm=01;35:*.ppm=01;35:*.tga=01;35:*.xbm=01;35:*.xpm=01;35:*.tif=01;35:*.tiff=01;35:*.png=01;35:*.svg=01;35:*.svgz=01;35:*.mng=01;35:*.pcx=01;35:*.mov=01;35:*.mpg=01;35:*.mpeg=01;35:*.m2v=01;35:*.mkv=01;35:*.ogm=01;35:*.mp4=01;35:*.m4v=01;35:*.mp4v=01;35:*.vob=01;35:*.qt=01;35:*.nuv=01;35:*.wmv=01;35:*.asf=01;35:*.rm=01;35:*.rmvb=01;35:*.flc=01;35:*.avi=01;35:*.fli=01;35:*.flv=01;35:*.gl=01;35:*.dl=01;35:*.xcf=01;35:*.xwd=01;35:*.yuv=01;35:*.axv=01;35:*.anx=01;35:*.ogv=01;35:*.ogx=01;35:*.aac=00;36:*.au=00;36:*.flac=00;36:*.mid=00;36:*.midi=00;36:*.mka=00;36:*.mp3=00;36:*.mpc=00;36:*.ogg=00;36:*.ra=00;36:*.wav=00;36:*.axa=00;36:*.oga=00;36:*.spx=00;36:*.xspf=00;36:

    Read the article

  • How to do 'search for keyword in files' in emacs in Windows without cygwin?

    - by Anthony Kong
    I want to search for a keyword, say 'action', in a bunch of files on my Windows PC with Emacs. This is partly because I want to learn more advanced features of Emacs. It is also because the Windows PC is locked down by company policy: I cannot install useful applications like Cygwin at will. So I tried this command:

        M-x rgrep

    It throws the following error message:

        -*- mode: grep; default-directory: "c:/Users/me/Desktop/Project" -*-
        Grep started at Wed Oct 16 18:37:43

        find . -type d "(" -path "*/SCCS" -o -path "*/RCS" -o -path "*/CVS" -o -path "*/MCVS" -o -path "*/.svn" -o -path "*/.git" -o -path "*/.hg" -o -path "*/.bzr" -o -path "*/_MTN" -o -path "*/_darcs" -o -path "*/{arch}" ")" -prune -o "(" -name ".#*" -o -name "*.o" -o -name "*~" -o -name "*.bin" -o -name "*.bak" -o -name "*.obj" -o -name "*.map" -o -name "*.ico" -o -name "*.pif" -o -name "*.lnk" -o -name "*.a" -o -name "*.ln" -o -name "*.blg" -o -name "*.bbl" -o -name "*.dll" -o -name "*.drv" -o -name "*.vxd" -o -name "*.386" -o -name "*.elc" -o -name "*.lof" -o -name "*.glo" -o -name "*.idx" -o -name "*.lot" -o -name "*.fmt" -o -name "*.tfm" -o -name "*.class" -o -name "*.fas" -o -name "*.lib" -o -name "*.mem" -o -name "*.x86f" -o -name "*.sparcf" -o -name "*.dfsl" -o -name "*.pfsl" -o -name "*.d64fsl" -o -name "*.p64fsl" -o -name "*.lx64fsl" -o -name "*.lx32fsl" -o -name "*.dx64fsl" -o -name "*.dx32fsl" -o -name "*.fx64fsl" -o -name "*.fx32fsl" -o -name "*.sx64fsl" -o -name "*.sx32fsl" -o -name "*.wx64fsl" -o -name "*.wx32fsl" -o -name "*.fasl" -o -name "*.ufsl" -o -name "*.fsl" -o -name "*.dxl" -o -name "*.lo" -o -name "*.la" -o -name "*.gmo" -o -name "*.mo" -o -name "*.toc" -o -name "*.aux" -o -name "*.cp" -o -name "*.fn" -o -name "*.ky" -o -name "*.pg" -o -name "*.tp" -o -name "*.vr" -o -name "*.cps" -o -name "*.fns" -o -name "*.kys" -o -name "*.pgs" -o -name "*.tps" -o -name "*.vrs" -o -name "*.pyc" -o -name "*.pyo" ")" -prune -o -type f "(" -iname "*.sh" ")" -exec grep -i -n "action" {} NUL ";"
        FIND: Parameter format not correct

        Grep exited abnormally with code 2 at Wed Oct 16 18:37:44

    I believe rgrep tried to spawn a process and called FIND with all those parameters. However, since this is Windows, the default FIND executable simply does not know how to handle them. What is a better way to search for a keyword in multiple files in Emacs on the Windows platform, without any dependency on external programs?

    Emacs version: 24.2.1

    Read the article

  • mdadm cron job sends email that cron has run

    - by Andrew
    I've got an Ubuntu 8.04 server using mdadm to create several RAID1 arrays. I created /etc/cron.hourly/mdadm as follows:

        #! /bin/sh
        set -e
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot

    (Yes, the array numbers are not sequential, and I'm not using --scan because I have a degraded array that may or may not have been used as swap and that I can't delete, but I think that's a separate issue. If it's the underlying cause of this, I need to fix it.)

    mdadm sends me email (configured in /etc/mdadm/mdadm.conf) on DegradedArray etc. events. This is the desired behaviour. What is not desired, and what I can't work out, is why cron is sending me (relatively pointless) emails, via an alias in /etc/aliases:

        From: root@<hostname> (Cron Daemon)
        To: root@<hostname>
        Subject: Cron <root@<hostname>> cd / && run-parts --report /etc/cron.hourly
        Content-Type: text/plain; charset=ANSI_X3.4-1968
        X-Cron-Env: <SHELL=/bin/sh>
        X-Cron-Env: <PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin>
        X-Cron-Env: <HOME=/root>
        X-Cron-Env: <LOGNAME=root>
        Message-Id: <id@hostname>
        Date: Fri, 7 May 2010 13:17:01 +0930 (CST)

        /etc/cron.hourly/mdadm:
        mdadm: Monitor using email address "<root_alias@domain>" from config file

    I've got a dozen other servers behaving correctly (mdadm sends email, cron doesn't) with identical /etc/crontab files:

        # /etc/crontab: system-wide crontab
        # <snip comments>
        SHELL=/bin/sh
        PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin

        # m h dom mon dow user  command
        17 *    * * *   root    cd / && run-parts --report /etc/cron.hourly
        # <snip anacron jobs>

    Should I simply remove the --report, or is there something else in my cron config somewhere causing this?
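    cron mails anything a job prints, and mdadm --monitor --oneshot emits that informational "Monitor using email address" line on stdout even when all is well. A minimal fix is to redirect stdout while leaving stderr alone, so genuine failures still get mailed:

        #! /bin/sh
        set -e
        # discard the informational banner; errors on stderr still reach cron
        mdadm --monitor /dev/md0 /dev/md3 /dev/md4 --oneshot > /dev/null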

    Read the article
