Search Results

Search found 24211 results on 969 pages for 'shell command'.

  • Tripwire help Required

    - by ramaperumal
    I have created the policy file in Tripwire and also I have created the rules as well mentioned below: /opt/jboss/server/gis/conf -> $(SEC_CONFIG) +aipm +c+g+a+i+s+t+u+l+M; /usr/local/gtech/eseries/ -> $(SEC_CONFIG) +a+c+g+i+s+t+u+l+M ; After running the integrity check the output should be a(Access timestamp),c (Inode timestamp (create/modify),g (File owner's group ID),i (Inode number),s (File size),t (time stamp),u (File owner's user ID),l(File is increasing in size (a "growing file"),M (MD5 hash value). I am getting the output as below: [root@xxsi1242 tripwire]# tripwire --check Parsing policy file: /etc/tripwire/tw.pol *** Processing Unix File System *** Performing integrity check... Wrote report file: /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr Open Source Tripwire(R) 2.4.1 Integrity Check Report Report generated by: root Report created on: Wed 06 Nov 2013 05:38:12 AM EST Database last updated on: Wed 06 Nov 2013 05:31:17 AM EST =============================================================================== Report Summary: =============================================================================== Host name: xxsi1242.gtk.gtech.com Host IP address: 156.24.65.171 Host ID: None Policy file used: /etc/tripwire/tw.pol Configuration file used: /etc/tripwire/tw.cfg Database file used: /var/lib/tripwire/xxsi1242.gtk.gtech.com.twd Command line used: tripwire --check =============================================================================== Rule Summary: =============================================================================== ------------------------------------------------------------------------------- Section: Unix File System ------------------------------------------------------------------------------- Rule Name Severity Level Added Removed Modified --------- -------------- ----- ------- -------- Invariant Directories 66 0 0 0 Temporary directories 33 0 0 0 * Tripwire Data Files 100 0 0 1 Tech Stack 100 0 0 0 User binaries 66 0 0 0 Tripwire Binaries 100 0 0 0 * CLPS bins 100 0 0 2 CLPS Configuration files 100 0 0 0 ESCommon 100 0 0 0 Shell Binaries 100 0 0 0 OS executables and libraries 100 0 0 0 Security Control 100 0 0 0 ESCommon Configuration 100 0 0 0 (/etc/gtech/escommon) Total objects scanned: 12358 Total violations found: 3 =============================================================================== Object Summary: =============================================================================== ------------------------------------------------------------------------------- # Section: Unix File System ------------------------------------------------------------------------------- ------------------------------------------------------------------------------- Rule Name: Tripwire Data Files (/etc/tripwire/tw.pol) Severity Level: 100 ------------------------------------------------------------------------------- Modified: "/etc/tripwire/tw.pol" ------------------------------------------------------------------------------- Rule Name: CLPS bins (/opt/jboss/server) Severity Level: 100 ------------------------------------------------------------------------------- Modified: "/opt/jboss/server/esapps1/data/hypersonic/localDB.lck" "/opt/jboss/server/gis/data/hypersonic/localDB.lck" =============================================================================== Error Report: =============================================================================== No Errors ------------------------------------------------------------------------------- *** 
End of report *** Note: In the output I am only getting the names of the files that were modified. I need the detailed output (which attributes changed) for this, but unfortunately I am not getting what I expected. Please help me to proceed further.
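
    As a next step, a minimal sketch assuming the stock Open Source Tripwire 2.4 tools: tripwire --check only prints the summary shown above, while twprint can re-print the saved report in full. The report file name is the one from the run above; the -t report-level flag is an assumption, so check twprint(8) on your system:

        # Re-print the saved report with maximum detail (report level 4)
        twprint --print-report -t 4 \
            --twrfile /var/lib/tripwire/report/xxsi1242.gtk.gtech.com-20131106-053812.twr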

  • Can't re-mount existing RAID10 on Ubuntu

    - by Zoran
    I saw similar questions, but didn't find a solution to my problem. After a power cut, one of the RAID10 arrays (it had 4 disks) appears to be malfunctioning. I made the array active, but cannot mount it. Always the same error: mount: you must specify the filesystem type So, here is what I get when I type mdadm --detail /dev/md0: /dev/md0: Version : 00.90.03 Creation Time : Tue Sep 1 11:00:40 2009 Raid Level : raid10 Array Size : 1465148928 (1397.27 GiB 1500.31 GB) Used Dev Size : 732574464 (698.64 GiB 750.16 GB) Raid Devices : 4 Total Devices : 3 Preferred Minor : 0 Persistence : Superblock is persistent Update Time : Mon Jun 11 09:54:27 2012 State : clean, degraded Active Devices : 3 Working Devices : 3 Failed Devices : 0 Spare Devices : 0 Layout : near=2, far=1 Chunk Size : 64K UUID : 1a02e789:c34377a1:2e29483d:f114274d Events : 0.166 Number Major Minor RaidDevice State 0 8 16 0 active sync /dev/sdb 1 0 0 1 removed 2 8 48 2 active sync /dev/sdd 3 8 64 3 active sync /dev/sde In /etc/mdadm/mdadm.conf I have: by default, scan all partitions (/proc/partitions) for MD superblocks. alternatively, specify devices to scan, using wildcards if desired. DEVICE partitions auto-create devices with Debian standard permissions CREATE owner=root group=disk mode=0660 auto=yes automatically tag new arrays as belonging to the local system HOMEHOST <system> instruct the monitoring daemon where to send mail alerts MAILADDR root definitions of existing MD arrays ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d This file was auto-generated... So, my question is, how can I mount the md0 array (md1 has been mounted without problems) in order to preserve the existing data?
One more thing, fdisk -l command gives the following result: Disk /dev/sdb: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/sdb1 * 1 88217 708603021 83 Linux /dev/sdb2 88218 91201 23968980 5 Extended /dev/sdb5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdc: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x0008f8ae Device Boot Start End Blocks Id System /dev/sdc1 1 88217 708603021 83 Linux /dev/sdc2 88218 91201 23968980 5 Extended /dev/sdc5 88218 91201 23968948+ 82 Linux swap / Solaris Disk /dev/sdd: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x4be1abdb Device Boot Start End Blocks Id System Disk /dev/sde: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xa4d5632e Device Boot Start End Blocks Id System Disk /dev/sdf: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/sdg: 750.1 GB, 750156374016 bytes 255 heads, 63 sectors/track, 91201 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Disk /dev/md1: 750.1 GB, 750156251136 bytes 2 heads, 4 sectors/track, 183143616 cylinders Units = cylinders of 8 * 512 = 4096 bytes Disk identifier: 0xdacb141c Device Boot Start End Blocks Id System Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: ignoring extra data in partition table 5 Warning: invalid flag 0x7b6e of partition table 5 will be corrected by w(rite) Disk /dev/md0: 1500.3 GB, 1500312502272 bytes 255 heads, 63 sectors/track, 182402 cylinders Units = cylinders of 16065 * 512 = 8225280 bytes Disk identifier: 0x660a6799 Device Boot Start End Blocks Id System /dev/md0p1 * 1 88217 708603021 83 Linux /dev/md0p2 88218 91201 23968980 5 Extended /dev/md0p5 ? 121767 155317 269488144 20 Unknown And one more thing. When using mdadm --examine command, here ise result: mdadm -v --examine --scan /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sd ARRAY /dev/md1 level=raid1 num-devices=2 UUID=9b592be7:c6a2052f:2e29483d:f114274d devices=/dev/sdf ARRAY /dev/md0 level=raid10 num-devices=4 UUID=1a02e789:c34377a1:2e29483d:f114274d devices=/dev/sdb,/dev/sdc,/dev/sdd,/dev/sde md0 has 3 devices which are active. Can someone instruct me how to solve this issue? If it is possible, I would like not to removing faulty HDD. Please advise
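
    A hedged sketch of the usual approach, not something from the original thread: the fdisk output above shows that /dev/md0 itself carries a partition table (/dev/md0p1 is the Linux partition), so the mount has to point at that partition rather than at the raw array, and the filesystem type may have to be given explicitly. The device names and the ext3 guess below are assumptions:

        # Assemble the degraded array from the three healthy members
        mdadm --assemble --run /dev/md0 /dev/sdb /dev/sdd /dev/sde

        # Mount the partition inside the array, not /dev/md0 itself
        mount -t ext3 /dev/md0p1 /mnt/md0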

  • Managing disk in a VM

    - by dst
    I'm replacing my two old rack servers with a new one that has plenty of power to take over the functionality of my current servers. The server is a 4U rack mount with 16 3.5" SAS drive bays, two 2.5" bays, a Xeon E3-1230v2 CPU and 32GB of ECC RAM. My issue is the following. I would like to have a FreeBSD file server with ZFS managing the disks. However, I need other VMs for e.g. a shell/git server, mail server etc. I'm wondering how to deal with the following issues: (1) I want ZFS to fully manage the disks, so I'm not using any hardware RAID. Should I pass the SAS controller directly to the FreeBSD system as PCI passthrough? (2) I want to maximize the reliability of the setup. On what disks should I install the hypervisor and keep the server system disks? For (2) I have the option of having a RAID setup on the SAS controller and using that as a system disk to store the hypervisor as well as the VM images. However, this makes PCI passthrough to the file server impossible. Another option is using the two 2.5" bays. In terms of reliability, how do SSDs compare to e.g. WD RE4 disks? Would it make sense to have two SSDs in software RAID as boot disks for the hypervisor, or should I just go with e.g. WD RE4 disks in a software RAID setup? I also need to think about where to store the mail for the mail server, but this could be done over NFS between the VMs. BTW, this is for home use, so the load is not really that big. What I'm looking for is best practices for splitting up a server.
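
    For illustration only, a minimal sketch of what "ZFS fully managing the disks" can look like once the SAS controller is passed through to the FreeBSD guest; the device names and pool layout are assumptions, not a recommendation for this particular build:

        # Striped pool of mirrored pairs (RAID10-style) built from whole disks
        zpool create tank mirror da0 da1 mirror da2 da3

        # A dataset for the mail spool that the mail VM could mount over NFS
        zfs create -o sharenfs=on tank/mail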

  • Conditionally Rewrite Email Headers (From & Reply-To) Exchange 2010

    - by NorthVandea
    I have a client who maintains Company A (with email addresses %username%@companyA.com) and they own the domain companyB.com; however, there is no "infrastructure" (no Exchange server) set up specifically for companyB.com. My client needs to be able to have the end users within its company (companyA.com) add a specific word or phrase to the Subject (or Body) of an outgoing email (they are only concerned with outgoing; incoming is a non-issue in this case) that triggers the Exchange 2010 servers to rewrite the From and Reply-To headers from [email protected] to [email protected], but this rewrite should ONLY occur if the user places the key word/phrase in the Subject (or Body). I have attempted using Transport Rules and the New-AddressRewriteEntry cmdlet, however each seems to have a limitation. From what I can tell, Transport Rules cannot rewrite the From/Reply-To fields, and New-AddressRewriteEntry cannot be conditionally triggered based on message content. So to recap: User sends email outside the organization: From and Reply-To remain [email protected]. User sends email outside the organization WITH "KeyWord" in the Subject or Body: From and Reply-To change to [email protected] automatically. Does anyone know how this could be done WITHOUT coding a new Mail Agent? I don't have the programming knowledge to code a custom Agent... I can use any function of the Exchange Management Shell or Console. Alternatively, if anyone knows of a simple add-on program that could do this, that would be good too. Any help would be greatly appreciated! Thank you!!!

  • Generating new SID for Windows 7 cloned partition in Linux?

    - by Jack
    So I've read that the proper way to clone a Windows 7 partition is to run a Sysprep after the clone is complete. For MANY reasons, this is not possible the way we are cloning these drives (long story short, the drive should be fully up and running after we clone it, with all the settings already there and requiring no user intervention; and no, not even an answer file would work because the way we customize all the Win7 settings is complex and we do not want the user touching the settings). I understand Microsoft will not support Windows 7 clones if it is not sysprepped and that is fine for us. Acronis recovery tools get around this by ticking an option called "Create new NT signature", which resets the SID and GUID on any restore. Symantec has a tool called Ghostwalker which does the same thing. However, we are looking for a way to do this in Linux because we want to use open source tools to do the imaging (fsarchiver, partclone, etc. basically the same tools Clonezilla uses internally to clone NTFS partitions). The question is, if we clone using these tools in Linux, how would we generate a new SID thereafter (without the use of sysprep)? Is there any way to do it within a Linux environment? The whole image process is automated so if it is a simple command that I can just throw in my shell script, that would be even better. Of course, it would be nice to know if this is even possible. Any ideas? EDIT: Forgot to mention that the target machines we are restoring the image on are EXACTLY the same.
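
    One detail worth separating out: what Acronis calls the "NT signature" is the 4-byte MBR disk signature at offset 440 (0x1B8), and that can be rewritten from Linux with plain dd; it is not the Windows machine SID that sysprep regenerates. A hedged sketch, with the target device left as a placeholder:

        # Give the cloned disk a fresh random MBR disk signature
        # /dev/sdX is a placeholder for the restored target disk
        dd if=/dev/urandom of=/dev/sdX bs=1 count=4 seek=440 conv=notrunc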

  • Why do lighttpd and FastCGI keep sending me the *.fcgi file instead of the website content?

    - by e-satis
    I have the following config: server.modules = ( "mod_compress", "mod_access", "mod_alias", "mod_rewrite", "mod_redirect", "mod_secdownload", "mod_h264_streaming", "mod_flv_streaming", "mod_accesslog", "mod_auth", "mod_status", "mod_expire", "mod_fastcgi" ) [...] fastcgi.server = ( ".php" => (( "bin-path" => "/usr/bin/php-cgi", "socket" => "/var/tmp/lighttpd/php-fastcgi.socket" + var.PID, "max-procs" => 1, "kill-signal" => 9, "idle-timeout" => 10, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "200", "PHP_FCGI_MAX_REQUESTS" => "1000" ), "/pyapps/essai/blondes.fcgi" => ( "main" => ( "socket" => "/var/tmp/lighttpd/django-fastcgi.socket", ), ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" ))) [...] $HTTP["host"] =~ "(^|www\.)cam\.com(\:[0-9]*)?$" { server.document-root = "/home/cam/web/" accesslog.filename = "/home/cam/log/access.log" server.errorlog = "/home/cam/log/error.log" server.follow-symlink = "enable" # files to check for if .../ is requested server.indexfiles = ( "index.php", "index.html", "index.htm", "index.rb") url.rewrite = ( "^(/blondes/.*)$" => "/pyapps/essai/blondes.fcgi$1" ) } I have the following dir tree: /home/tv/web/ `-- pyapps `-- essai `-- __init__.py `-- blondes.fcgi `-- blondes.pid `-- django-fcgi.py `-- manage.py `-- manage.pyo `-- plop `-- settings.py `-- urls.py No error when restarting lighttpd. Then I run: ./manage.py runfcgi method=prefork socket=/var/tmp/lighttpd/django-fastcgi.socket daemonize=false pidfile=blondes.pid No errors either. I then go to http://cam.com/blondes/. It offers me to download an empty file. I checked permissions, but everything is set to the same user and group, and they work for the PHP site. The file /var/tmp/lighttpd/django-fastcgi.socket exists. When I reload the page, I get no output in the error logs, nor from the manage.py runfcgi command. I probably missed something obvious, but what?

  • Jobs with anacron won't run

    - by mareser
    I would like to run two bash scripts daily using anacron in order to backup some data. Unfortunately I can't figure out why said scripts are not executed. For test purposes I let cron execute the scripts and it worked fine. cat /etc/anacrontab gives # /etc/anacrontab: configuration file for anacron # See anacron(8) and anacrontab(5) for details. SHELL=/bin/sh PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin # These replace cron's entries 1 5 cron.daily nice run-parts --report /etc/cron.daily 7 10 cron.weekly nice run-parts --report /etc/cron.weekly @monthly 15 cron.monthly nice run-parts --report /etc/cron.monthly 1 5 TB_bak /bin/sh /home/vasco2/Dropbox/Scripts/backup_TB.sh 1 5 key_db_bak /bin/sh /home/vasco2/Dropbox/Scripts/bak_key_db.sh The output of ls ~/Dropbox/Scripts/ is backup_TB.sh bak_key_db.sh I use Linux Mint Katya. uname -a gives Linux vasco2 2.6.38-8-generic-pae #42-Ubuntu SMP Mon Apr 11 05:17:09 UTC 2011 i686 i686 i386 GNU/Linux I would be very happy if somebody could point me in the right direction on why those scripts won't get executed. P.S.: There is no anacron tag on superuser.com. Maybe somebody wants to change that.
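
    A quick way to see what anacron itself makes of those two entries, sketched with standard Debian/Ubuntu anacron options; keep in mind that the job identifiers (TB_bak, key_db_bak) must be unique and that the timestamp files under /var/spool/anacron decide whether a job is considered due:

        # Run anacron in the foreground, forcing every job regardless of
        # timestamps, with debug output on stderr
        sudo anacron -d -f -n

        # One timestamp file per job identifier should appear here
        ls -l /var/spool/anacron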

  • Bugzilla: No SASL mechanism found

    - by niteshsinha
    I am using Bugzilla on windows 7. I am using the unofficial Bugzilla installer. I followed the steps accordingly and gave valid credentials wherever required. I open Bugzilla and try to create a new account , but i get the following error. Software error: No SASL mechanism found at C:/Program Files/Bugzilla/perl/perl/site/lib/Authen/SASL.pm line 77 at C:/Program Files/Bugzilla/perl/perl/lib/Net/SMTP.pm line 143 i ran checksetup.pl and found that Authen::SASL and SMTP both are available on my machine. The output of checksetup.pl is as follows. * This is Bugzilla 3.6.3 on perl 5.10.1 * Running on Win7 Build 7600 Checking perl modules... Checking for CGI.pm (v3.33) ok: found v3.49 Checking for Digest-SHA (any) ok: found v5.48 Checking for TimeDate (v2.21) ok: found v2.24 Checking for DateTime (v0.28) ok: found v0.53 Checking for DateTime-TimeZone (v0.79) ok: found v1.10 Checking for DBI (v1.41) ok: found v1.609 Checking for Template-Toolkit (v2.22) ok: found v2.22 Checking for Email-Send (v2.16) ok: found v2.198 Checking for Email-MIME (v1.861) ok: found v1.903 Checking for Email-MIME-Encodings (v1.313) ok: found v1.313 Checking for Email-MIME-Modifier (v1.442) ok: found v1.903 Checking for URI (any) ok: found v1.52 Checking available perl DBD modules... Checking for DBD-Pg (v1.45) ok: found v2.16.1 Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for DBD-Oracle (v1.19) not found The following Perl modules are optional: Checking for GD (v1.20) ok: found v2.44 Checking for Chart (v2.1) ok: found v2.4.1 Checking for Template-GD (any) ok: found v1.56 Checking for GDTextUtil (any) ok: found v0.86 Checking for GDGraph (any) ok: found v1.44 Checking for XML-Twig (any) ok: found v3.34 Checking for MIME-tools (v5.406) ok: found v5.427 Checking for libwww-perl (any) ok: found v5.834 Checking for PatchReader (v0.9.4) ok: found v0.9.5 Checking for perl-ldap (any) ok: found v0.39 Checking for Authen-SASL (any) ok: found v2.15 Checking for RadiusPerl (any) ok: found v0.17 Checking for SOAP-Lite (v0.710.06) ok: found v0.710.10 Checking for JSON-RPC (any) ok: found v0.95 Checking for Test-Taint (any) ok: found v1.04 Checking for HTML-Parser (v3.40) ok: found v3.64 Checking for HTML-Scrubber (any) ok: found v0.08 Checking for Email-MIME-Attachment-Stripper (any) ok: found v1.316 Checking for Email-Reply (any) ok: found v1.202 Checking for TheSchwartz (any) not found Checking for Daemon-Generic (any) not found Checking for mod_perl (v1.999022) not found *********************************************************************** * OPTIONAL MODULES * *********************************************************************** * Certain Perl modules are not required by Bugzilla, but by * * installing the latest version you gain access to additional * * features. * * * * The optional modules you do not have installed are listed below, * * with the name of the feature they enable. Below that table are the * * commands to install each module. 
* *********************************************************************** * MODULE NAME * ENABLES FEATURE(S) * *********************************************************************** * TheSchwartz * Mail Queueing * * Daemon-Generic * Mail Queueing * * mod_perl * mod_perl * *********************************************************************** * Note For Windows Users * *********************************************************************** * In order to install the modules listed below, you first have to run * * the following command as an Administrator: * * * * ppm repo add theory58S http://cpan.uwinnipeg.ca/PPMPackages/10xx/ * * * Then you have to do (also as an Administrator): * * * * ppm repo up theory58S * * * * Do that last command over and over until you see "theory58S" at the * * top of the displayed list. * *********************************************************************** COMMANDS TO INSTALL OPTIONAL MODULES: TheSchwartz: ppm install TheSchwartz Daemon-Generic: ppm install Daemon-Generic mod_perl: ppm install mod_perl Reading ./localconfig... Checking for DBD-mysql (v4.00) ok: found v4.012 Checking for MySQL (v4.1.2) ok: found v5.1.44-community-log Removing existing compiled templates... Precompiling templates...done. Now that you have installed Bugzilla, you should visit the 'Parameters' page (linked in the footer of the Administrator account) to ensure it is set up as you wish - this includes setting the 'urlbase' option to the correct URL. Press any key to continue . . . Please tell me what should i do. Please note: i am running behind a corporate proxy , SSL/TLS is not used internally but i am giving the smtpUser and smtpPass also.

  • Security implications of adding www-data to /etc/sudoers to run php-cgi as a different user

    - by BMiner
    What I really want to do is allow the 'www-data' user to have the ability to launch php-cgi as another user. I just want to make sure that I fully understand the security implications. The server should support a shared hosting environment where various (possibly untrusted) users have chroot'ed FTP access to the server to store their HTML and PHP files. Then, since PHP scripts can be malicious and read/write others' files, I'd like to ensure that each users' PHP scripts run with the same user permissions for that user (instead of running as www-data). Long story short, I have added the following line to my /etc/sudoers file, and I wanted to run it past the community as a sanity check: www-data ALL = (%www-data) NOPASSWD: /usr/bin/php-cgi This line should only allow www-data to run a command like this (without a password prompt): sudo -u some_user /usr/bin/php-cgi ...where some_user is a user in the group www-data. What are the security implications of this? This should then allow me to modify my Lighttpd configuration like this: fastcgi.server += ( ".php" => (( "bin-path" => "sudo -u some_user /usr/bin/php-cgi", "socket" => "/tmp/php.socket", "max-procs" => 1, "bin-environment" => ( "PHP_FCGI_CHILDREN" => "4", "PHP_FCGI_MAX_REQUESTS" => "10000" ), "bin-copy-environment" => ( "PATH", "SHELL", "USER" ), "broken-scriptfilename" => "enable" )) ) ...allowing me to spawn new FastCGI server instances for each user.
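
    A small sanity check that may help when reviewing a rule like this, assuming some_user is a member of the www-data group (both names are just the ones used above); sudo -l shows what the rule grants, and -n makes the test fail instead of prompting if NOPASSWD did not take effect:

        # Validate sudoers syntax after editing
        visudo -c

        # As www-data, list the allowed commands and try a passwordless run
        sudo -u www-data sudo -l
        sudo -u www-data sudo -n -u some_user /usr/bin/php-cgi -v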

  • .tex file remains in use by a process when a batch file is triggered by .Rnw Sweave processing

    - by drknexus
    This is a pretty specialized question. I'm using the Eclipse IDE in a Windows XP environment with the StatET plug-in so I can write R code as an R/Sweave document. This produces a .tex file that is then post processed by pdflatex.exe. When I create the file as normal everything works great (except maybe my file named russfnc2.Rnw seems to result in russfnc.pdf even though pdflatex.exe on the console window correctly says that the output is being writen to russfnc2.pdf). The big problem is when I trigger a batch file from within my Rnw code. My goal here is to spawn a side process that waits for the PDF to be made and uploads it to the server. So the Rnw contains: if(file.exists("rsp.finalize.bat")) {system("rsp.finalize.bat",wait=FALSE,invisible=FALSE)} The batch file calls Rterm.exe to run a script: setwd("C:/theprojectdirectory") while(!file.exists("russfnc.pdf")) { Sys.sleep(1) } Sys.sleep(60) At the end of that script, I use a shell call to launch psftp.exe and upload the files. All of this works fine, when I use my Eclipse profile to trigger Sweave... that is unless I have that batch file at the end of the .Rnw. When it is located there, I get the error message pdflatex.exe: Permission denied: c:\thepath\thetexfile.tex. After that, the .tex file (as far as XP is concerned) is in use by another process and I have to reboot in order to delete it (and, of course, the pdf is not made). If I manually trigger the batch file after pdflatex.exe has done its things, everything works fine. How can I make this work correctly using the tools I'm familiar with vis., R and Dos-style batch files? I'm not sure if this is a SuperUser question or a StackOverflow question, so I'm starting here.

  • rsnapshot - not correctly archiving mysql databases

    - by Tiffany Walker
    My rsnapshot configuration: snapshot_root /.snapshots/ backup /home/user localhost/ backup_script /usr/local/backup_mysql.sh localhost/mysql/ Using this file: NOW=$(date +"%m-%d-%Y") # mm-dd-yyyy format FILE="" # used in a loop ### Server Setup ### #* MySQL login user name *# MUSER="root" #* MySQL login PASSWORD name *# MPASS="YOUR-PASSWORD" #* MySQL login HOST name *# MHOST="127.0.0.1" #* MySQL binaries *# MYSQL="$(which mysql)" MYSQLDUMP="$(which mysqldump)" GZIP="$(which gzip)" # get all database listing DBS="$($MYSQL -u $MUSER -h $MHOST -p$MPASS -Bse 'show databases')" # start to dump database one by one for db in $DBS do FILE=$BAK/mysql-$db.$NOW-$(date +"%T").gz # gzip compression for each backup file $MYSQLDUMP --single-transaction -u $MUSER -h $MHOST -p$MPASS $db | $GZIP -9 > $FILE done It dumps the databases under / I then tried with the following: http://bash.cyberciti.biz/backup/rsnapshot-remote-mysql-backup-shell-script/ I got: rsnapshot hourly ---------------------------------------------------------------------------- rsnapshot encountered an error! The program was invoked with these options: /usr/bin/rsnapshot hourly ---------------------------------------------------------------------------- ERROR: backup_script /usr/local/backup_mysql.sh returned 1 WARNING: Rolling back "localhost/mysql/" ls -la /.snapshots/hourly.0/localhost/mysql total 8 drwxr-xr-x 2 root root 4096 Nov 23 17:43 ./ drwxr-xr-x 4 root root 4096 Nov 23 18:20 ../ What exactly am I doing wrong? EDIT: # /usr/local/backup_mysql.sh *** Dumping MySQL Database *** Database> information_schema..cphulkd..eximstats..horde..leechprotect..logaholicDB_ns1..modsec..mysql..performance_schema..roundcube..test.. *** Backup done [ files wrote to /.snapshots/tmp/mysql] *** root@ns1 [~]# ls -la /.snapshots/tmp/mysql total 8040 drwxr-xr-x 2 root root 4096 Nov 23 18:41 ./ drwxr-xr-x 3 root root 4096 Nov 23 18:41 ../ -rw-r--r-- 1 root root 1409 Nov 23 18:41 cphulkd.18_41_45pm.gz -rw-r--r-- 1 root root 113522 Nov 23 18:41 eximstats.18_41_45pm.gz -rw-r--r-- 1 root root 4583 Nov 23 18:41 horde.18_41_45pm.gz -rw-r--r-- 1 root root 71757 Nov 23 18:41 information_schema.18_41_45pm.gz -rw-r--r-- 1 root root 692 Nov 23 18:41 leechprotect.18_41_45pm.gz -rw-r--r-- 1 root root 2603 Nov 23 18:41 logaholicDB_ns1.18_41_45pm.gz -rw-r--r-- 1 root root 745 Nov 23 18:41 modsec.18_41_45pm.gz -rw-r--r-- 1 root root 138928 Nov 23 18:41 mysql.18_41_45pm.gz -rw-r--r-- 1 root root 1831 Nov 23 18:41 performance_schema.18_41_45pm.gz -rw-r--r-- 1 root root 3610 Nov 23 18:41 roundcube.18_41_45pm.gz -rw-r--r-- 1 root root 436 Nov 23 18:41 test.18_41_47pm.gz MySQL Backup seems fine.
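
    A hedged guess at the root cause, with a sketch of the change: $BAK is never defined in the script above, so the dumps land under / instead of in the snapshot. rsnapshot runs a backup_script inside a temporary directory and then copies that directory into localhost/mysql/, so the script should write its files into its current working directory (worth verifying against rsnapshot(1) for your version):

        # Hypothetical adjustment to backup_mysql.sh: define BAK as the
        # directory rsnapshot runs the script in, and signal failure if a
        # dump cannot be written so rsnapshot rolls the snapshot back.
        BAK="$(pwd)"

        for db in $DBS
        do
            FILE="$BAK/mysql-$db.$NOW-$(date +"%T").gz"
            $MYSQLDUMP --single-transaction -u "$MUSER" -h "$MHOST" -p"$MPASS" "$db" | $GZIP -9 > "$FILE" || exit 1
        done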

  • Is it possible to download extremely large files intelligently or in parts via SSH from Linux to Windows?

    - by Andrew
    I have a ~35 GB file on a remote Linux Ubuntu server. Locally, I am running Windows XP, so I am connecting to the remote Linux server using SSH (specifically, I am using a Windows program called SSH Secure Shell Client version 3.3.2). Although my broadband internet connection is quite good, my download of the large file often fails with a Connection Lost error message. I am not sure, but I think that it fails because perhaps my internet connection goes out for a second or two every several hours. Since the file is so large, downloading it may take 4.5 to 5 hours, and perhaps the internet connection goes out for a second or two during that long time. I think this because I have successfully downloaded files of this size using the same internet connection and the same SSH software on the same computer. In other words, sometimes I get lucky and the download finishes before the internet connection drops for a second. Is there any way that I can download the file in an intelligent way -- whereby the operating system or software "knows" where it left off and can resume from the last point if a break in the internet connection occurs? Perhaps it is possible to download the file in sections? Although I do not know if I can conveniently split my file into multiple files -- I think this would be very difficult, since the file is binary and is not human-readable. As it is now, if the entire ~35 GB file download doesn't finish before the break in the connection, then I have to start the download over and overwrite the ~5-20 GB chunk that was downloaded locally so far. Do you have any advice? Thanks.
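
    This is the classic resumable-transfer case; a sketch using rsync over SSH, under the assumption that an rsync build is available on the Windows XP side (for example via Cygwin), which is not something stated in the question:

        # --partial keeps a partially downloaded file, so re-running the same
        # command after a dropped connection resumes instead of starting over;
        # -P is shorthand for --partial --progress
        rsync -P -e ssh user@remote-server:/path/to/bigfile.dat .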

  • Configuring dnsmasq to handle mx records on pfsense 2.0.1

    - by Bob B.
    I know from dnsmasq's man page that it is capable of handling MX records, but I can't seem to find anything in pfsense's web GUI or anywhere online that talks about how to include MX records. I'm running pfsense 2.0.1 on a turnkey hardware appliance. I have root shell access. I would prefer not to move away from using DNS Forwarder/dnsmasq if I can help it. I've searched for a dnsmasq.conf file, but none exists. pfsense handles everything through a centralized XML config file. That file merely designates the dnsmasq section with a tag, then drops immediately into listings for each host override you define. My understanding of pfsense's implementation: In the GUI, you can only define an override using the host, domain, IP and description. In the XML that translates to: <hosts> <host>foo</host> <domain>foo.com</domain> <ip>127.0.0.1</ip> <descr/> </hosts> The above example results in foo.foo.com resolving to 127.0.0.1, for instance. But that's it. There is no ability to select a record type with which to define things like MX. Has anyone had any luck with this? Thank you for any insights you might have.
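
    dnsmasq itself takes MX definitions through its mx-host option; the uncertain part is how to feed a raw option to pfSense 2.0.1's DNS Forwarder, since pfSense generates the dnsmasq invocation itself. So this is a sketch of the dnsmasq side only, not of a supported pfSense setting:

        # dnsmasq syntax: mx-host=<domain>,<mail host>,<preference>
        mx-host=example.com,mail.example.com,10

        # equivalent form on the dnsmasq command line
        dnsmasq --mx-host=example.com,mail.example.com,10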

  • Installation of Access Database Engine 32-bit Fails

    - by Rayzor78
    I am trying to install Access Database Engine 2007 32-bit. The splash screen comes up, you click "Next", then it fails with the error: Installation ended prematurely because of an error You click "OK" and another error window says: The installation of the package failed. The exact same situation happens when I try this with Access Database Engine 2010 32-bit. This production server is running Windows Server 2008 R2 SP1 64-bit. Before I tried installing Access Database Engine 32-bit, I first needed to install Microsoft Office 2010 Pro (Excel and Office Tools only). I tried the 32-bit version on the production server since that is how I set it up in our Dev environment. No luck. The 32-bit version would not install. I did NOT get the error "You have 64-bit components of Office installed". I simply received the exact same two errors listed above. So, I knew that 32-bit/64-bit did not really matter for the Office install for my project, so I installed 64-bit of Office Pro 2010 (Excel and Office Tools only) with no problems. I have a requirement that I need to have the 32-bit version of the Access Database Engine installed. 2007 or 2010, doesn't matter. I cannot use the 64-bit version of Access Database Engine 2010 because my SSIS package will not work with it. I require the 32-bit version. I've tried several steps to try to get it installed. I seriously think that the production server has some aversion to installing 32-bit applications. Here's what I've tried: Tried installing via command line with the "/passive" switch....no luck. Tried numerous iterations to copy the install file to the server (downloaded a fresh copy directly to the server, downloaded a fresh copy to my local machine then copied it over, copied it over zipped up) (http://social.msdn.microsoft.com/Forums/en-US/sqldataaccess/thread/efd3c1f0-07cd-45ca-a626-2dd0c7ac3e9f). Tried Method 1 from this link. Could not try Method 2 because it requires a server reboot and in my environment that requires a long change management process. I've verified that I am a local administrator on the server. (Evidence, I am able to install other applications (office 64-bit per above)). Verified that there are no other office products that should be blocking the installation. The fore-mentioned install of Excel 2010 64-bit was the first Office product installed on the server. VERY ODD: To test my theory that the production server does not like 32-bit applications, I installed something lightweight. I installed 7-Zip 32-bit on the production server with no problems whatsoever. Here are some things that I have not tried (i will follow-up once I do): Method 2 (as mentioned above). Requires a server reboot. Have not verified that the Dev and Production environments are 100% identical. I've done a cursory check and on the surface they appear to be the same (same OS and SP version). I need to do a deeper dive to be 100% certain. I had no problems in my Dev environment. In Dev, I installed Office 2010 Pro 64-bit (Excel & Office Tools only) then via command line w/ the "/passive" switch, installed Access Database Engine 2010 32-bit. I don't know what else to try. Any suggestions or comments?

  • Mongodb Slave replication lag

    - by Leonid Bugaev
    We using standard mongo setup: 2 replicas + 1 arbiter. Both replica servers use same AWS m1.medium with RAID10 EBS. We experiencing constantly growing replication lag on secondary replica. I tried to do full-resync, you can see it on graph, but it helped only for some hours. Our mongo usage is really low now, and frankly i can't understan why it can be. iostat 1 for secondary: avg-cpu: %user %nice %system %iowait %steal %idle 80.39 0.00 2.94 0.00 16.67 0.00 Device: tps kB_read/s kB_wrtn/s kB_read kB_wrtn xvdap1 0.00 0.00 0.00 0 0 xvdb 0.00 0.00 0.00 0 0 xvdfp4 12.75 0.00 189.22 0 193 xvdfp3 12.75 0.00 189.22 0 193 xvdfp2 7.84 0.00 40.20 0 41 xvdfp1 7.84 0.00 40.20 0 41 md127 19.61 0.00 219.61 0 224 mongostat for secondary (why 100% locks? i guess its the problem): insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn set repl time *10 *0 *16 *0 0 2|4 0 30.9g 62.4g 1.65g 0 107 0 0|0 0|0 198b 1k 16 replset-01 SEC 06:55:37 *4 *0 *8 *0 0 12|0 0 30.9g 62.4g 1.65g 0 91.7 0 0|0 0|0 837b 5k 16 replset-01 SEC 06:55:38 *4 *0 *7 *0 0 3|0 0 30.9g 62.4g 1.64g 0 110 0 0|0 0|0 342b 1k 16 replset-01 SEC 06:55:39 *4 *0 *8 *0 0 1|0 0 30.9g 62.4g 1.64g 0 82.9 0 0|0 0|0 62b 1k 16 replset-01 SEC 06:55:40 *3 *0 *7 *0 0 5|0 0 30.9g 62.4g 1.6g 0 75.2 0 0|0 0|0 466b 2k 16 replset-01 SEC 06:55:41 *4 *0 *7 *0 0 1|0 0 30.9g 62.4g 1.64g 0 138 0 0|0 0|1 62b 1k 16 replset-01 SEC 06:55:42 *7 *0 *15 *0 0 3|0 0 30.9g 62.4g 1.64g 0 95.4 0 0|0 0|0 342b 1k 16 replset-01 SEC 06:55:43 *7 *0 *14 *0 0 1|0 0 30.9g 62.4g 1.64g 0 98 0 0|0 0|0 62b 1k 16 replset-01 SEC 06:55:44 *8 *0 *17 *0 0 3|0 0 30.9g 62.4g 1.64g 0 96.3 0 0|0 0|0 342b 1k 16 replset-01 SEC 06:55:45 *7 *0 *14 *0 0 3|0 0 30.9g 62.4g 1.64g 0 96.1 0 0|0 0|0 186b 2k 16 replset-01 SEC 06:55:46 mongostat for primary insert query update delete getmore command flushes mapped vsize res faults locked % idx miss % qr|qw ar|aw netIn netOut conn set repl time 12 30 20 0 0 3 0 30.9g 62.6g 641m 0 0.9 0 0|0 0|0 212k 619k 48 replset-01 M 06:56:41 5 17 10 0 0 2 0 30.9g 62.6g 641m 0 0.5 0 0|0 0|0 159k 429k 48 replset-01 M 06:56:42 9 22 16 0 0 3 0 30.9g 62.6g 642m 0 0.7 0 0|0 0|0 158k 276k 48 replset-01 M 06:56:43 6 18 12 0 0 2 0 30.9g 62.6g 640m 0 0.7 0 0|0 0|0 93k 231k 48 replset-01 M 06:56:44 6 12 8 0 0 3 0 30.9g 62.6g 640m 0 0.3 0 0|0 0|0 80k 125k 48 replset-01 M 06:56:45 8 21 14 0 0 9 0 30.9g 62.6g 641m 0 0.6 0 0|0 0|0 118k 419k 48 replset-01 M 06:56:46 10 34 20 0 0 6 0 30.9g 62.6g 640m 0 1.3 0 0|0 0|0 164k 527k 48 replset-01 M 06:56:47 6 21 13 0 0 2 0 30.9g 62.6g 641m 0 0.7 0 0|0 0|0 111k 477k 48 replset-01 M 06:56:48 8 21 15 0 0 2 0 30.9g 62.6g 641m 0 0.7 0 0|0 0|0 204k 336k 48 replset-01 M 06:56:49 4 12 8 0 0 8 0 30.9g 62.6g 641m 0 0.5 0 0|0 0|0 156k 530k 48 replset-01 M 06:56:50 Mongo version: 2.0.6

  • Sudoers file allow sudo on specific file for active directory group

    - by tubaguy50035
    I have active directory sign in working on an Ubuntu 12.04 box. When the user signs in, I have a script that runs that needs sudo permission (since it modifies the samba config file). How would I specify this in my sudoer's file? I've tried: %DOMAIN\\AD+Programmers ALL=NOPASSWD: /usr/local/bin/createSambaShare.php I've found various resources on the internet stating that this is how it would be done, but I'm not sure that I have the first part right. What are they using as the DOMAIN? The workgroup or the realm? I use Samba + winbind for active directory integration. Here's my smb.conf: [global] security = ads netbios name = hostname realm = COMPANYNAME.COM password server = passwordserver workgroup = COMPANYNAME idmap uid = 1000-10000 idmap gid = 1000-10000 winbind separator = + winbind enum users = no winbind enum groups = no winbind use default domain = yes template homedir = /home/%D/%U template shell = /bin/bash client use spnego = yes domain master = no EDIT: The users that should have access to run that script are all part of the Programmers group which has an Active Directory Domain Services Folder of Company.com/Staff/Security Groups (not sure if that matters or not).
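
    A couple of checks that can take the guesswork out of the group spelling, assuming winbind is resolving the Programmers group; with winbind use default domain = yes the DOMAIN+ prefix is normally dropped, so whatever the tools below report is what belongs in sudoers (the user name is a placeholder):

        # See how the group and one of its members resolve on this box
        getent group | grep -i programmers
        id some_ad_user

        # After editing sudoers, confirm which rules would apply to that user
        visudo -c
        sudo -l -U some_ad_user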

  • How do I set up a shared directory on Linux?

    - by JR Lawhorne
    I have a linux server I want to use to share files between users in my company. Users will access the machine with sftp or secure shell. Here is what I have: cd /home ls -l drwxrwsr-x 5 userA staff 4096 Jul 22 15:00 shared (other listings omitted) I want all users in the staff group to be able to create, modify, delete any file and/or directory in the shared folder. I don't want anyone else to have access to the folder at all. I have: Added the users to the staff group by modifying /etc/group and running grpconv to update /etc/gshadow Run chown -R userA.staff /home/shared Run chmod -R 2775 /home/shared Now, users in the staff group can create new files but they aren't allowed to open the existing files in the directory for edit. I suspect this is due to the primary group id associated with each user which is still set to be the group created when the user was created. So, the PGID of user 'userA' is 'userA'. I'd rather not change the primary group of the users to 'staff' if I can help it but if it is the easiest option, I would consider it. And, a variation on a theme, I'd like to do this same thing with another directory but also allow the apache user to read files in the directory and serve them. What's the best way to set this up?
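
    One way to get group-writable collaboration without changing anyone's primary group is the setgid bit plus default POSIX ACLs; a sketch assuming the filesystem holding /home/shared supports ACLs (the apache line covers the second, web-served directory mentioned above, with its path left as a placeholder):

        chgrp -R staff /home/shared
        chmod -R 2770 /home/shared                  # setgid: new files inherit the staff group
        setfacl -R -m g:staff:rwx /home/shared      # fix up existing files
        setfacl -R -d -m g:staff:rwx /home/shared   # default ACL: new files arrive group-writable

        # Web-served variant: also give apache read/traverse access
        setfacl -R -m u:apache:rX,d:u:apache:rX /home/shared_web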

  • Where is the bare cygwin package list located and how do I manipulate it?

    - by matnagel
    Where is the bare cygwin package list located, and how do I manipulate it programmatically, from a shell, or with some method other than the GUI? I know the GUI (setup.exe), and I'd love to go one or more levels deeper. I can retrieve a list of selected/installed packages ( http://serverfault.com/questions/83456/cygwin-package-management ), but how do I write it back, or to a different machine? What I have in mind is that when I install a new Windows, I would like to start with my package list in text form, and apply or inject it somehow to the new system. Where is it? In the registry? In a binary file? In a local database? Or has anybody done this; is there a tool, a tutorial? The essence of what I want is to manipulate the selected package list with something other than the GUI. It is OK for me to use the GUI for the setup process. So I could imagine manipulating the package list and then running setup.exe and just clicking through it. Note: I do not want to manipulate the list of already installed packages but of packages that "should be installed". But if this is not possible, maybe there is some workaround. E.g. add an outdated version as installed and the installer will then install the new version.
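
    For the workflow described (capture the selection as text, replay it on a fresh Windows install), Cygwin's own tools cover both directions; a sketch, where the exact sed clean-up and the installer file name are assumptions that depend on your setup.exe version:

        # On the old machine: dump installed package names to a text file
        cygcheck -c -d | sed -e '1,2d' -e 's/ .*$//' > packages.txt

        # On the new machine: hand that list to setup.exe unattended
        setup.exe -q -P "$(tr '\n' ',' < packages.txt)"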

  • Notebook Operating System with extreme support cycles/security updates

    - by leto
    Hello there, after reading the announcements about Mac OS X "Lion" and Apple's political decisions, I've had enough. I've been a longtime Apple user since 1992 and have always felt at home there, but I have been trying to switch to an alternative operating system for a year. I've also been working with Unix machines since 2001, so I'm looking at one of the free Unices or a Linux. Since I last looked at the desktop in 2002, much has changed, it seems. So I'm lost once more in the war between desktop environments and software. To be honest: I don't care what its name is, I want to get my job done. Here's what I set as a benchmark for an operating system/software to be considered: has to be at least four years old; has to supply security updates for the current release for at least a year; production-quality stability for the whole desktop environment (!); no commercial stuff that tends to supply me with a privacy-invading App Store or cloud space. So far I'm running a MacBook from 2007, 4 GB memory, 250 GB disk, and I need: IMAPS for mail (in use since 1995), a web browser, a shell, and keeping current with updates/upgrades with no more than 5 minutes spent entering commands (which makes it hard for OpenBSD ;-) ). A desktop file manager would be nice, but is a bonus. What can you suggest as an operating system? The one with the longest support cycles and the best chance to survive the next 10 years will win a new user, who will even send patches when needed :-) Greets

  • First web server questions

    - by Graeme
    Hi there, just looking for some help/suggestions with this. I require my own server for an upcoming project that will be hosting users' websites. I want to build a control panel the user can log into and modify their website, which will be stored elsewhere on the server. This all seems easy enough; it's just managing domains and email that confuses me. What should I look for to manage domain names and point them to the correct website, and what would be the best way to manage email accounts/set up new ones, etc.? I want to avoid cPanel/WHM if possible; I'm looking to control most things through the control panel I will be building. So any suggestions on this would be useful as well, as I will want to add email accounts through PHP (this can be done using a shell, I assume?). I will also want to measure the bandwidth used by the websites contained in each user's directory; any suggestions on making this possible? I'm really looking for suggestions on what software to use to set this up; any advice would be really helpful! Thanks, Graeme
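
    On the bandwidth question only, a rough sketch: if each user's site writes its own access log, summing the response-size field gives bytes served per site. The field position assumes the Apache/nginx combined log format, and the log path is a placeholder:

        # Bytes sent for one user's site in the current log (combined log
        # format: field 10 is the response size)
        awk '{ sum += $10 } END { printf "%.1f MiB\n", sum/1048576 }' /var/log/apache2/usersite-access.log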

  • Samba joined to AD cannot see users in the security tab on the client

    - by Jonathan
    I've got samba joined via kerberos and winbindd to our AD network and user authentication and everything else is working great. However when I try to add users/groups to file permissions it tells me they are not found. All the users groups show up fine with getent so I'm not sure why they are not showing up. Here is my smb.conf and I would much appreciate any help with this. #GLOBAL PARAMETERS [global] socket options = TCP_NODELAY IPTOS_LOWDELAY SO_KEEPALIVE SO_RCVBUF=11264 SO_SNDBUF=11264 workgroup = [hidden] realm = [hidden] preferred master = no server string = xerxes web/file server security = ADS encrypt passwords = yes log level = 3 log file = /var/log/samba/%m max log size = 50 printcap name = cups printing = cups winbind enum users = Yes winbind enum groups = Yes winbind use default domain = Yes winbind nested groups = Yes winbind separator = + winbind refresh tickets = yes idmap uid = 1600-20000 idmap gid = 1600-20000 template primary group = "Domain Users" template shell = /bin/bash kerberos method = system keytab nt acl support = yes [homes] comment = Home Direcotries valid users = %S read only = No browseable = No create mask = 0770 directory mask = 0770 force create mode = 0660 force directory mode = 2770 inherit owner = no [test] comment = Test path=/mnt/test writeable=yes valid users = %s create mask = 0770 directory mask = 0770 force create mode = 0660 force directory mode = 2770 inherit owner = no [printers] comment = All Printers path = /var/spool/cups browseable = no printable = yes
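
    A few checks that usually show whether this is a name-resolution problem or an idmap problem on the Samba side, all standard winbind/net tools (the user name is a placeholder):

        # Is the machine account still healthy?
        net ads testjoin

        # Can winbind enumerate and translate the objects the security tab asks for?
        wbinfo -u                      # users
        wbinfo -g                      # groups
        wbinfo -n 'Domain Users'       # name -> SID
        getent passwd someuser         # name -> uid mapping via idmap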

  • Issues with VSFTPD / FTP on Linux Ubuntu server - Steps for Troubleshooting?

    - by jnolte
    I am dealing with an issue I am unclear on how to resolve and have been pulling my hair out for some time. I have been trying to configure an FTP user using the following (we use this same documentation on all servers) Install FTP Server apt-get install vsftpd Enable local_enable and write_enable to YES and anonymous user to NO in /etc/vsftpd.conf restart - service vsftpd restart - to allow changes to take place Add WordPress User for FTP access in WP Admin Create a fake shell for the user add "usr/sbin/nologin" to the bottom of the /etc/shells file Add a FTP user account useradd username -d /var/www/ -s /usr/sbin/nologin passwd username add these lines to the bottom of /etc/vsftpd.conf - userlist_file=/etc/vsftpd.userlist - userlist_enable=YES - userlist_deny=NO Add username to the list at top of /etc/vsftpd.userlist restart vsftpd "service vsftpd restart" make sure firewall is open for ftp "ufw allow ftp" allow modify the /var/www directory for username "chown -R /var/www I have also went through everything listed on this post and no luck. I am getting connection refused. Sorry for the poor text formatting above. I think you get the idea. This is something we do over and over and for some reason it is not cooperating here. Setup is Ubuntu 12.04LTS and VSFTPD v2.3.5 Thank you in advance.
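
    A short triage pass that often pins down "connection refused" before digging further into vsftpd.conf, assuming the Ubuntu 12.04 layout described above:

        # Is vsftpd running and listening on port 21?
        service vsftpd status
        netstat -plnt | grep ':21 '

        # Anything from vsftpd at start-up?
        grep -i vsftpd /var/log/syslog | tail -20

        # Does a local connection work (rules out network/firewall issues)?
        ftp -n localhost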

  • How do I find my missing songs from last.fm?

    - by duality_
    My disk failed and took all my music with it, lots of songs. But luckily, I scrobbled every song to last.fm. I am looking for a way to scan my disk for my songs, check last.fm, and tell me which songs present on last.fm are missing from my disk. So to recap: I would need to log into my last.fm account, compile a list of all the songs I have scrobbled, and then scan my computer for missing songs. Is there a program or script that does this? I don't mind if it is even a shell script. Edit: I know this is possible because I came close to it a short time ago. I created a PHP script (web page) that got all my songs through the Last.fm API and then went through my files on disk and read their ID3 tags. I got very close: the program showed missing songs, but there were many small issues (ID3 reading was buggy, tags had different data, etc.) that required more programming time than I had.
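
    Not the missing program, but a sketch of the comparison step once both lists exist as plain text with one "artist - title" per line: normalise case and whitespace, sort, and let comm print what is on last.fm but absent on disk. The file names are placeholders, and the snippet assumes bash for the process substitution:

        # lastfm.txt : scrobbled tracks exported from last.fm
        # local.txt  : tracks read from the ID3 tags on disk
        norm() { tr '[:upper:]' '[:lower:]' | sed 's/[[:space:]]\+/ /g' | sort -u; }

        comm -23 <(norm < lastfm.txt) <(norm < local.txt) > missing.txt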

  • Change Windows 7 Explorer's Details Pane limits

    - by Paul
    For some reason, MS decided to completely kill the status bar's functionality in Win7 (and maybe Vista, but I don't know for sure). I have tried all possible options such as Classic Shell and so on. Basically, the one thing I miss most is seeing at a glance the total size of my selected files. I know I can press Alt+Enter or whatever, but that's not the point. The point is that the so-called 'details' pane stops providing details if more than 15 files are selected! WTH? Cannot understand the reason behind such a stupid arbitrary limit, that doesn't seem to be user-configurable at all. Anyway, what I'm looking for is a way to change that limit, either via the registry or otherwise. If changing the limit isn't possible, would it be possible for some programming genius to create a very small optimized light-weight non resource-hungry (you get the idea!) program whose sole purpose will be to automatically click the "Show more details..." link in the details pane after every file selection? For bonus points, the program should wait for a second or two after file selection is complete to do this, so that selecting multiple files in quick succession doesn't hit the HDD repeatedly. Any takers for the challenge? :)

  • How do I identify Blackberry / OWA users in my IIS logs?

    - by Quinten
    We just rolled out a Blackberry Express Server, and would like to make sure that all Blackberry devices that our users own are connecting SOLELY through the BES server. We are running Exchange 2010 SP1. I've read some links that discuss blocking BIS at the firewall level. Before doing that, however, I'd like to individually contact all users with Blackberries and make sure that they have a chance to switch to the BES server. I've sent a company-wide email, but unsurprisingly folks tend to tune these out until they are forced into action. Is there an easy way to identify the users with Blackberries by searching IIS logs, or perhaps using the Exchange Management Shell? Especially some automated way? I've tried searching for the Blackberry identifier, but it does not appear next to any user name, so it's not as helpful as it could be. Edit: to clarify, what I'm talking about is the fact that Blackberries can use OWA to download mail to the phone. We do not allow IMAP or POP access through our firewall so that's not a concern--just folks with Blackberries using Blackberry's hack to allow it to connect to Exchange without a BES server. As far as I know, Blackberries are the only popular phones that use this method to download mail.
