Search Results

Search found 8646 results on 346 pages for 'echo flow'.


  • Robocopy silently missing files

    - by John Hunt
    I'm using Robocopy to sync data from our server's hard disk to an external disk as a backup. It's a pretty simple solution, but pretty much the best/easiest one we could come up with - we use two external disks and rotate them offsite. Anyway, here's the script (with the comments taken out) that I'm using to do it. It works very well - it's quick and almost 100% complete - however it's acting pretty strangely with a few files (note: company names have been changed in the paths to protect the innocent):

        @ECHO OFF
        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        SET prefix="E:\backup_log-"
        SET source_dir="M:\Company Names Data\Working Folder\_ADMIN_BACKUP_FILES\COMPA AANY Business Folder_Backup_040407\COMPANY_sales order register\BACKUP CLIENT FOLDERS & CURRENT JOBS pre 270404\CLIENT SALES ORDER REGISTER"
        SET dest_dir="E:\dest"
        SET log_fname=%prefix%%date:~-4,4%%date:~-10,2%%date:~-7,2%.log
        SET what_to_copy=/COPY:DAT /MIR
        SET options=/R:0 /W:0 /LOG+:%log_fname% /NFL /NDL

        ROBOCOPY %source_dir% %dest_dir% %what_to_copy% %options%

        set DATESTAMP=%DATE:~10,4%/%DATE:~4,2%/%DATE:~7,2% %TIME:~0,2%:%TIME:~3,2%:%TIME:~6,2%
        cscript msg.vbs "Backup completed at %DATESTAMP% - Logs can be found on the E: drive."
        :END

    Normally the source would just be M:\Company name data\, but I altered the script a bit to test the problem. The following files in the source are not copied to the dest:

        Someclient\SONICP~1.DOC
        Someclient\SONICP~2.DOC
        Someclient\SONICP~3.DOC

    However, files in the same directory named:

        TIMESH~1.XLS
        TIMESH~2.XLS

    are copied. I'm able to open the files that aren't copied with no trouble at all, and they certainly weren't open when I ran Robocopy, so it's not a locking issue. Robocopy is running as administrator, so it's not a permissions issue. There's no trace these files were even attempted: no errors are output in the log or in my command prompt. Does anyone have any suggestions as to what this might be? Busted hard disk? Cheers, John.
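
    To rule out a logging blind spot, one option is to diff the two trees outside of Robocopy entirely. Below is a minimal sketch of such a check in Python (hedged: the source and dest paths are placeholders standing in for the ones in the script above); it lists files present in the source but absent from the destination:

        import os
        import sys

        def relative_files(root):
            """Collect every file path under root, relative to root,
            lower-cased to sidestep NTFS case differences."""
            found = set()
            for dirpath, _dirnames, filenames in os.walk(root):
                for name in filenames:
                    full = os.path.join(dirpath, name)
                    found.add(os.path.relpath(full, root).lower())
            return found

        # Placeholder paths; substitute the real source_dir/dest_dir.
        source = r"M:\Company Names Data"
        dest = r"E:\dest"

        missing = relative_files(source) - relative_files(dest)
        for path in sorted(missing):
            print("missing from dest: " + path)

        sys.exit(1 if missing else 0)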


  • Symlinks are inaccessible by their full path on OS X

    - by Computer Guru
    Hi, I have symlinks pointing to applications placed in /usr/local/bin, which is in the path. However, I can't run these applications from other folders. Even more weirdly, I can't access them by the full path to the symlink:

        [mqudsi@iqudsi:Desktop/EasyBCD]$ echo $path (03-26 13:42)
        /opt/local/bin /opt/local/sbin /usr/local/bin /usr/local/sbin/ /usr/local/CrossPack-AVR/bin /usr/bin /bin /usr/sbin /sbin /usr/local/bin /usr/X11/bin

        [mqudsi@iqudsi:local/bin]$ ls -l /usr/local/bin (03-26 13:47)
        total 24280
        -rwxr-xr-x  1 mqudsi  wheel  18464 May 14  2009 ascii-xfr
        -rwxr-xr-x  1 mqudsi  wheel  12567 Mar 25 04:50 brew
        -rwxr-xr-x  1 mqudsi  wheel  17768 Dec 11 12:41 bsdiff
        -rwxr-xr-x  1 mqudsi  wheel  43024 Mar 28  2009 dumpsexp
        -rwxr-xr-x  1 mqudsi  wheel    280 Sep 10  2009 easy_install
        -rwxr-xr-x  1 mqudsi  wheel    288 Sep 10  2009 easy_install-2.6
        -rwxr-xr-x  1 mqudsi  wheel  39696 Apr  5  2009 fuse_wait
        lrwxr-xr-x  1 mqudsi  wheel     29 Mar 25 04:53 git -> ../Cellar/git/1.7.0.3/bin/git

        [mqudsi@iqudsi:local/bin]$ /usr/local/bin/git (03-26 13:47)
        zsh: no such file or directory: /usr/local/bin/git

    Clearly the link is there, but I'm not able to get to it :S
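
    A "no such file or directory" error on a symlink that ls can clearly see usually means the link's target does not resolve. A small, hedged diagnostic sketch in Python (path hard-coded for this case) that shows what the link points at and whether that target actually exists:

        import os

        link = "/usr/local/bin/git"

        # What the link literally contains (may be a relative path,
        # resolved relative to the link's own directory).
        print("link target: " + os.readlink(link))

        # Collapse the link to an absolute, fully-resolved path.
        resolved = os.path.realpath(link)
        print("resolved to: " + resolved)
        print("target exists: %s" % os.path.exists(resolved))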


  • Compiling Java code in a terminal with a JAR on the CLASSPATH

    - by Masi
    How can you compile the code using javac in a terminal, with google-collections on the CLASSPATH? Here is an example of code I am trying to compile using javac in a terminal (it works in Eclipse):

        import com.google.common.collect.BiMap;
        import com.google.common.collect.HashBiMap;

        public class Locate {
            ...
            BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
            ...
        }

    Compiling in a terminal:

        src 288 % javac Locate.java
        Locate.java:14: package com.google.common.collect does not exist
        import com.google.common.collect.BiMap;
        ^
        Locate.java:15: package com.google.common.collect does not exist
        import com.google.common.collect.HashBiMap;
        ^
        Locate.java:153: cannot find symbol
        symbol  : class BiMap
        location: class Locate
        BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
        ^
        Locate.java:153: cannot find symbol
        symbol  : variable HashBiMap
        location: class Locate
        BiMap<MyFile, Integer> rankingToResult = HashBiMap.create();
        ^
        4 errors

    My CLASSPATH:

        src 289 % echo $CLASSPATH
        /u/1/bin/javaLibraries/google-collect-1.0.jar


  • Kickstart CentOS 6 prompting for TCP/IP with network set to DHCP

    - by Andy Shinn
    I am trying to stop my kickstart CentOS install from prompting me for TCP/IP information. After I click through this prompt (keeping IPv4 and IPv6 at their defaults) the installation continues and completes just fine. Below is my kickstart file:

        # Andy's super awesome VM kickstart file
        install
        url --url=http://mirrors.kernel.org/centos/6/os/x86_64
        lang en_US.UTF-8
        keyboard us
        text
        %include /tmp/network.ks
        rootpw --iscrypted $6$RA8DyrNTsVJkGIgY$ohZ62HHiOjNnn1yDMZlIu3lQ63D3plGPcbVZtPKE8Oq6Z.IGUgN.kNLkxs/ZymZuluRDWsW2eey5zLOl2G3mp.
        firewall --service=ssh
        authconfig --enableshadow --passalgo=sha512
        selinux --disabled
        timezone America/Los_Angeles
        bootloader --location=mbr --driveorder=vda --append="crashkernel=auto rhgb quiet"
        # The following is the partition information you requested
        # Note that any partitions you deleted are not expressed
        # here so unless you clear all partitions first, this is
        # not guaranteed to work
        zerombr
        clearpart --all --drives=vda --initlabel
        part /boot --fstype=ext4 --size=500
        part pv.253002 --grow --size=1
        volgroup vg1 --pesize=4096 pv.253002
        logvol / --fstype=ext4 --name=lv_root --vgname=vg1 --grow --size=1024 --maxsize=51200
        logvol swap --name=lv_swap --vgname=vg1 --grow --size=4032 --maxsize=4032
        repo --name="CentOS" --baseurl=http://mirrors.kernel.org/centos/6/os/x86_64 --cost=100
        repo --name="Puppet Labs Products" --baseurl=http://yum.puppetlabs.com/el/6/products/x86_64
        repo --name="Puppet Labs Dependencies" --baseurl=http://yum.puppetlabs.com/el/6/dependencies/x86_64
        repo --name="EyeFi" --baseurl=http://flexo.eye.fi/6/eye-fi-api

        %packages
        @core
        @server-policy
        puppet
        facter
        %end

        %pre --erroronfail
        #!/bin/bash
        for x in `cat /proc/cmdline`; do
          case $x in
            SERVERNAME*)
              eval $x
              echo "network --onboot yes --device eth0 --bootproto dhcp --hostname ${SERVERNAME}.eye.fi" > /tmp/network.ks
              ;;
          esac;
        done
        %end

        %post
        puppet agent --waitforcert 10 --onetime --no-daemon --pluginsync --server puppet.eye.fi
        %end

        reboot

    My kernel arguments are in the following virt-install command that I use to start the install:

        virt-install -n zabbix -r 2048 --vcpus=2 -l http://mirrors.kernel.org/centos/6/os/x86_64 --disk /dev/vg_inf1/zabbix --network bridge=br85 --initrd-inject=/home/ashinn/vm_kickstart --extra-args "ks=file:/vm_kickstart SERVERNAME=zabbix" --autostart

    During the install, I can pull up a console on the second terminal and verify that the contents of /tmp/network.ks are:

        network --onboot=yes --bootproto=dhcp --ipv6=auto --hostname=jenkins2.mydomain.com

    Why might Anaconda be prompting for the TCP/IP settings when they are already set to DHCP?


  • Cannot install mercurial properly - PYTHONPATH error

    - by evident
    Hi, I have a server running Ubuntu 10.04 on which I wanted to install Mercurial via:

        % sudo apt-get install mercurial

    It seems to have installed successfully and doesn't show me any error messages. But when I try it I get:

        % hg
        abort: couldn't find mercurial libraries in [/usr/bin /usr/lib/python2.6 /usr/lib/python2.6/plat-linux2 /usr/lib/python2.6/lib-tk /usr/lib/python2.6/lib-old /usr/lib/python2.6/lib-dynload /usr/lib/python2.6/dist-packages /usr/lib/pymodules/python2.6 /usr/local/lib/python2.6/dist-packages]
        (check your install and PYTHONPATH)

    I've googled for a while now and found some sites with the same problem, but I still have no idea how to fix it, since none of them really says what I need to look for or what I need to add to my PYTHONPATH. By the way, right now my PYTHONPATH seems to be empty:

        % echo $PYTHONPATH
        %

    This is what I get if I look into my /usr/lib/ directory for mercurial:

        % find /usr/lib/py* -name 'mercurial*'
        /usr/lib/pymodules/python2.6/mercurial
        /usr/lib/pymodules/python2.6/mercurial-1.4.3.egg-info
        /usr/lib/pyshared/python2.6/mercurial

    Can anybody please help me with that? What (and how) should I set my PYTHONPATH to? I already tried reinstalling, installing with "easy_install mercurial", and installing with "aptitude reinstall mercurial", but nothing helped. I always get this same error. Would be great if anyone could help... thanks!

    ADDITION: Building from scratch didn't work out well... when I am logged in as root I can use hg, but when I access with my normal user I get:

        % hg
        Traceback (most recent call last):
          File "/usr/local/bin/hg", line 4, in <module>
            import pkg_resources
          File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 2659, in <module>
            parse_requirements(__requires__), Environment()
          File "/usr/lib/python2.6/dist-packages/pkg_resources.py", line 546, in resolve
            raise DistributionNotFound(req)
        pkg_resources.DistributionNotFound: mercurial==1.7.2
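
    Since hg is just a Python script, one hedged way to see what the interpreter can and cannot import is to probe sys.path directly. A minimal diagnostic sketch (no assumptions beyond the module name; /usr/lib/pymodules/python2.6 from the find output above would need to appear in the listing for the packaged Mercurial to be importable):

        from __future__ import print_function
        import sys

        # Show exactly where this interpreter looks for modules.
        for entry in sys.path:
            print(entry)

        try:
            import mercurial
            print("mercurial found at:", mercurial.__file__)
        except ImportError as exc:
            print("import failed:", exc)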


  • Optimal file system type and mount options for an rsnapshot dedicated drive

    - by Nimmy Lebby
    We have an external USB 2 drive that we are using as a backup drive for our configuration. We use rsnapshot for the backups. It uses a few standard commands for managing snapshots:

        rm -rf                                       deletes expired snapshots
        mv                                           moves older snapshots down a slot
        cp -al                                       duplicates the last snapshot to a new slot
        rsync -a --delete --numeric-ids --relative   synchronizes the new snapshot

    As you can see from the log below, the majority of the time is spent on the rm -rf and cp -al steps:

        [25/Dec/2010:14:00:02] rsnapshot hourly: started
        [25/Dec/2010:14:00:02] echo 21012 > /var/run/rsnapshot.pid
        [25/Dec/2010:14:00:02] rm -rf /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.4/ /mnt/extdrive/snapshots/hourly.5/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.3/ /mnt/extdrive/snapshots/hourly.4/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.2/ /mnt/extdrive/snapshots/hourly.3/
        [25/Dec/2010:14:15:48] mv /mnt/extdrive/snapshots/hourly.1/ /mnt/extdrive/snapshots/hourly.2/
        [25/Dec/2010:14:15:48] cp -al /mnt/extdrive/snapshots/hourly.0 /mnt/extdrive/snapshots/hourly.1
        [25/Dec/2010:14:23:32] rsync -a --delete --numeric-ids --relative /etc /mnt/extdrive/snapshots/hourly.0/sm4/
        [25/Dec/2010:14:23:52] touch /mnt/extdrive/snapshots/hourly.0/
        [25/Dec/2010:14:23:52] rm -f /var/run/rsnapshot.pid
        [25/Dec/2010:14:23:52] rsnapshot hourly: completed successfully

    My questions:

    1. I'm currently using ext4 for the filesystem. Maybe this is not the best choice among those available in Red Hat; does anyone have any recommendations that would speed up the process?
    2. The partition's mount options are sync,dirsync 1 2. Is there a way to optimize this, since the drive is used solely for rsnapshot?

    Of course, reasoning would be greatly appreciated.
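
    Both slow steps are pure metadata churn: cp -al builds a parallel directory tree in which every file is a hard link to the previous snapshot, and rm -rf later has to unlink all of those entries one by one. A hedged Python sketch of roughly what the cp -al step amounts to (simplified: no permission or timestamp handling; the paths match the log above):

        import os

        def link_tree(src, dst):
            """Recreate src's directory tree under dst, hard-linking every
            file instead of copying its data (roughly what `cp -al` does)."""
            for dirpath, dirnames, filenames in os.walk(src):
                rel = os.path.relpath(dirpath, src)
                target_dir = os.path.join(dst, rel)
                if not os.path.isdir(target_dir):
                    os.makedirs(target_dir)
                for name in filenames:
                    # One inode, two directory entries: no file data is
                    # copied, but each link is a separate metadata write.
                    os.link(os.path.join(dirpath, name),
                            os.path.join(target_dir, name))

        link_tree("/mnt/extdrive/snapshots/hourly.0",
                  "/mnt/extdrive/snapshots/hourly.1")

    Under the sync,dirsync mount options, each of those link (and later unlink) operations is forced to disk synchronously, which may go a long way toward explaining the 15-minute rm -rf in the log.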


  • Immediately tell which output was sent to stderr

    - by Clinton Blackmore
    When automating a task, it is sensible to test it first manually. It would be helpful, though, if any data going to stderr were immediately recognizable as such and distinguishable from the data going to stdout, while keeping all the output together so the sequence of events is obvious. One last touch that would be nice is if, at program exit, it printed its return code. All of these things would aid in automating. Yes, I can echo the return code when a program finishes, and yes, I can redirect stdout and stderr; what I'd really like is some shell, script, or easy-to-use redirector that shows stdout in black, shows stderr interleaved with it in red, and prints the exit code at the end. Is there such a beast? [If it matters, I'm using Bash 3.2 on Mac OS X.]

    Update: Sorry it has been months since I've looked at this. I've come up with a simple test script:

        #!/usr/bin/env python
        import sys

        print "this is stdout"
        print >> sys.stderr, "this is stderr"
        print "this is stdout again"

    In my testing (probably due to the way things are buffered), rse and hilite display everything from stdout and then everything from stderr. The fifo method gets the order right but appears to colourize everything following the stderr line. ind complained about my stdout and stderr lines, and then put the output from stderr last. Most of these solutions are workable, as it is not atypical for only the last output to go to stderr, but still, it'd be nice to have something that worked slightly better.
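
    For reference, a hedged sketch of such a redirector in Python (Python 2, to match the test script above): it runs a command, interleaves stderr in red using ANSI escape codes, and prints the exit code at the end. It is subject to the same buffering caveats noted above, so ordering is approximate.

        #!/usr/bin/env python
        import subprocess
        import sys
        import threading

        RED = "\033[31m"
        RESET = "\033[0m"

        def pump(stream, color):
            """Copy a child stream to our stdout, optionally colorized."""
            for line in iter(stream.readline, ""):
                if color:
                    sys.stdout.write(color + line + RESET)
                else:
                    sys.stdout.write(line)
                sys.stdout.flush()

        # Usage: colorize.py some_command arg1 arg2 ...
        proc = subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE,
                                stderr=subprocess.PIPE)
        threads = [threading.Thread(target=pump, args=(proc.stdout, None)),
                   threading.Thread(target=pump, args=(proc.stderr, RED))]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

        code = proc.wait()
        print "exit code: %d" % code
        sys.exit(code)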


  • Script launching 3 copies of rsync

    - by organicveggie
    I have a simple script that uses rsync to copy a Postgres database to a backup location for use with Point-In-Time Recovery. The script is run every 2 hours via a cron job for the postgres user. For some strange reason, I can see three copies of rsync in the process list. Any ideas why this might be the case? Here's the cron entry:

        # crontab -u postgres -l
        PATH=/bin:/usr/bin:/usr/local/bin
        0 */2 * * * /var/lib/pgsql/9.0/pitr_backup.sh

    And here's the ps list, which shows two copies of rsync running and one sleeping:

        # ps ax | grep rsync
        9102 ?  R  2:06 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9103 ?  S  0:00 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        9104 ?  R  2:51 rsync -avW /var/lib/pgsql/9.0/data/ /var/lib/pgsql/9.0/backups/pitr_archives/20110629100001/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log

    And here's the uber-simple script that seems to be the cause of the problem:

        #!/bin/sh
        LOG="/var/log/pgsql-pitr-backup.log"
        base_backup_dir="/var/lib/pgsql/9.0/backups"
        wal_archive_dir="$base_backup_dir/wal_archives"
        pitr_archive_dir="$base_backup_dir/pitr_archives"
        timestamp=`date +%Y%m%d%H%M%S`
        backup_dir="$pitr_archive_dir/$timestamp"

        mkdir -p $backup_dir
        echo `date` >> $LOG
        /usr/bin/psql -U postgres -c "SELECT pg_start_backup('$backup_dir');"
        rsync -avW /var/lib/pgsql/9.0/data/ $backup_dir/ --exclude pg_xlog --exclude recovery.conf --exclude recovery.done --exclude pg_log
        /usr/bin/psql -U postgres -c "SELECT pg_stop_backup();"


  • Deployment and Ownership issues

    - by kylemac
    As an extreme newbie, I am having difficulty managing ownership and permissions on my first box. What I can't figure out is how to deploy using one user, who we will call deploy, while operating my PHP application as the www-data user. Currently, I know my server runs as www-data through this function:

        <?php echo(exec("whoami")); ?>

    but I am having to chown between deploy and www-data every time I deploy. There has got to be an easier way to deploy with one user and still run as www-data.

    EDIT: Here is the output from ls -l on the folder in question. You will see user deploy and group www-pub; the group is from an attempt to add the two different users to a new group and chown one of them in the hope that they both would have the permissions (newb alert):

        drwxrwxr-x 4 deploy www-pub 4096 Mar 7 01:41 example.com

    I am using Capistrano for deployment under the user deploy, then once it's done I chown to www-data; otherwise I can't use PHP to manipulate files. I am also unsure how to even change which user Apache runs as.
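
    The www-pub group attempt above is close to a standard pattern: give the directory group write access and set the setgid bit so files created inside inherit the group. A hedged sketch of that permission setup in Python (the equivalent shell would be chmod 2775; the path is this question's example and assumes both deploy and the web server user are members of www-pub):

        import os
        import stat

        path = "/var/www/example.com"   # hypothetical deploy root

        # rwxrwxr-x plus the setgid bit: group members can write, and
        # new files created inside inherit the www-pub group.
        os.chmod(path, stat.S_ISGID |
                       stat.S_IRWXU | stat.S_IRWXG |
                       stat.S_IROTH | stat.S_IXOTH)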


  • How do I use a Minitel terminal as a Linux dumb terminal

    - by Pawz
    I recently purchased a US version of the Alcatel Minitel terminal. I think it's a 1B version. Pictures of it here: I tried connecting a null modem to the 25-pin port on the back and plugging it into a Linux box running agetty, but I couldn't get it to show any signs of being connected. I used Google Translate to translate this document into English: http://mirabellug.org/wikini/upload/Documentations_minitel.pdf

    As far as I can tell, you take it out of videotex mode by typing Fcnt-T A, then turn off local echo with Fcnt-T E, then set it to 4800 baud with Fcnt-P 4. I presume Fcnt refers to the "CTRL" key on my terminal. But I think I'm doing something wrong, because it doesn't look like it's recognising the keystrokes: "Fcnt-T A" just prints the letter A to the screen, which is not what you'd expect a function-key combo to do.

    Has anyone used these Minitel terminals as a Linux terminal? If so, please can you share how to configure the Minitel to run as a terminal? Is the 25-pin plug even the correct port to use? I read something online that indicated you're supposed to use the 5-pin DIN plug instead; is that right? If so, what's the 25-pin plug for? And if I am supposed to use the DIN plug, does anyone know the pinouts so I can make a cable?


  • MySQL master-master not replicating

    - by frankil
    I'm setting up master-master MySQL replication on two servers (db1 and db2). I started by setting up db2 as a slave to db1, and that works fine. But when I set up db1 as a slave to db2, it isn't replicating. On the face of it everything looks fine, but the data isn't replicating, and there are no errors in either error log. The slave status is updating the binlog position. I have used mysqlbinlog to examine both the binlog on db2 and the relay log on db1, and all of the queries are going in there, but not being executed on db1. "show slave status" on both servers shows that both the slave IO and SQL threads are "Yes" and that the relay log position is updated by the SQL thread. Also on both servers:

        >echo "show processlist" | mysql | grep "system user"
        166819 system user NULL Connect 3655 Waiting for master to send event NULL
        166820 system user NULL Connect 3507 Has read all relay log; waiting for the slave I/O thread to update it NULL

    Relevant config for db1:

        server-id = 1
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 1
        master-host = db2
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        skip-slave-start
        sync_binlog = 1
        binlog-ignore-db=mysql

    Config for db2:

        server-id = 2
        log-slave-updates
        replicate-same-server-id = 0
        auto_increment_increment = 4
        auto_increment_offset = 2
        master-host = db1
        master-port = 3306
        master-user = slaveuser
        master-password = ***
        sync_binlog = 1
        relay-log=mysql-relay-bin
        binlog-ignore-db=mysql

    What else can I look for to make sure db1 executes the queries from db2?


  • Simulated NAT Traversal on Virtual Box

    - by Sumit Arora
    I have installed VirtualBox (with two NAT-type virtual adapters) - host Ubuntu 10.10, guest openSUSE 11.4.

    Objective: trying to simulate all four types of NAT as defined here: https://wiki.asterisk.org/wiki/display/TOP/NAT+Traversal+Testing

    Simulating the various kinds of NATs can be done using Linux iptables. In these examples, eth0 is the private network and eth1 is the public network.

    Full cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source
        iptables -t nat -A PREROUTING -i eth0 -j DNAT --to-destination

    Restricted cone:

        iptables -t nat -A POSTROUTING -o eth1 -p tcp -j SNAT --to-source
        iptables -t nat -A POSTROUTING -o eth1 -p udp -j SNAT --to-source
        iptables -t nat -A PREROUTING -i eth1 -p tcp -j DNAT --to-destination
        iptables -t nat -A PREROUTING -i eth1 -p udp -j DNAT --to-destination
        iptables -A INPUT -i eth1 -p tcp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p udp -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -A INPUT -i eth1 -p tcp -m state --state NEW -j DROP
        iptables -A INPUT -i eth1 -p udp -m state --state NEW -j DROP

    Port-restricted cone:

        iptables -t nat -A POSTROUTING -o eth0 -j SNAT --to-source

    Symmetric:

        echo "1" > /proc/sys/net/ipv4/ip_forward
        iptables --flush
        iptables -t nat -A POSTROUTING -o eth1 -j MASQUERADE --random
        iptables -A FORWARD -i eth1 -o eth0 -m state --state RELATED,ESTABLISHED -j ACCEPT
        iptables -A FORWARD -i eth0 -o eth1 -j ACCEPT

    What I did: an openSUSE guest with two virtual adapters, eth0 and eth1 - eth1 with address 10.0.3.15 (plus eth1:1 as 10.0.3.16) and eth0 with address 10.0.2.15 - now running the stund client/server (http://sourceforge.net/projects/stun/):

    Server:

        eKimchi@linux-6j9k:~/sw/stun/stund ./server -v -h 10.0.3.15 -a 10.0.3.16

    Client:

        eKimchi@linux-6j9k:~/sw/stun/stund ./client -v 10.0.3.15 -i 10.0.2.15

    In all four cases it gives the same results:

        test I = 1
        test II = 1
        test III = 1
        test I(2) = 1
        is nat = 0
        mapped IP same = 1
        hairpin = 1
        preserver port = 1
        Primary: Open
        Return value is 0x000001

    Q-1: Please let me know if anyone has ever done this. It should behave like a NAT as per the description, but nowhere is it working as a NAT.
    Q-2: How is NAT implemented in home routers (usually port-restricted)? Are those also just pre-configured iptables rules on a tuned Linux?


  • Postfix won't pipe to PHP file through aliases file

    - by jfreak53
    I'm trying to pipe from Postfix to a command. According to the Postfix logs it worked, but when I check the command, it didn't. This is a fresh Postfix install. This is my alias file:

        # See man 5 aliases for format
        postmaster: root
        support: "| /usr/bin/php -q /var/www/pipe/pipe.php"

    I run sendmail [email protected], then type some text, then on a separate line type . and it goes. I check the Postfix log /var/log/mail.log and this is what it states:

        Nov 2 15:32:33 server3 postfix/local[13284]: 42C429E0B5: to=<[email protected]>, relay=local, delay=156, delays=156/0.01/0/0.05, dsn=2.0.0, status=sent (delivered to command: /usr/bin/php -q /var/www/pipe/pipe.php)

    So according to that it worked, but it doesn't. If I run:

        echo 'text' | /usr/bin/php -q /var/www/pipe/pipe.php

    it does work just fine. Any ideas what I did wrong? I know piping works; I originally checked it by running the alias above WITHOUT the quotes, so just:

        support: | /usr/bin/php -q /var/www/pipe/pipe.php

    What it did there was append my email, header and all, to the file pipe.php. So I know Postfix was piping it, but when I put in the quotes it says it's going but it's not, according to my script.
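
    When debugging alias pipes like this, it can help to swap in a trivially simple target that just records what it receives. A hedged stand-in in Python (hypothetical script and log paths; the real pipe.php stays untouched): if the log file grows after a test mail, the alias fired and stdin arrived.

        #!/usr/bin/env python
        import sys
        import time

        # Append whatever arrives on stdin, with a timestamp, to a log
        # the delivery user can write to (path is an assumption).
        message = sys.stdin.read()
        with open("/tmp/pipe_debug.log", "a") as log:
            log.write("--- %s ---\n" % time.ctime())
            log.write(message)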


  • PHP crashing during oAuth scripts

    - by FunkyChicken
    I just installed Nginx 1.2.4 and PHP 5.4.0 (from svn, php-fpm) on CentOS 5.8 64-bit. The problem I have is that PHP crashes the moment I run any social oAuth scripts. I have tried to log into Facebook, Twitter and Google with various scripts that I know work on my other servers. When I load the scripts I get a 502 error from Nginx, and I find these errors in the logs.

    In the php-fpm log:

        WARNING: [pool www] child 23821 exited on signal 11 (SIGSEGV) after 1132.862984 seconds from start

    In the nginx log:

        ERROR: recv() failed (104: Connection reset by peer) while reading response header from upstream

    From what I can see, it goes wrong when PHP tries to make a request to any of the oAuth servers. For example, https://github.com/mahmudahsan/PHP-SDK-3.0---Graph-API-base-Facebook-Connect-Tutorial-Source is one of the scripts that works perfectly on my other machines but causes PHP to crash here. I found http://stackoverflow.com/questions/3616191/nginx-php-fpm-502-bad-gateway which seems to be a similar problem, but I cannot find a way to solve it.

    +++ UPDATE +++

    Now I have been doing some debugging in one of the scripts that is playing up. If you go to line 808 (http://pastebin.com/gSnzRtXb), it runs the curl_exec() command. When that is run, it crashes. If I echo 'test'; exit; just above that line, it echoes correctly; if I do it below that line, PHP crashes. That means line 808 is what causes the crash. So I made a very simple script to do some testing (http://pastebin.com/Rshnyhcm), which also uses curl_exec, but that runs just fine. So I started to dig deeper into that query from the Facebook script to see what values the $opts array contains as of line 806. Output of that array is: http://pastebin.com/Cq9ffd3R

    What the problem is, I still have no clue :(


  • PHP Output buffer flush issue on Apache/Linux

    - by Iiro Vaahtojärvi
    Hi, I'm running into issues with the PHP output buffer flushing on my Linux web server. The output buffer is maintained correctly and all the right data is pushed to it in my code, but the usual flushing mechanisms won't flush it to the browser. I have tried everything posted here: http://php.net/manual/en/function.flush.php but no success so far. I got a small script from php.net to test it:

        <?php
        ob_start();
        for ($i = 0; $i < 70; $i++) {
            echo 'printing...<br />';
            ob_get_flush();
            flush();
            usleep(300000);
        }
        ?>

    This should print "printing..." to the browser 70 times, one line roughly every 0.3 seconds (usleep takes microseconds). It works fine on my other testing environment, which is Windows-based (still Apache, from the XAMPP package), but on my Linux server it doesn't: it waits for the script to finish before giving anything to the browser, basically ignoring the whole flush command. If anyone has experienced this before or knows of anything that could help (be it server configuration or an adjustment to the code), it would be greatly appreciated!


  • APC uptime 0 because of FastCGI

    - by demlasjr
    I have a VPS using Parallels/Plesk (11.0.9 Update #22, last updated Oct 31, 2012 03:33 AM) on CentOS 6.3 (Final) x86_64. I have Apache (CGI/FastCGI) installed, with nginx as a reverse proxy. Everything is working just fine. I installed APC for caching, but the issue is that the uptime is always 0; it restarts every 15 seconds or so. I checked everywhere and can't find a solution to fix it. The server has graceful restart enabled, but only every 6 hours, which shouldn't influence the APC uptime. Searching Google, I found that this could be related to Apache running with FCGId instead of FastCGI. Plesk/Apache is using this config file: /usr/local/psa/admin/conf/templates/default/service/php_over_fastcgi.php, whose content is:

        <IfModule mod_fcgid.c>
        <Files ~ (\.php)>
        SetHandler fcgid-script
        FCGIWrapper <?php echo $VAR->server->webserver->apache->phpCgiBin ?> .p$
        Options +ExecCGI
        allow from all
        </Files>

    Is the issue here or elsewhere? How can I fix this to work with FastCGI and make APC work properly? I should add that even though the uptime is below one minute, APC is doing a pretty good job of caching (92% of lookups are hits).


  • Using public interfaces on a server connected through a GRE tunnel

    - by Evan
    I'm pretty new to networking, so please forgive any terminology mistakes. I have 2 servers connected with a GRE tunnel:

        Server1 (10.0.0.1) ---- Server2 (10.0.0.2)

    I want to be able to bind to Server2's public IPs from Server1. To do this, I set up virtual interfaces with Server2's public IPs on Server1 and then used routing rules on Server1 to route the packets through the GRE tunnel. On Server1:

        ip rule add from [Server2's first public IP] table gre
        ip rule add from [Server2's second public IP] table gre
        ip route add default via 10.0.0.2 dev gre1 table gre

    This works great, and I can see the packets arriving via GRE on Server2 and exiting the tunnel on Server2's gre1 device, as shown. From Server1:

        ping -I [Server2's public ip] google.com

    tcpdump on Server2's GRE tunnel device:

        12:07:17.029160 IP (tos 0x0, ttl 64, id 0, offset 0, flags [DF], proto ICMP (1), length 84)
        [Server2's public ip] > 74.125.225.38: ICMP echo request, id 6378, seq 50, length 64

    This is exactly the packet I want. However, I'm not seeing it go out at all on eth0:0 (where Server2's public IP is bound). I've also tried to use routing rules to get packets coming from Server2's public IP (which would be coming out of dev gre1) to go through dev eth0 to the public default gateway, and that doesn't work either. I'm at a loss; thank you to anyone who can help.
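
    For testing beyond ping -I, a hedged sketch of forcing an outbound TCP connection onto one of the virtual-interface addresses in Python (the source IP is a placeholder for Server2's public IP configured on Server1), so tcpdump can be watched on each hop of the path:

        import socket

        # Placeholder for Server2's public IP, as bound on Server1's
        # virtual interface.
        source_ip = "203.0.113.10"

        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.bind((source_ip, 0))          # force the source address
        s.settimeout(5)
        s.connect(("google.com", 80))   # should leave via the gre table rules
        print("connected from %s:%d" % s.getsockname())
        s.close()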


  • Batch file to create many files with special characters

    - by MollyO
    Essential info: I have a file "DB_OUTPUT.TXT" with 304 lines that I need to turn into 304 files (one per line). Each line contains many special characters and may be up to tens of thousands of characters long. For these reasons, I'm having difficulty using a cmd.exe batch file (which limits the amount of input) and the echo command (which would try to execute each special character, short of me having to escape them all). I also have a file "DB_OUTPUT_FILENAMES.TXT" containing a distinct filename for each line-soon-to-be-file from "DB_OUTPUT.TXT". So line 1 of DB_OUTPUT.TXT needs to become the body of a new file whose name is line 1 of DB_OUTPUT_FILENAMES.TXT.

    Extra info: As you may have guessed, DB_OUTPUT.TXT is output from a database; it contains 304 records with 6 or 7 columns at a fixed width, the last column being a SQL query. Each of these lines (db records) will be used as a script to create new database objects, which is why the special characters need to be preserved.

    Question: Is there a way to do this in a batch-like fashion? I'd be happy with either a Windows solution or a Linux one.
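
    Since cmd.exe quoting is exactly the obstacle here, a hedged sketch in Python (runs the same on Windows or Linux; the two filenames are the ones given above): it pairs the files line by line and writes each record out verbatim, with no shell escaping involved.

        # Pair each line of DB_OUTPUT.TXT with the matching line of
        # DB_OUTPUT_FILENAMES.TXT and write it out untouched.
        with open("DB_OUTPUT.TXT") as bodies, \
             open("DB_OUTPUT_FILENAMES.TXT") as names:
            for name, body in zip(names, bodies):
                # Strip the trailing newline from the filename only;
                # the body is written exactly as it appears.
                with open(name.rstrip("\r\n"), "w") as out:
                    out.write(body)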


  • Installing Zenoss on Ubuntu raises "No valid ZENHOME" error

    - by bxshi
    I've added a user named zenoss and set export ZENHOME=/usr/local/zenoss in ~/.bashrc under /home/zenoss; echo $ZENHOME shows /usr/local/zenoss. When installing Zenoss, I switched to the zenoss user and ran install.sh under zenoss-4.2.0/inst; when it tries to run its tests, this error occurs:

        -------------------------------------------------------
         T E S T S
        -------------------------------------------------------
        Running org.zenoss.utils.ZenPacksTest
        Tests run: 3, Failures: 0, Errors: 3, Skipped: 0, Time elapsed: 0.045 sec <<< FAILURE!
        Running org.zenoss.utils.ZenossTest
        Tests run: 3, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.71 sec

        Results :

        Tests in error:
          testGetZenPack(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetPackPath(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.
          testGetAllPacks(org.zenoss.utils.ZenPacksTest): No valid ZENHOME could be found.

        Tests run: 6, Failures: 0, Errors: 3, Skipped: 0

        [INFO] ------------------------------------------------------------------------
        [INFO] Reactor Summary:
        [INFO]
        [INFO] Zenoss Core ....................................... SUCCESS [27.643s]
        [INFO] Zenoss Core Utilities ............................. FAILURE [12.742s]
        [INFO] Zenoss Jython Distribution ........................ SKIPPED
        [INFO] ------------------------------------------------------------------------
        [INFO] BUILD FAILURE
        [INFO] ------------------------------------------------------------------------
        [INFO] Total time: 40.586s
        [INFO] Finished at: Wed Sep 26 15:39:24 CST 2012
        [INFO] Final Memory: 16M/60M
        [INFO] ------------------------------------------------------------------------
        [ERROR] Failed to execute goal org.apache.maven.plugins:maven-surefire-plugin:2.8:test (default-test) on project utils: There are test failures.
        [ERROR]
        [ERROR] Please refer to /home/zenoss/zenoss-4.2.0/inst/build/java/java/zenoss-utils/target/surefire-reports for the individual test results.


  • php_ibm_db2.dll on IIS 7.5 using PHP 5.3 error message

    - by grmbl
    I'm trying to use the ibm_db2 extension to access an iSeries DB2 database. This is the test code (taken from here):

        <?php
        $database = 'ALI452BFAL'; // library
        $user = 'STN452';
        $password = '**********';
        $hostname = 'myserverip';
        $port = 50000;

        $conn_string = "DRIVER={IBM DB2 ODBC DRIVER};DATABASE=$database;" .
            "HOSTNAME=$hostname;PORT=$port;PROTOCOL=TCPIP;UID=$user;PWD=$password;";

        $conn = db2_connect($conn_string, '', '');
        if ($conn) {
            print "ok";
            db2_close($conn);
        } else {
            echo db2_conn_error() . '<br>' . db2_conn_errormsg();
        }
        ?>

    I have installed a very basic package containing the DB2 driver and added it as an extension (IBM Data Server Driver for ODBC, CLI, and .NET.msi). This is my result:

        08001
        [IBM][CLI Driver] SQL30081N A communication error has been detected. Communication protocol being used: "TCP/IP". Communication API being used: "SOCKETS". Location where the error was detected: "10.10.0.120". Communication function detecting the error: "connect". Protocol specific error code(s): "10061", "", "". SQLSTATE=08001 SQLCODE=-30081

    Has anybody tried this before?


  • Force Juniper-network client to use split routing

    - by craibuc
    I'm using the Juniper client for OS X ('Network Connect') to access a client's VPN. It appears that the client is configured not to use split routing, and the client's VPN host is not willing to enable split routing. Is there a way for me to override this configuration, or do something on my workstation to get the non-client network traffic to bypass the VPN? This wouldn't be a big deal, but none of my streaming radio stations (e.g. XM) work while connected to their VPN. Apologies for any inaccuracies in the terminology.

    ** edit **

    The Juniper client changes my system's resolv.conf file from:

        nameserver 192.168.0.1

    to:

        search XXX.com [redacted]
        nameserver 10.30.16.140
        nameserver 10.30.8.140

    I've attempted to restore my preferred DNS entry to the file:

        $ sudo echo "nameserver 192.168.0.1" >> /etc/resolv.conf

    but this results in the following error:

        -bash: /etc/resolv.conf: Permission denied

    How does the super-user account not have access to this file? Is there a way to prevent the Juniper client from making changes to this file?


  • Testing for disk write

    - by Montecristo
    I'm writing an application that stores lots of images (size < 5 MB) on an ext3 filesystem; this is what I have for now. After some searching here on Server Fault, I have decided on a structure of directories like this:

        000/000/000000001.jpg
        ...
        236/519/236519107.jpg

    This structure will allow me to save up to 1,000,000,000 images, as I'll store a maximum of 1,000 images in each leaf.

    A question about creating this structure: is it better to create it all in one go (which takes approx. 50 minutes on my PC), or should I create directories as they are needed? From a developer's point of view I think the first option is better (no extra waiting time for the user), but from a sysadmin's point of view, is this OK?

    I've thought I could test as if the filesystem were already under the running application: I'll make a script that saves images as fast as it can, monitoring the following: how much time does it take for an image to be saved when there is no or little space used? how does this change when the space starts to be used up? how much time does it take for an image to be read from a random leaf? does this change a lot when there are lots of files?

    Does launching this command:

        sync; echo 3 | sudo tee /proc/sys/vm/drop_caches

    make any sense at all? Is that the only thing I have to do to have a clean start if I want to start over again with my tests? Do you have any suggestions or corrections?
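
    A hedged sketch of such a measurement script in Python (assumptions: the three-level layout above, a scratch directory, and a fixed-size dummy payload standing in for real images); it writes images in batches and times both writes and random reads:

        import os
        import random
        import time

        ROOT = "images"                     # scratch dir on the ext3 volume
        IMAGE = b"x" * (2 * 1024 * 1024)    # 2 MB dummy payload

        def leaf_path(n):
            """Map image number n to the 000/000/000000001.jpg layout."""
            return os.path.join(ROOT, "%03d" % (n // 1000000),
                                "%03d" % (n // 1000 % 1000),
                                "%09d.jpg" % n)

        def write_batch(start, count):
            t0 = time.time()
            for n in range(start, start + count):
                path = leaf_path(n)
                d = os.path.dirname(path)
                if not os.path.isdir(d):
                    os.makedirs(d)
                with open(path, "wb") as f:
                    f.write(IMAGE)
            return (time.time() - t0) / count

        def read_random(limit, samples):
            t0 = time.time()
            for _ in range(samples):
                with open(leaf_path(random.randrange(limit)), "rb") as f:
                    f.read()
            return (time.time() - t0) / samples

        # For cold-cache read numbers, run the sync/drop_caches command
        # from the question between batches.
        total = 0
        for batch in range(10):
            avg_write = write_batch(total, 1000)
            total += 1000
            avg_read = read_random(total, 100)
            print("after %d images: %.4fs/write, %.4fs/read"
                  % (total, avg_write, avg_read))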


  • How can I both pipe and display output in Windows' command line?

    - by Bob
    I have a process I need to run within a batch file. This process produces some output. I need to both display this output to the screen and send (pipe) it to another program. The bash method uses tee:

        echo 'ee' | tee /dev/tty | foo

    Is there an equivalent for Windows? I am happy to use PowerShell if necessary. There are tee ports for Windows, but there does not appear to be an equivalent for /dev/tty, which complicates matters.

    The specific use case here: I have a program (launch4j) that I need to run, displaying output to the user. At the same time, I need to be able to detect success or failure in the script. Unfortunately, this program does not set an exit code, and I cannot force it to do so. My current workaround involves piping to find to search the output (launch4j config.xml | find "Successfully created") - however, that swallows the output I need to display. Therefore, I need some way to both display to the screen and send the output to a command - and this command should be able to set ERRORLEVEL (it cannot run asynchronously).
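
    For completeness, a hedged sketch of a small wrapper in Python that a batch file could call in place of the find pipeline: it runs the command, echoes every line to the console as it arrives, and exits 0 or 1 depending on whether the success string appeared, so ERRORLEVEL is set for the calling script.

        import subprocess
        import sys

        MARKER = "Successfully created"   # the string the question greps for

        # e.g.: python tee_check.py launch4j config.xml
        proc = subprocess.Popen(sys.argv[1:], stdout=subprocess.PIPE,
                                stderr=subprocess.STDOUT)
        seen = False
        for line in iter(proc.stdout.readline, b""):
            text = line.decode("utf-8", "replace")
            sys.stdout.write(text)        # display to the user...
            if MARKER in text:
                seen = True               # ...while also checking for success
        proc.wait()
        sys.exit(0 if seen else 1)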


  • NATting traffic from a tunnel to the internet

    - by mezgani
    I'm trying to set up a GRE tunnel between a Linux box and a router (LAN), and I'm having a few problems which seem to depend on my iptables configuration. Watching with tcpdump on the Linux box, I can see packets coming in with GREv0 flags; all I need right now is to forward this data to the internet. Here is my setup:

        iptables -F
        iptables -X
        iptables -P INPUT ACCEPT
        iptables -P FORWARD ACCEPT
        iptables -P OUTPUT ACCEPT
        iptables -t nat -F
        iptables -t nat -X
        iptables -t nat -P PREROUTING ACCEPT
        iptables -t nat -P POSTROUTING ACCEPT
        iptables -t nat -P OUTPUT ACCEPT
        iptables -t mangle -F
        iptables -t mangle -X
        iptables -t mangle -P PREROUTING ACCEPT
        iptables -t mangle -P OUTPUT ACCEPT
        iptables -A INPUT -p 47 -j ACCEPT
        iptables -A FORWARD -i ppp0 -o cloud -j ACCEPT
        iptables -A FORWARD -i cloud -o ppp0 -j ACCEPT
        iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
        iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE
        echo "1" > /proc/sys/net/ipv4/ip_forward

    The tunnel interface:

        cloud Link encap:UNSPEC HWaddr C4-CE-7A-2E-F2-BF-DD-C0-00-00-00-00-00-00-00-00
              inet addr:10.3.3.3 P-t-P:10.3.3.3 Mask:255.255.255.255
              UP POINTOPOINT RUNNING NOARP MTU:1476 Metric:1
              RX packets:0 errors:0 dropped:0 overruns:0 frame:0
              TX packets:124 errors:0 dropped:0 overruns:0 carrier:0
              collisions:0 txqueuelen:0
              RX bytes:0 (0.0 B) TX bytes:10416 (10.1 KiB)

    Kernel IP routing table:

        Destination    Gateway        Genmask         Flags MSS Window irtt Iface
        196.206.120.1  0.0.0.0        255.255.255.255 UH    0   0      0    ppp0
        192.168.0.0    0.0.0.0        255.255.255.0   U     0   0      0    eth0
        10.3.3.0       0.0.0.0        255.255.255.0   U     0   0      0    cloud
        0.0.0.0        196.206.120.1  0.0.0.0         UG    0   0      0    ppp0

        root@aldebaran:~# ip route
        196.206.120.1 dev ppp0 proto kernel scope link src 196.206.122.46
        192.168.0.0/24 dev eth0 proto kernel scope link src 192.168.0.18
        10.3.3.0/24 dev cloud scope link
        default via 196.206.120.1 dev ppp0


  • Unable to specify parameters to cvlc in a script

    - by VxJasonxV
    I'm creating a script that issues a few curl commands in order to access a time-protected mms stream link, then sets up a relay using cvlc (VLC's command-line interface) for my own use in an unencumbered player. The curl aspect of this is working: I can run curl and a browser side by side and get the same access URL. (It's time-locked, meaning the stream will work forever, but you have to connect quickly or the URL will time out.) The very end of the script prints the command I will run, which is then followed by "exec $CMD". When I echo $CMD I get:

        cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://[...]

    Manually copy/pasting this command in, verbatim, works perfectly fine, but as part of a script, the cvlc execution output says:

        [0x9743d0] main interface error: no suitable interface module
        [0x962120] main libvlc error: interface "globalhotkeys,none" initialization failed
        [0x9743d0] dummy interface: using the dummy interface module...
        [0xb16e30] stream_out_standard stream out error: no mux specified or found by extension
        [0xb16ad0] main stream output error: stream chain failed for `standard{mux="",access="",dst="'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'"}'
        [0xb11cd0] main input error: cannot start stream output instance, aborting
        [0xb11f70] signals interface error: Caught Interrupt signal, exiting...

    Why is --sout behaving one way in a script (non-interactive shell?) vs. another way in the foreground (interactive shell)?
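
    Note the dst="'#standard{...}'" in the error: the single quotes stored inside $CMD reach cvlc literally instead of being re-parsed by the shell, a known hazard of keeping a whole command in a variable. A hedged illustration using Python's shlex of how the shell tokenizes the command when typed at a prompt versus when expanded from an unquoted variable (the mms URL is a placeholder):

        import shlex

        typed = "cvlc --sout '#standard{access=http,mux=asf,dst=0.0.0.0:58194}' mms://example"

        # Typed at a prompt, the shell strips the single quotes and cvlc
        # receives one clean --sout argument:
        print(shlex.split(typed))
        # ['cvlc', '--sout', '#standard{access=http,mux=asf,dst=0.0.0.0:58194}', 'mms://example']

        # Expanded from an unquoted variable, the shell only splits on
        # whitespace; the quotes stay inside the argument, matching the
        # dst="'#standard{...}'" seen in the error above:
        print(typed.split())
        # ['cvlc', '--sout', "'#standard{access=http,mux=asf,dst=0.0.0.0:58194}'", 'mms://example']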

