Search Results

Search found 89523 results on 3581 pages for 'sh code'.


  • Is this a good starting point for iptables in Linux?

    - by sbrattla
    Hi, I'm new to iptables, and i've been trying to put together a firewall which purpose is to protect a web server. The below rules are the ones i've put together so far, and i would like to hear if the rules makes sense - and wether i've left out anything essential? In addition to port 80, i also need to have port 3306 (mysql) and 22 (ssh) open for external connections. Any feedback is highly appreciated! #!/bin/sh # Clear all existing rules. iptables -F # ACCEPT connections for loopback network connection, 127.0.0.1. iptables -A INPUT -i lo -j ACCEPT # ALLOW established traffic iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # DROP packets that are NEW but does not have the SYN but set. iptables -A INPUT -p tcp ! --syn -m state --state NEW -j DROP # DROP fragmented packets, as there is no way to tell the source and destination ports of such a packet. iptables -A INPUT -f -j DROP # DROP packets with all tcp flags set (XMAS packets). iptables -A INPUT -p tcp --tcp-flags ALL ALL -j DROP # DROP packets with no tcp flags set (NULL packets). iptables -A INPUT -p tcp --tcp-flags ALL NONE -j DROP # ALLOW ssh traffic (and prevent against DoS attacks) iptables -A INPUT -p tcp --dport ssh -m limit --limit 1/s -j ACCEPT # ALLOW http traffic (and prevent against DoS attacks) iptables -A INPUT -p tcp --dport http -m limit --limit 5/s -j ACCEPT # ALLOW mysql traffic (and prevent against DoS attacks) iptables -A INPUT -p tcp --dport mysql -m limit --limit 25/s -j ACCEPT # DROP any other traffic. iptables -A INPUT -j DROP


  • Why is only one wget command working in my crontab?

    - by afEkenholm
    I wish to fetch content from a PHP script on my server two times a day, altering a query variable lang to set what language we want, and save this content in two language specific files. This is my crontab: */15 * * * * ~root/apache.sh > /var/log/checkapache.log 10 0 * * * wget -O /path/to/file-sv.sql "http://mydomain.com/path/?lang=sv" 11 0 * * * wget -O /path/to/file-en.sql "http://mydomain.com/path/?lang=en" The problem is that only the first wget command line is being executed (or to be precise: the only file that is being written is /path/to/file-sv.sql). If I switch the second and the third row, /path/to/file-en.sql gets written instead. The first line always runs as expected, no matter where it is. I then tried using lynx -dump "http://mydomain.com/path/?lang=xx" > /path/to/file-xx.sql to no avail; still only the first lynx line executed successfully. Even mixing wget and lynx did not change this! Getting kinda desperate! Am I missing something? There are thousands of articles on crontab (combined with) wget or lynx, but all seems to cover basic setups and syntax. Does anyone got a clue of what I am doing wrong? Thanks, Alexander


  • Is there any functional-like unix shell?

    - by Caruccio
    I'm (really) newbie to functional programming (in fact only had contact with it using python) but seems to be a good approach for some list-intensive tasks in a shell environment. I'd love to do something like this: $ [ git clone $host/$repo for repo in repo1 repo2 repo3 ] Is there any Unix shell with these kind of feature? Or maybe some feature to allow easy shell access (commands, env/vars, readline, etc...) from within python (the idea is to use python's interactive interpreter as a replacement to bash). EDIT: Maybe a comparative example would clarify. Let's say I have a list composed of dir/file: $ FILES=( build/project.rpm build/project.src.rpm ) And I want to do a really simple task: copy all files to dist/ AND install it in the system (it's part of a build process): Using bash: $ cp ${files[*]} dist/ $ cd dist && rpm -Uvh $(for f in ${files[*]}; do basename $f; done)) Using a "pythonic shell" approach (caution: this is imaginary code): $ cp [ os.path.join('dist', os.path.basename(file)) for file in FILES ] 'dist' Can you see the difference ? THAT is what i'm talking about. How can not exits a shell with these kind of stuff build-in yet? It's a real pain to handle lists in shell, even its being a so common task: list of files, list of PIDs, list of everything. And a really, really, important point: using syntax/tools/features everybody already knows: sh and python. IPython seams to be on a good direction, but it's bloated: if var name starts with '$', it does this, if '$$' it does that. It's syntax is not "natural", so many rules and "workarounds" ([ ln.upper() for ln in !ls ] -- syntax error)


  • memory usage setting

    - by user127610
    everybody,the memory usage is too much,what can i do? top - 12:54:37 up 7 days, 4:38, 1 user, load average: 0.00, 0.00, 0.00 Tasks: 18 total, 2 running, 16 sleeping, 0 stopped, 0 zombie Cpu(s): 0.0%us, 0.0%sy, 0.0%ni,100.0%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st Mem: 1048800k total, 917424k used, 131376k free, 0k buffers Swap: 0k total, 0k used, 0k free, 0k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 1 root 15 0 2840 1364 1204 S 0.0 0.1 0:02.17 init 1161 root 14 -4 2320 600 420 S 0.0 0.1 0:00.00 udevd 1391 root 18 0 35512 1288 948 S 0.0 0.1 0:03.53 rsyslogd 1409 root 15 0 8432 1164 700 S 0.0 0.1 0:03.87 sshd 1416 root 18 0 3156 868 692 S 0.0 0.1 0:00.00 xinetd 1423 root 18 0 8672 716 292 S 0.0 0.1 0:00.00 saslauthd 1424 root 18 0 8672 488 64 S 0.0 0.0 0:00.00 saslauthd 1431 root 15 0 7020 1168 616 S 0.0 0.1 0:00.99 crond 1450 root 25 0 6236 1444 1228 S 0.0 0.1 0:00.05 sh 3328 mysql 15 0 799m 42m 4892 S 0.0 4.1 0:02.07 mysqld 15479 root 15 0 11304 3332 2688 R 0.0 0.3 0:00.06 sshd 15482 root 15 0 6372 1688 1404 S 0.0 0.2 0:00.00 bash 15497 root 15 0 2536 1044 864 R 0.0 0.1 0:00.00 top 20137 www 15 0 20672 14m 864 S 0.0 1.4 0:00.87 nginx 22351 www 16 0 52324 26m 9244 S 0.0 2.6 0:13.94 php-fpm 24231 www 16 0 51928 25m 9260 S 0.0 2.5 0:13.52 php-fpm 32682 root 15 0 35832 3228 864 S 0.0 0.3 0:02.18 php-fpm 32686 root 18 0 7368 1616 888 S 0.0 0.2 0:00.00 nginx


  • Script not dumping files to the expected path

    - by user3319390
    I have a script needs to be dump matching cname from my file contains and then matching scode to dump file to $cname/$year/$month/$day/ into files like access and error logs #!/bin/sh #base_dir="/home/vizion/Desktop" path="/home/vizion/Desktop/adn_DF9D_20140515_0005.log" name=$(basename "$path" ".log") for x in *.log; do year=${x:9:4}; month=${x:13:2}; day=${x:15:2}; done while read -r line do cname=$(echo ${line} | awk '{split($7,c,"/"); print c[3]}') scode=$(echo ${line} | awk -F"[ ]" '{print $9}') [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/" [[ ( ${scode} -ge 200 ) && ( ${scode} -le 399 ) ]] && { # [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day/" echo ${line} >> /home/vizion/Desktop/$cname/$year/$month/$day/${cname}_${name}_access.log } [[ ( ${scode} -ge 400 ) && ( ${scode} -le 599 ) ]] && { [[ ! -d "$cname/$year/$month/$day" ]] && mkdir -p "$cname/$year/$month/$day" echo ${line} >> ${cname}_${name}_error.log } done < $path i am able to filter logs but not not dumping the exact location It's going other locations suggest to me correction in script


  • How to parse a CSV file containing serialized PHP? [migrated]

    - by garbetjie
    I've just started dabbling in Perl, to try and gain some exposure to different programming languages - so forgive me if some of the following code is horrendous. I needed a quick and dirty CSV parser that could receive a CSV file, and split it into file batches containing "X" number of CSV lines (taking into account that entries could contain embedded newlines). I came up with a working solution, and it was going along just fine. However, as one of the CSV files that I'm trying to split, I came across one that contains serialized PHP code. This seems to break the CSV parsing. As soon as I remove the serialization - the CSV file is parsed correctly. Are there any tricks I need to know when it comes to parsing serialized data in CSV files? Here is a shortened sample of the code: use strict; use warnings; my $csv = Text::CSV_XS->new({ eol => $/, always_quote => 1, binary => 1 }); my $out; my $in; open $in, "<:encoding(utf8)", "infile.csv" or die("cannot open input file $inputfile"); open $out, ">outfile.000"; binmode($out, ":utf8"); while (my $line = $csv->getline($in)) { $lines++; $csv->print($out, $line); } I'm never able to get into the while loop shown above. As soon as I remove the serialized data, I suddenly am able to get into the loop. Edit: An example of a line that is causing me trouble (taken straight from Vim - hence the ^M): "26","other","1","20,000 Subscriber Plan","Some text here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","37","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","6","1" "27","other","1","35,000 Subscriber Plan","Some test here.^M\ Some more text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","38","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","7","1" "28","other","1","50,000 Subscriber Plan","Some text here.^M\ Some more 
text","on","","18","","0","","0","0","recurring","0","","payment","totalsend","0","tsadmin","R34bL9oq","39","0","0","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:18:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"73\";i:17;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","8","1""73","other","8","10,000,000","","","","0","","0","","0","0","recurring","0","","payment","","0","","","75","0","10000000","","","","","","","","","","","","","","","","","","","","","","","0","0","0","a:17:{i:0;s:1:\"3\";i:1;s:1:\"2\";i:2;s:2:\"59\";i:3;s:2:\"60\";i:4;s:2:\"61\";i:5;s:2:\"62\";i:6;s:2:\"63\";i:7;s:2:\"64\";i:8;s:2:\"65\";i:9;s:2:\"66\";i:10;s:2:\"67\";i:11;s:2:\"68\";i:12;s:2:\"69\";i:13;s:2:\"70\";i:14;s:2:\"71\";i:15;s:2:\"72\";i:16;s:2:\"74\";}","","","0","0","","0","0","0.0000","0.0000","0","","","0.00","","14","0"


  • S3sync not working

    - by user57833
    Hello, I managed to get s3sync to upload my test folder to Amazon S3 and can see it in the MWS Managment Console. Downloading the data back to a test folder results in the following error message: root@mybucketname:/var/s3sync# ./week_download.sh s3Prefix backups/weekly localPrefix /var/s3sync/testdown/weekly s3TreeRecurse mybucketname backups/weekly Creating new connection Trying command list_bucket mybucketname prefix backups/weekly max-keys 200 delimiter / with 100 retries le ft Response code: 200 prefix found: / s3TreeRecurse mybucketname backups/weekly / Trying command list_bucket mybucketname prefix backups/weekly/ max-keys 200 delimiter / with 100 retries l eft Response code: 200 S3 item backups/weekly/ s3 node object init. Name: Path:backups/weekly Size:0 Tag:d41d8cd98f00b204e9800998ecf8427e Date:Fri O ct 29 14:21:53 UTC 2010 local node object init. Name: Path:/var/s3sync/testdown/weekly/ Size: Tag: Date: source: dest: Update node s3sync.rb:638:in initialize': No such file or directory - /var/s3sync/testdown/weekly/.s3syncTemp (E rrno::ENOENT) from s3sync.rb:638:inopen' from s3sync.rb:638:in updateFrom' from s3sync.rb:393:inmain' from s3sync.rb:735 I am using the following download script: !/bin/bash script to download local directory upto s3 cd /var/s3sync/ export AWS_ACCESS_KEY_ID=nothing to see here export AWS_SECRET_ACCESS_KEY=nothing to see here export SSL_CERT_DIR=/var/s3sync/certs ruby s3sync.rb -r -v -d --progress --make-dirs mybucket:backups/weekly /var/s3sync/testdown copy and modify line above for each additional folder to be synced Any idea's? Does the download script need to download to the source of Amazon S3 i.e testup folder? Was hoping on the instance of a complete failure and the original folders won't exist that it would just download everything from me. Note: changed my bucket names to "mybucketname" so that it is not public!


  • PassEnv does not find ENV variables

    - by quodlibetor
    I've got this /etc/profile.d/myfile.sh: export MYVAR=myval I also have a PassEnv MYVAR line in a <virtualhost> section of an apache conf dir. That lets me do things like: $ echo $MYVAR myval $ python >>> import os; os.getenv('MYVAR') 'myval' $ sudo echo $MYVAR myval $ sudo -i root# echo $MYVAR myval But then, despite that being the case I get: root# /sbin/service httpd restart /sbin/service httpd restart Stopping httpd: [ OK ] Starting httpd: [Mon Oct 22 14:44:02 2012] [warn] PassEnv variable MYVAR was undefined [ OK ] And all of my attempts to access MYVAR from within my wsgi scripts just don't work. Thoughts? Am I doing something obviously wrong? EDIT for more detail I've got a swarm of computers/VMs and a swarm of developers working on a swarm of projects. I need a simple central place to keep environment information, the most common is the "environment" (dev/stage/prod). The scheme that we've got (modifying *.wsgi programmatically) is turning out to be more fragile than we'd like. The main options that I see are: put things in the shell environment put things in other config files Getting things into the shell environment is the best, because we won't need to write yet more duplicated "what is my environment" code.


  • Allow outgoing connections for DNS

    - by Jimmy
    I'm new to IPtables, but I am trying to setup a secure server to host a website and allow SSH. This is what I have so far: #!/bin/sh i=/sbin/iptables # Flush all rules $i -F $i -X # Setup default filter policy $i -P INPUT DROP $i -P OUTPUT DROP $i -P FORWARD DROP # Respond to ping requests $i -A INPUT -p icmp --icmp-type any -j ACCEPT # Force SYN checks $i -A INPUT -p tcp ! --syn -m state --state NEW -j DROP # Drop all fragments $i -A INPUT -f -j DROP # Drop XMAS packets $i -A INPUT -p tcp --tcp-flags ALL ALL -j DROP # Drop NULL packets $i -A INPUT -p tcp --tcp-flags ALL NONE -j DROP # Stateful inspection $i -A INPUT -m state --state NEW -p tcp --dport 22 -j ACCEPT # Allow established connections $i -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT # Allow unlimited traffic on loopback $i -A INPUT -i lo -j ACCEPT $i -A OUTPUT -o lo -j ACCEPT # Open nginx $i -A INPUT -p tcp --dport 443 -j ACCEPT $i -A INPUT -p tcp --dport 80 -j ACCEPT # Open SSH $i -A INPUT -p tcp --dport 22 -j ACCEPT However I've locked down my outgoing connections and it means I can't resolve any DNS. How do I allow that? Also, any other feedback is appreciated. James


  • How to configure iptables to use apt-get in a server?

    - by segaco
    I'm starting using iptables (newbie) to protect a linux server (specifically Debian 5.0). Before I configure the iptables settings, I can use apt-get without a problem. But after I configure the iptables, the apt-get stop working. For example I use this script in iptables: #!/bin/sh IPT=/sbin/iptables ## FLUSH $IPT -F $IPT -X $IPT -t nat -F $IPT -t nat -X $IPT -t mangle -F $IPT -t mangle -X $IPT -P INPUT DROP $IPT -P OUTPUT DROP $IPT -P FORWARD DROP $IPT -A INPUT -i lo -j ACCEPT $IPT -A OUTPUT -o lo -j ACCEPT $IPT -A INPUT -p tcp --dport 22 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 22 -j ACCEPT $IPT -A INPUT -p tcp --dport 80 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 80 -j ACCEPT $IPT -A INPUT -p tcp --dport 443 -j ACCEPT $IPT -A OUTPUT -p tcp --sport 443 -j ACCEPT # Allow FTP connections @ port 21 $IPT -A INPUT -p tcp --sport 21 -m state --state ESTABLISHED -j ACCEPT $IPT -A OUTPUT -p tcp --dport 21 -m state --state NEW,ESTABLISHED -j ACCEPT # Allow Active FTP Connections $IPT -A INPUT -p tcp --sport 20 -m state --state ESTABLISHED,RELATED -j ACCEPT $IPT -A OUTPUT -p tcp --dport 20 -m state --state ESTABLISHED -j ACCEPT # Allow Passive FTP Connections $IPT -A INPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED -j ACCEPT $IPT -A OUTPUT -p tcp --sport 1024: --dport 1024: -m state --state ESTABLISHED,RELATED -j ACCEPT #DNS $IPT -A OUTPUT -p udp --dport 53 --sport 1024:65535 -j ACCEPT $IPT -A INPUT -p tcp --dport 1:1024 $IPT -A INPUT -p udp --dport 1:1024 $IPT -A INPUT -p tcp --dport 3306 -j DROP $IPT -A INPUT -p tcp --dport 10000 -j DROP $IPT -A INPUT -p udp --dport 10000 -j DROP then when I run apt-get I obtain: core:~# apt-get update 0% [Connecting to ftp.us.debian.org] [Connecting to security.debian.org] [Conne and it stalls. What rules I need to configure to make it works. Thanks


  • How do I get Tomcat 7 to start up faster in Linux CentOS kernel version 2.6.18?

    - by user1786833
    I am experiencing a problem with slow start up times for Tomcat 7. I have done some testing by tweaking configuration parameters both on Linux CentOS kernel version 2.6.18 and on Windows 7 using this link as my primary guide: http://wiki.apache.org/tomcat/HowTo/FasterStartUp and managed only a modest improvement. The improvements seemed to result when I added metadata-complete="true" attribute to the element of my WEB-INF/web.xml file and when I added the names of almost all the jars we use for our application to the tomcat.util.scan.DefaultJarScanner.jarsToSkip property in conf/catalina.properties file. I've also used this JAVA_OPTS in the setenv.sh file: JAVA_OPTS="$JAVA_OPTS -server -Xms1536m -Xmx1536m -XX:MaxPermSize=256m -XX:NewRatio=2 -XX:+UseParallelGC -XX:ParallelGCThreads=2 -Dsun.rmi.dgc.client.gcInterval=1800000 -Dsun.rmi.dgc.server.gcInterval=1800000 -Dorg.apache.jasper.runtime.BodyContentImpl.LIMIT_BUFFER=true " but actually saw my start up times increase slightly. Our QA and production environments are on Linux CentOS so I'm hoping to get more information on improving Tomcat 7 start up times in that environment. My primary role is java developer and I don't have much system administration experience so I appreciate any input. Thank you for your time and suggestions.


  • Appropriate design / technologies to handle dynamic string formatting?

    - by Mark W
    recently I was tasked with implementing a way of adding support for versioning of hardware packet specifications to one of our libraries. First a bit of information about the project. We have a hardware library which has classes for each of the various commands we support sending to our hardware. These hardware modules are essentially just lights with a few buttons, and a 2 or 4 digit display. The packets typically follow the format {SOH}AADD{ETX}, where AA is our sentinel action code, and DD is the device ID. These packet specs are different from one command to the next obviously, and the different firmware versions we have support different specifications. For example, on version 1 an action code of 14 may have a spec of {SOH}AADDTEXT{ETX} which would be AA = 14 literal, DD = device ID, TEXT = literal text to display on the device. Then we come out with a revision with adds an extended byte(s) onto the end of the packet like this {SOH}AADDTEXTE{ETX}. Assume the TEXT field is fixed width for this example. We have now added a new field onto the end which could be used to say specify the color or flash rate of the text/buttons. Currently this java library only supports one version of the commands, the latest. In our hardware library we would have a class for this command, say a DisplayTextArgs.java. That class would have fields for the device ID, the text, and the extended byte. The command class would expose a method which generates the string ("{SOH}AADDTEXTE{ETX}") using the value from the class. In practice we would create the Args class as needed, populate the fields, call the method to get our packet string, then ship that down across the CAN. Some of our other commands specification can vary for the same command, on the same version, depending on some runtime state. For example, another command for version 1 may be {SOH}AA{ETX}, where this action code clears all of the modules behind a specific controller device of their text. We may overload this packet to have option fields with multiple meanings like {SOH}AAOC{ETX} where OC is literal text, which tells the controller to only clear text on a specific module type, and to leave the others alone, or the spec could also have an option format of {SOH}AADD{ETX} to clear the text off a a specific device. Currently, in the method which generates the packet string, we would evaluate fields on the args class to determine which spec we will be using when formatting the packet. For this example, it would be along the lines of: if m_DeviceID != null then use {SOH}AADD{ETX} else if m_ClearOCs == true then use {SOH}AAOC{EXT} else use {SOH}AA{ETX} I had considered using XML, or a database to store String.format format strings, which were linked to firmware version numbers in some table. We would load them up at startup, and pass in the version number of the hardwares firmware we are currently using (I can query the devices for their firmware version, but the version is not included in all packets as part of the spec). This breaks down pretty quickly because of the dynamic nature of how we select which version of the command to use. I then considered using a rule engine to possibly build out expressions which could be interpreted at runtume, to evaluate the args class's state, and from that select the appropriate format string to use, but my brief look at rule engines for java scared me away with its complexity. While it seems like it might be a viable solution, it seems overly complex. So this is why I am here. 
I wouldn't say design is my strongest skill, and im having trouble figuring out the best way to approach this problem. I probably wont be able to radically change the args classes, but if the trade off was good enough, I may be able to convince my boss that the change is appropriate. What I would like from the community is some feedback on some best practices / design methodologies / API or other resources which I could use to accomplish: Logic to determine which set of commands to use for a given firmware version Of those command, which version of each command to use (based on the args classes state) Keep the rules logic decoupled from the application so as to avoid needing releases for every firmware version Be simple enough so I don't need weeks of study and trial and error to implement effectively.


  • Ubuntu Launcher Items Don't Have Correct Environment Vars under NX

    - by ivarley
    I've got an environment variable issue I'm having trouble resolving. I'm running Ubuntu (Karmic, 9.10) and coming in via NX (NoMachine) on a Mac. I've added several environment variables in my .bashrc file, e.g.: export JAVA_HOME=$HOME/dev/tools/Linux/jdk/jdk1.6.0_16/ Sitting at the machine, this environment variable is available on the command line, as well as for apps I launch from the Main Menu. Coming in over NX, however, the environment variable shows up correctly on the command line, but NOT when I launch things via the launcher. As an example, I created a simple shell script called testpath in my home folder: #!/bin/sh echo $PATH && sleep 5 quit I gave it execute privileges: chmod +x testpath And then I created a launcher item in my Main Menu that simply runs: ./testpath When I'm sitting at the computer, this launcher runs and shows all the stuff I put into the $PATH variable in my .bashrc file (e.g. $JAVA_HOME, etc). But when I come in over NX, it shows a totally different value for the $PATH variable, despite the fact that if I launch a terminal window (still in NX), and type export $PATH, it shows up correctly. I assume this has to do with which files are getting loaded by the windowing system over NX, and that it's some other file. But I have no idea how to fix it. For the record, I also have a .profile file with the following in it: # if running bash if [ -n "$BASH_VERSION" ]; then # include .bashrc if it exists if [ -f "$HOME/.bashrc" ]; then . "$HOME/.bashrc" fi fi


  • Rethinking Oracle Optimizer Statistics for P6 Part 2

    - by Brian Diehl
    In the previous post (Part 1), I tried to draw some key insights about the relationship between P6 and Oracle Optimizer Statistics.  The first is that average cardinality has the greatest impact on query optimization and that the particular queries generated by P6 are more likely to use this average during calculations. The second is that these are statistics that are unlikely to change greatly over the life of the application. Ultimately, our goal is to get the best query optimization possible.  Or is it? Stability No application administrator wants to get the call at 9am that their application users cannot get there work done because everything is running slow. This is a possibility with a regularly scheduled nightly collection of statistics. It may not just be slow performance, but a complete loss of service because one or more queries are optimized poorly. Ideally, this should not be the case. The database optimizer should make better decisions with more up-to-date data. Better statistics may give incremental performance benefit. However, this benefit must be balanced against the potential cost of system down time.  It is stability that we ultimately desire and not absolute optimal performance. We do want the benefit from more accurate statistics and better query plans, but not at the risk of an unusable system. As a result, I've developed the following methodology around managing database statistics for the P6 database.  1. No Automatic Re-Gathering - The daily, weekly, or other interval of statistic gathering is unlikely to be beneficial. Quite the opposite. It is more likely to cause problems. 2. Smart Re-Gathering - The time to collect statistics is when things have changed significantly. For a new installation of P6, this is happening more often because the data is growing from a few rows to thousands and more. But for a mature system, the data is not changing significantly from week-to-week. There are times to collect statistics: New releases of the application Changes in the underlying hardware or software versions (ex. new Oracle RDBMS version) When additional user groups are added. The new groups may use the software in significantly different ways. After significant changes in the data. This may be monthly, quarterly or yearly.  3. Always Test - If you take away one thing from this post, it would be to always have a plan to test after changing statistics. In reality, statistics can be collected as often as you desire provided there are tests in place to verify that performance is the same or better. These might be automated tests or simply a manual script of application functions. 4. Have a Way Out - Never change the statistics without a way to return to the previous set. Think of the statistics as one part of the overall application code that also includes the source code--both application and RDBMS. It would be foolish to change to the new code without a way to get back to the previous version. In the final post, I will talk about the actual script I created for P6 PMDB and possible future direction for managing query performance. 


  • Debian 100% cpu every 30 minutes but not loggable?

    - by user654123
    I have a Debian 7 x64 machine running with Digital Ocean that has every 30 Minutes a 100% cpu usage for about 1 minute. A couple of days ago it stayed there for a couple of hours so the server finally crashed and I had to repair my Mysql databases. The server is a pure webserver running apache2 and Mysql. I tried tracing which processes use the cpu but with no luck. The script I used: #!/bin/sh while true; do ps -A -eo pcpu,pid,user,args | sort -k 1 -r | head -3 >> proclog.txt; echo "\n" >> proclog.txt; sleep 2; done I was monitoring htop as well while this was happening, but the top processess' cpu usage didn't add up to ~15% even though htop's cpu meter showed constant 100%. htop was configured to show all users' processess, user- and kernel-threads. Edit: By stopping Apache2 & Mysql prior to the expected 100% usage I can tell both are not responsible for it. The 100% usage occurred anyway. This is what the graph looked like the past hours:


  • Can I use Upstart to start a script which requires the user's X session?

    - by ledneb
    I wrote a script which greps through the output of synclient to determine whether a laptop's touchpad has miraculously turned itself off (Ubuntu seems to /love/ doing this recently) and, if so, turns it back on. The script is something like this: #!/bin/bash while true ; do if [ `synclient | grep -e"TouchpadOff[\s]*1" | wc -l` -ge 1 ] ; then synclient TouchpadOff=0 fi sleep(3) done (I don't have the laptop to hand right now but you get the point! I will update later when I'm at my laptop if that's incorrect) So I tried running this as an upstart script so my touchpad can heal itself without any interaction. But it seems synclient doesn't find the current user's X session when my script is upstart'ed. I tried running it by using something like su -c myscript.sh ledneb in my script stanza, but to no avail. Should I be looking in the direction of /etc/X11/xinit/xinitrc rather than upstart? Is there a proper way to have this script run in the context of the current (or even a hard-coded) user's x session?


  • What needs to be considered when setting up for Linux Development? [closed]

    - by user123586
    I want to set up a box for Linux development. I have a working linux install with the usual toolchain and an IDE. I'm looking for advice on how to approach structuring accounts and folders for development. As the Perl folks say "There's always more than one way to do it." Left to my own devices, I'll come up with several unproductive ways of doing it before figuring out what an experienced Linux programmer would think obvious. I'm not looking for instructions to follow for a specific set of tools or a specific software package. Instead, I'm looking for insight into what decisions need to be made and how to make them, with understanding of the advantages and disadvantages of each individual choice. These are some of the questions that come up: Where to put sources Where to put built object files and libraries where to install what to set in environment variables what compiler flags matter and how do you manage them across several types of builds what configuration entries to make in an IDE how to manage libraries to support multiple environments how to handle different build versions such as debug vs release, or cross platform builds If you are an experienced Linux developer, the answers to these questions may seem trivial and obvious. I'd like to learn to make decisions about these questions that result in as little manual configuration as possible, given some existing sources, a particular IDE, or no IDE at all, a paticular set of development libraries etc. At this point you're probably thinking: Can you be more specific? Sure. But remember that I'm trying to learn how to think about this stuff, not just follow a recipie for a specific set of results. Example: Setup a project that uses CMake for some of its components, autogen.sh followed by configure for others and just configure for a few more: debug builds without an IDE debug builds in NetBeans debug builds in Eclipse debug build in Visual Studio all of the above with release builds for Linux, Mac and Windows. So... **What are your thoughts on an approach that works for all four? Do you have any advice on what to read?**


  • environment variables generated by at command

    - by Jordan Arseno
    I'm inspecting /var/spool/cron/atjobs/a001cf01570e44 with cat, after running the at command from PHP using exec(). It looks like at has prepended the script with lots of APACHE environment variables. #!/bin/sh # atrun uid=33 gid=33 # mail www-data 0 umask 22 APACHE_RUN_DIR=/var/run/apache2; export APACHE_RUN_DIR APACHE_PID_FILE=/var/run/apache2.pid; export APACHE_PID_FILE PATH=/usr/local/bin:/usr/bin:/bin; export PATH APACHE_LOCK_DIR=/var/lock/apache2; export APACHE_LOCK_DIR LANG=C; export LANG APACHE_RUN_USER=www-data; export APACHE_RUN_USER APACHE_RUN_GROUP=www-data; export APACHE_RUN_GROUP APACHE_LOG_DIR=/var/log/apache2; export APACHE_LOG_DIR PWD=/home/jordanarseno/webroot/public_html/myapp; export PWD cd /home/jordanarseno/webroot/public\_html/myapp || { echo 'Execution directory inaccessible' >&2 exit 1 } curl -k http://localhost/myapp/crons/this_action/3 The last line is the only real command I sent along with at via stdin. What is the purpose of these variables? Where is this procedure stored?


  • Log Problem and bash script

    - by GvWorker
    Hello Guys, I have 11 Debian servers running on rackspace cloud hosting. All running VHCS2 for hosting management. 1 server is used for application and 10 are used for only smtp. My question is regarding smtp servers. Each server hosted 1 domain. My problem is when my client use smtp there's a log created in this directory /var/log/ but within 24 hours drives are full and server refuse all smtp connections. Then I deleted the logs and ran following command to check the disk space. df -h but it shows hdd still full and server is still refusing the smtp connections. Then I ran following command to see the truth du --max-depth=1 -h It shows the truth. The real disk space used. Then I rebooted the server and now server working fine. But after few hours same situation happened. Then I created the following script. #!/bin/sh rm -fr /var/log/* rm -fr /var/log/apache2/*.log rm -fr /var/log/apache2/*.log.* rm -fr /var/log/apache2/users/* rm -fr /var/log/apache2/backup/* reboot It worked for days but after that logs are again filling the hdd. Now I want the following solutions. If anybody can help me. When I delete files from server hdd will free up without rebooting Log should be in specific range. Like a specific size of file where old data overwrite with new data


  • upstart scripts: run a task after networking goes up

    - by The Journeyman geek
    I'm working on moving my current server setup to newer hardware, and migrating from ubuntu karmic koala to lucid lynx. Currently i'm using gw6c (compiled from the gogo6 website, as opposed to the version from the repositories) to get ipv6 access for my systems. On the karmic koala system, i used simple init.d script to get the ipv6 client started #! /bin/sh /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf I figured since this runs at any runlevel, it should translate to respawn console none start on startup stop on shutdown script exec /usr/local/gw6c/bin/gw6c -f /usr/local/gw6c/bin/gw6c.conf emit free6_ipv6_started end script this works fine started from initctrl, but it apparently fails to start when it boots. - its status being stop/waiting. It works fine (and respawns) when started otherwise.Any ideas on where i'm going wrong, and what would be the appropriate 'start on' arguement? EDIT: the exact error is 'init: gw6c main process (xxx) ended with status 8' followed by the process respawning , with xxx being a PID i suspect. I'm also half suspecting this is cause gw6c starts before networking does, and i need my eth0 up before gw6c is


  • Lost user account for Windows Vista

    - by annelie
    Hello, I'm trying to help a friend who's lost her user account in Vista. I know there's supposed to be a way you can boot the computer from the vista installation disc and create an admin account you can later login with, but her installation disc is in Australia and her laptop in London. Is there any other way to get in? Or would it be better to try and access just the harddrive? She's mainly concerned with getting all her data off it. As for how she lost the account, I'll let her explain in her words. :) My computer basically got some virus and now is up sh*t creek. it told me i had this cryptic thingy majiggy was missing and then this fake virus told me i needed to scan my computer. SO i tried to do malware thing but it kept shutting my computer down. ANYWAY...now its it will only open up with 'launch startup repair' and has got rid of my settings for logging in and wants me to be 'other user' which i have no password or username for'...so basically im stuffed. This is Windows Vista by the way. Thanks, Annelie


  • Fedora 17 transparent Ethernet Bridge not forwarding IP traffic

    - by mcdoomington
    I am running on Fedora 17 with the latest ebtables and have been trying to setup a transparent bridge - using the following script, I send a ping through the bridged host and only see the requests on the bridge (among other traffic from eth0), BUT, arps and arp replies are making it through. My host is setup - Client 192.168.1.10 <-- eth0 -- eth2 192.168.1.20 Ethernet script: #!/bin/sh brctl addbr br0; brctl stp br0 on; brctl addif br0 eth0; brctl addif br0 eth2; (ifdown eth0 1>/dev/null 2>&1;); (ifdown eth2 1>/dev/null 2>&1;); ifconfig eth0 0.0.0.0 up; ifconfig eth2 0.0.0.0 up; echo "1" > /proc/sys/net/ipv4/ip_forward; ebtables -P INPUT DROP ebtables -P FORWARD DROP ebtables -P OUTPUT DROP ebtables -A FORWARD -p ipv4 -j ACCEPT ebtables -A FORWARD -p arp -j ACCEPT Any assistance would be great!


  • Can't get PHP to work with my Nginx virtual host. Keeps returning "No input file specified"

    - by steve
    I'm trying to get phpmyadmin up and running on my server. Here's the nginx vhost for it: server { listen 80; server_name server.mydomain.net; location /phpmyadmin/ { fastcgi_pass 127.0.0.1:9000; fastcgi_index index.php; fastcgi_param SCRIPT_FILENAME /usr/share/phpmyadmin$fastcgi_script_name; include /opt/nginx/conf/fastcgi_params; alias /usr/share/phpmyadmin/; } root /opt/nginx/html/; } Here's my fastcgi_params file fastcgi_param GATEWAY_INTERFACE CGI/1.1; fastcgi_param SERVER_SOFTWARE nginx; fastcgi_param QUERY_STRING $query_string; fastcgi_param REQUEST_METHOD $request_method; fastcgi_param CONTENT_TYPE $content_type; fastcgi_param CONTENT_LENGTH $content_length; fastcgi_param REQUEST_URI $request_uri; fastcgi_param DOCUMENT_URI $document_uri; fastcgi_param DOCUMENT_ROOT $document_root; fastcgi_param SERVER_PROTOCOL $server_protocol; fastcgi_param REMOTE_ADDR $remote_addr; fastcgi_param REMOTE_PORT $remote_port; fastcgi_param SERVER_ADDR $server_addr; fastcgi_param SERVER_PORT $server_port; fastcgi_param SERVER_NAME $server_name; I compiled lighthttpd so I could pull out spawn-fcgi. That is now sitting in /usr/local/bin and is accompanied by my php5-cgi launcher which looks like: #!/bin/sh /usr/local/bin/spawn-fcgi -a 127.0.0.1 -p 9000 -u www-data -C 2 -f /usr/bin/php5-cgi I run this and can see that it's successfully launched by doing a ps aux | grep php. However, whenever I try to open phpmyadmin, I get the error "No input file specified" What am I doing wrong? :/


  • TFS 2012 API Create Alert Subscriptions

    - by Bob Hardister
    Originally posted on: http://geekswithblogs.net/BobHardister/archive/2013/07/24/tfs-2012-api-create-alert-subscriptions.aspxThere were only a few post on this and I felt like really important information was left out: What the defaults are How to create the filter string Here’s the code to create the subscription. Get the Collection public TfsTeamProjectCollection GetCollection(string collectionUrl) { try { //connect to the TFS collection using the active user TfsTeamProjectCollection tpc = new TfsTeamProjectCollection(new Uri(collectionUrl)); tpc.EnsureAuthenticated(); return tpc; } catch (Exception) { return null; } } Use Impersonation Because my app is used to create “support tickets” as stories in TFS, I use impersonation so the subscription is setup for the “requester.”  That way I can take all the defaults for the subscription delivery preferences. public TfsTeamProjectCollection GetCollectionImpersonation(string collectionUrl, string impersonatingUserAccount) { // see: http://blogs.msdn.com/b/taylaf/archive/2009/12/04/introducing-tfs-impersonation.aspx try { TfsTeamProjectCollection tpc = GetCollection(collectionUrl); if (!(tpc == null)) { //get the TFS identity management service (v2 is 2012 only) IIdentityManagementService2 ims = tpc.GetService<IIdentityManagementService2>(); //look up the user we want to impersonate TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, impersonatingUserAccount, MembershipQuery.None, ReadIdentityOptions.None); //create a new connection using the impersonated user account //note: do not ensure authentication because the impersonated user may not have //windows authentication at execution if (!(identity == null)) { TfsTeamProjectCollection itpc = new TfsTeamProjectCollection(tpc.Uri, identity.Descriptor); return itpc; } else { //the user account is not found return null; } } else { return null; } } catch (Exception) { return null; } } Create the Alert Subscription public bool SetWiAlert(string collectionUrl, string projectName, int wiId, string emailAddress, string userAccount) { bool setSuccessful = false; try { //use impersonation so the event service creating the subscription will default to //the correct account: otherwise domain ambiguity could be a problem TfsTeamProjectCollection itpc = GetCollectionImpersonation(collectionUrl, userAccount); if (!(itpc == null)) { IEventService es = itpc.GetService(typeof(IEventService)) as IEventService; DeliveryPreference deliveryPreference = new DeliveryPreference(); //deliveryPreference.Address = emailAddress; deliveryPreference.Schedule = DeliverySchedule.Immediate; deliveryPreference.Type = DeliveryType.EmailHtml; //the following line does not work for two reasons: //string filter = string.Format("\"ID\" = '{0}' AND \"Authorized As\" <> '[Me]'", wiId); //1. the create fails because there is a space between Authorized As //2. 
the explicit query criteria are all incorrect anyway // see uncommented line for what does work: you have to create the subscription mannually // and then get it to view what the filter string needs to be (see following commented code) //this works string filter = string.Format("\"CoreFields/IntegerFields/Field[Name='ID']/NewValue\" = '12175'" + " AND \"CoreFields/StringFields/Field[Name='Authorized As']/NewValue\"" + " <> '@@MyDisplayName@@'", projectName, wiId); string eventName = string.Format("<PT N=\"ALM Ticket for Work Item {0}\"/>", wiId); es.SubscribeEvent("WorkItemChangedEvent", filter, deliveryPreference, eventName); ////use this code to get existing subscriptions: you can look at manually created ////subscriptions to see what the filter string needs to be //IIdentityManagementService2 ims = itpc.GetService<IIdentityManagementService2>(); //TeamFoundationIdentity identity = ims.ReadIdentity(IdentitySearchFactor.AccountName, // userAccount, // MembershipQuery.None, // ReadIdentityOptions.None); //var existingsubscriptions = es.GetEventSubscriptions(identity.Descriptor); setSuccessful = true; return setSuccessful; } else { return setSuccessful; } } catch (Exception) { return setSuccessful; } }


  • How can I change the color of the text in my iFrame? [closed]

    - by VinylScratch
    I have code here: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> <html> <head> <title>Frag United Banlist</title> </head> <body> <h1>Tekkit Banlist</h1> <?php // change these things $server = "server-host"; $dbuser = "correct-user"; $dbpass = "correct-password"; $dbname = "correct-database"; mysql_connect($server, $dbuser, $dbpass); mysql_select_db($dbname); $result = mysql_query("SELECT * FROM banlist ORDER BY id DESC"); //This will display the most recent by id edit this query how you see fit. Limit, Order, ect. echo "<table width=100% border=1 cellpadding=3 cellspacing=0>"; echo "<tr style=\"font-weight:bold\"> <td>ID</td> <td>User</td> <td>Reason</td> <td>Admin/Mod</td> <td>Time</td> <td>Ban Length</td> </tr>"; while($row = mysql_fetch_assoc($result)){ if($col == "#eeeeee"){ $col = "#ffffff"; }else{ $col = "#eeeeee"; } echo "<tr bgcolor=$col>"; echo "<td>".$row['id']."</td>"; echo "<td>".$row['user']."</td>"; echo "<td>".$row['reason']."</td>"; echo "<td>".$row['admin']."</td>"; //Convert Epoch Time to Standard format $datetime = date("F j, Y, g:i a", $row['time']); echo "<td>$datetime</td>"; $dateconvert = date("F j, Y, g:i a", $row['length']); if($row['length'] == "0"){ echo "<td>None</td>"; }else{ echo "<td>$dateconvert</td>"; } echo "<td>".$row['id']."</td>"; echo "</tr>"; } echo"</table>" ?> </div> </body></html> And I am trying to make it so that when I put it in this iframe: <iframe src="http://bans.fragunited.net/" width="100%" length="100%"><p>Your browser does not support iframes.</p></iframe> But if you go to this page, fragunited.net/bans, (not bans.fragunited.net) the text is black and I want it to be white so you can actually see it. Sorry for the large amount of code, however I don't know where you have to put the code to change the color.

