Search Results

Search found 1108 results on 45 pages for 'stats'.

Page 8/45

  • Formula for three competing heroes, each has one they can beat and one they're beaten by

    - by Georgiadis Abraam
    I am trying to design a game for a project I have. The main idea is: 3 types of heroes, 3 stats per hero. There are no levels involved, so the differences must come from the stats. Fight logic: type1hero has a good chance of beating type2hero, type2hero has a good chance of beating type3hero, and type3hero has a good chance of beating type1hero. For over a week I have been trying to find a stats-based formula that produces this, but I can't. I was playing with numbers yesterday and the results were decent, but I can't extract the formula out of it. Could you please guide me, or give me hints, on how I should start creating formulas for a no-level game that fulfils this fight logic?
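
    One way to get that cycle without levels is to give each type a signature stat and score every matchup by signature-stat advantage. A minimal sketch of the idea (my own illustration, not from the original post; the stat names and numbers are made up):

      # Each type is strong in one stat, and the strengths form a cycle:
      # type1's str beats type2's weak str, type2's agi beats type3's weak agi,
      # and type3's int beats type1's weak int.
      HEROES = {
          "type1": {"str": 7, "agi": 4, "int": 2},
          "type2": {"str": 2, "agi": 7, "int": 4},
          "type3": {"str": 4, "agi": 2, "int": 7},
      }

      def edge(attacker, defender):
          """How far the attacker's signature stat exceeds the defender's value in it."""
          a, d = HEROES[attacker], HEROES[defender]
          stat = max(a, key=a.get)            # the attacker's best stat
          return max(a[stat] - d[stat], 1)    # floor at 1 so the ratio stays defined

      def win_probability(a, b):
          """P(a beats b): compare the two sides' edges against each other."""
          ea, eb = edge(a, b), edge(b, a)
          return ea / (ea + eb)

      for a in HEROES:
          for b in HEROES:
              if a != b:
                  print(a, "vs", b, "->", round(win_probability(a, b), 3))

    With these numbers every hero wins its favoured matchup with probability 5/8 and loses the mirrored one with 3/8; widening or narrowing the stat spread tunes how strong the cycle is.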

    Read the article

  • Why does Google Analytics show false referrals?

    - by Peter Merrill
    Ever since Google revamped their Analytics interface I've been noticing a weird "bug" while viewing the "Real-Time" overview area. From this area I can obviously see live stats of visitors to my website, but when I visit my website by opening a new tab (Chrome) and typing the address manually, the real-time stats sometimes look like the image linked below. http://i.stack.imgur.com/mfniY.png Is there any reason why Google is saying that I was referred by Stack Overflow when I'm visiting my website from a new tab? Could this be something to do with how I installed the analytics on my site, or could this be an issue with browser cookies? Has anyone else noticed this? I am mainly concerned because in the standard reporting area of my Analytics panel my referral stats get thrown off every time I visit my own website.

    Read the article

  • Nginx + HAProxy + Thin + Rails - 503 Service Unavailable

    - by Luca G. Soave
    I don't know how to troubleshoot this. I get a "503 Service Unavailable" HTTP error for all nginx upstream proxy_pass calls to the HAProxy fast_thin and slow_thin listeners (server 127.0.0.1:3100 and server 127.0.0.1:3200), which load-balance over 6 Thin servers (127.0.0.1:3000 .. 3005). Static files like /blog are currently fine. The chain is: nginx on port 80 -> haproxy on 3100 and 3200 -> thin on 3000 .. 3005 -> Rails. Here is /etc/nginx/nginx.conf:

      user nginx;
      worker_processes 2;
      pid /var/run/nginx.pid;

      events {
          worker_connections 1024;
      }

      http {
          include /etc/nginx/mime.types;
          default_type application/octet-stream;
          log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                          '$status $body_bytes_sent "$http_referer" '
                          '"$http_user_agent" "$http_x_forwarded_for"';
          sendfile on;
          tcp_nopush on;
          keepalive_timeout 65;
          tcp_nodelay on;
          include /etc/nginx/conf.d/*.conf;
      }

    then /etc/nginx/conf.d/default.conf:

      upstream fast_thin { server 127.0.0.1:3100; }
      upstream slow_thin { server 127.0.0.1:3200; }

      server {
          listen 80;
          server_name www.gitwatcher.com;
          rewrite ^/(.*) http://gitwatcher.com/$1 permanent;
      }

      server {
          listen 80;
          server_name gitwatcher.com;
          access_log /var/www/gitwatcher/log/access.log;
          error_log /var/www/gitwatcher/log/error.log;
          root /var/www/gitwatcher/public;
          # index index.html;
          location /about { proxy_pass http://fast_thin; break; }
          location /trends { proxy_pass http://slow_thin; break; }
          location /categories { proxy_pass http://slow_thin; break; }
          location /signout { proxy_pass http://slow_thin; break; }
          location /auth/github { proxy_pass http://slow_thin; break; }
          location / {
              proxy_set_header X-Real-IP $remote_addr;
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header Host $http_host;
              proxy_redirect off;
              if (-f $request_filename/index.html) { rewrite (.*) $1/index.html break; }
              if (-f $request_filename.html) { rewrite (.*) $1.html break; }
              if (!-f $request_filename) { proxy_pass http://slow_thin; break; }
          }
      }

    then the HAProxy config file /etc/haproxy/haproxy.cfg:

      global
          log 127.0.0.1 local0
          log 127.0.0.1 local1 notice
          #log loghost local0 info
          maxconn 4096
          #chroot /usr/share/haproxy
          user haproxy
          group haproxy
          daemon
          #debug
          #quiet
          nbproc 1              # number of processing cores

      defaults
          log global
          retries 3
          maxconn 2000
          contimeout 5000
          mode http
          clitimeout 60000      # maximum inactivity time on the client side
          srvtimeout 30000      # maximum inactivity time on the server side
          timeout connect 4000  # maximum time to wait for a connection attempt to a server to succeed
          option httplog
          option dontlognull
          option redispatch
          option httpclose      # disable keepalive (HAProxy does not yet support the HTTP keep-alive mode)
          option abortonclose   # enable early dropping of aborted requests from pending queue
          option httpchk        # enable HTTP protocol to check on servers health
          option forwardfor     # enable insert of X-Forwarded-For headers
          balance roundrobin    # each server is used in turns, according to assigned weight
          stats enable          # enable web-stats at /haproxy?stats
          stats auth haproxy:pr0xystats  # force HTTP Auth to view stats
          stats refresh 5s      # refresh rate of stats page

      listen rails_proxy 127.0.0.1:3100
          # - equal weights on all servers
          # - maxconn will queue requests at HAProxy if limit is reached
          # - minconn dynamically scales the connection concurrency (bound by maxconn) depending on size of HAProxy queue
          # - check health every 20000 microseconds
          server web1 127.0.0.1:3000 weight 1 minconn 3 maxconn 6 check inter 20000
          server web1 127.0.0.1:3001 weight 1 minconn 3 maxconn 6 check inter 20000
          server web1 127.0.0.1:3002 weight 1 minconn 3 maxconn 6 check inter 20000

      listen slow_proxy 127.0.0.1:3200
          # cluster for slow requests, lower the queues, check less frequently
          server slow1 127.0.0.1:3003 weight 1 minconn 1 maxconn 3 check inter 40000
          server slow2 127.0.0.1:3004 weight 1 minconn 1 maxconn 3 check inter 40000
          server slow3 127.0.0.1:3005 weight 1 minconn 1 maxconn 3 check inter 40000

    and the Thin config file /etc/thin/gitwatcher.yml:

      ---
      chdir: /var/www/gitwatcher
      environment: production
      address: 0.0.0.0
      port: 3000
      timeout: 30
      log: log/thin.log
      pid: tmp/pids/thin.pid
      max_conns: 1024
      max_persistent_conns: 100
      require: []
      wait: 30
      servers: 6
      daemonize: true

    If I look into open listen ports, I get the following:

      root@fullness:/var/www/gitwatcher# lsof | grep TCP | egrep "nginx|haproxy|thin"
      nginx     834    root    8u IPv4   921 0t0 TCP *:http (LISTEN)
      nginx     835    nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
      nginx     837    nginx   8u IPv4   921 0t0 TCP *:http (LISTEN)
      haproxy   1908   haproxy 4u IPv4 11699 0t0 TCP localhost:3100 (LISTEN)
      haproxy   1908   haproxy 6u IPv4 11701 0t0 TCP localhost:3200 (LISTEN)

    and iptables -L gives me the following:

      Chain INPUT (policy DROP)
      target     prot opt source      destination
      ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
      ACCEPT     all  --  anywhere    anywhere     state RELATED,ESTABLISHED
      ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:22222
      ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:http
      ACCEPT     tcp  --  anywhere    anywhere     tcp dpt:https
      ACCEPT     all  --  anywhere    anywhere
      DROP       all  --  anywhere    anywhere

      Chain FORWARD (policy ACCEPT)
      target     prot opt source      destination

      Chain OUTPUT (policy ACCEPT)
      target     prot opt source      destination
      ACCEPT     all  --  anywhere    anywhere

    Any help?

    Read the article

  • Xml comparison in Python

    - by Gregg Lind
    Building on another SO question, how can one check whether two well-formed XML snippets are semantically equal? All I need is "equal" or not, since I'm using this for unit tests. In the system I want, these two would be equal (note the order of 'start' and 'end'):

      <?xml version='1.0' encoding='utf-8' standalone='yes'?>
      <Stats start="1275955200" end="1276041599">
      </Stats>

    and, with start and end reordered:

      <?xml version='1.0' encoding='utf-8' standalone='yes'?>
      <Stats end="1276041599" start="1275955200">
      </Stats>

    I have lxml and other tools at my disposal, and a simple function that only allows reordering of attributes would work fine as well!
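
    A minimal sketch of such a check using the standard library's ElementTree (lxml.etree would look almost identical): two elements are equal when their tags, trimmed text, and attribute dictionaries match, recursing over children in document order. Attribute order is ignored for free, because .attrib is a dict.

      import xml.etree.ElementTree as ET

      def elements_equal(e1, e2):
          """Recursive semantic comparison; attribute order is ignored,
          child order still matters."""
          if e1.tag != e2.tag:
              return False
          if (e1.text or "").strip() != (e2.text or "").strip():
              return False
          if e1.attrib != e2.attrib:
              return False
          if len(e1) != len(e2):
              return False
          return all(elements_equal(c1, c2) for c1, c2 in zip(e1, e2))

      def xml_equal(a, b):
          """Compare two XML documents given as bytes."""
          return elements_equal(ET.fromstring(a), ET.fromstring(b))

      print(xml_equal(
          b"<?xml version='1.0' encoding='utf-8' standalone='yes'?><Stats start='1275955200' end='1276041599'></Stats>",
          b"<?xml version='1.0' encoding='utf-8' standalone='yes'?><Stats end='1276041599' start='1275955200'></Stats>",
      ))  # -> True

    For unit tests this can sit behind a plain assert; if whitespace inside text nodes is significant, drop the .strip() calls.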

    Read the article

  • null reference exception in the code

    - by LifeH2O
    I am getting a NullReferenceException on the line "_attr.Append(xmlNode.Attributes["name"]);".

      namespace SMAS
      {
          class Profiles
          {
              private XmlTextReader _profReader;
              private XmlDocument _profDoc;
              private const string Url = "http://localhost/teamprofiles.xml";
              private const string XPath = "/teams/team-profile";
              public XmlNodeList Teams { get; private set; }
              private XmlAttributeCollection _attr;
              public ArrayList Team { get; private set; }

              public void GetTeams()
              {
                  _profReader = new XmlTextReader(Url);
                  _profDoc = new XmlDocument();
                  _profDoc.Load(_profReader);
                  Teams = _profDoc.SelectNodes(XPath);
                  foreach (XmlNode xmlNode in Teams)
                  {
                      _attr.Append(xmlNode.Attributes["name"]);
                  }
              }
          }
      }

    The teamprofiles.xml file looks like:

      <teams>
        <team-profile name="Australia">
          <stats type="Test">
            <span>1877-2010</span>
            <matches>721</matches>
            <won>339</won>
            <lost>186</lost>
            <tied>2</tied>
            <draw>194</draw>
            <percentage>47.01</percentage>
          </stats>
          <stats type="Twenty20">
            <span>2005-2010</span>
            <matches>32</matches>
            <won>18</won>
            <lost>12</lost>
            <tied>1</tied>
            <draw>1</draw>
            <percentage>59.67</percentage>
          </stats>
        </team-profile>
        <team-profile name="Bangladesh">
          <stats type="Test">
            <span>2000-2010</span>
            <matches>66</matches>
            <won>3</won>
            <lost>57</lost>
            <tied>0</tied>
            <draw>6</draw>
            <percentage>4.54</percentage>
          </stats>
        </team-profile>
      </teams>

    I am trying to extract the names of all teams into an ArrayList. Then I'll extract all stats of all teams and display them in my application. Can you please help me with that null reference exception?

    Read the article

  • is Boost Library's weighted median broken?

    - by user624188
    I confess that I am no expert in C++. I am looking for a fast way to compute a weighted median, which Boost seemed to have. But it seems I am not able to make it work.

      #include <iostream>
      #include <boost/accumulators/accumulators.hpp>
      #include <boost/accumulators/statistics/stats.hpp>
      #include <boost/accumulators/statistics/median.hpp>
      #include <boost/accumulators/statistics/weighted_median.hpp>

      using namespace boost::accumulators;

      int main()
      {
          // Define an accumulator set
          accumulator_set<double, stats<tag::median > > acc1;
          accumulator_set<double, stats<tag::median >, float> acc2;

          // push in some data ...
          acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

          acc2(0.1, weight=0.); acc2(0.2, weight=0.); acc2(0.3, weight=0.);
          acc2(0.4, weight=1.); acc2(0.5, weight=1.); acc2(0.6, weight=1.);

          // Display the results ...
          std::cout << "         Median: " << median(acc1) << std::endl;
          std::cout << "Weighted Median: " << median(acc2) << std::endl;
          return 0;
      }

    This produces the following output, which is clearly wrong:

      Median: 0.3
      Weighted Median: 0.3

    Am I doing something wrong? Any help will be greatly appreciated.

    However, the weighted sum works correctly. @glowcoder: The weighted sum works perfectly fine like this:

      #include <iostream>
      #include <boost/accumulators/accumulators.hpp>
      #include <boost/accumulators/statistics/stats.hpp>
      #include <boost/accumulators/statistics/sum.hpp>
      #include <boost/accumulators/statistics/weighted_sum.hpp>

      using namespace boost::accumulators;

      int main()
      {
          // Define an accumulator set
          accumulator_set<double, stats<tag::sum > > acc1;
          accumulator_set<double, stats<tag::sum >, float> acc2;
          // accumulator_set<double, stats<tag::median >, float> acc2;

          // push in some data ...
          acc1(0.1); acc1(0.2); acc1(0.3); acc1(0.4); acc1(0.5); acc1(0.6);

          acc2(0.1, weight=0.); acc2(0.2, weight=0.); acc2(0.3, weight=0.);
          acc2(0.4, weight=1.); acc2(0.5, weight=1.); acc2(0.6, weight=1.);

          // Display the results ...
          std::cout << " Median: " << sum(acc1) << std::endl;
          std::cout << "Weighted Median: " << sum(acc2) << std::endl;
          return 0;
      }

    and the result is

      Sum: 2.1
      Weighted Sum: 1.5
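
    For reference, here is what the weighted median of that data should be, computed directly from the definition (a plain-Python sketch of the math, nothing to do with the Boost API): sort the values, accumulate the weights, and return the first value at which the running weight reaches half the total.

      def weighted_median(values, weights):
          """Smallest value whose cumulative weight reaches half the total weight."""
          pairs = sorted(zip(values, weights))
          total = sum(weights)
          running = 0.0
          for v, w in pairs:
              running += w
              if running >= total / 2.0:
                  return v

      print(weighted_median([0.1, 0.2, 0.3, 0.4, 0.5, 0.6],
                            [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]))  # -> 0.5

    With the first three weights at zero, the weighted median is 0.5, which is why the 0.3 printed above looks wrong: the accumulator is being asked for the unweighted median.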

    Read the article

  • can this problem be solved with a single SQL query?

    - by PierrOz
    I have the two following tables (with some sample data):

      LOGS:
      ID | SETID | DATE
      ========================
      1  | 1     | 2010-02-25
      2  | 2     | 2010-02-25
      3  | 1     | 2010-02-26
      4  | 2     | 2010-02-26
      5  | 1     | 2010-02-27
      6  | 2     | 2010-02-27
      7  | 1     | 2010-02-28
      8  | 2     | 2010-02-28
      9  | 1     | 2010-03-01

      STATS:
      ID | OBJECTID | FREQUENCY | STARTID | ENDID
      =============================================
      1  | 1        | 0.5       | 1       | 5
      2  | 2        | 0.6       | 1       | 5
      3  | 3        | 0.02      | 1       | 5
      4  | 4        | 0.6       | 2       | 6
      5  | 5        | 0.6       | 2       | 6
      6  | 6        | 0.4       | 2       | 6
      7  | 1        | 0.35      | 3       | 7
      8  | 2        | 0.6       | 3       | 7
      9  | 3        | 0.03      | 3       | 7
      10 | 4        | 0.6       | 4       | 8
      11 | 5        | 0.6       | 4       | 8
      7  | 1        | 0.45      | 5       | 9
      8  | 2        | 0.6       | 5       | 9
      9  | 3        | 0.02      | 5       | 9

    Every day new logs are analyzed for different sets of objects and stored in the LOGS table. Among other processes, some statistics are computed on the objects contained in these sets, and the results are stored in the STATS table. These statistics are computed across several logs (identified by the STARTID and ENDID columns). So, what SQL query would give me the latest computed stats for all objects, with the corresponding log dates? In the given example, the result rows would be:

      OBJECTID | SETID | FREQUENCY | STARTDATE  | ENDDATE
      ======================================================
      1        | 1     | 0.45      | 2010-02-27 | 2010-03-01
      2        | 1     | 0.6       | 2010-02-27 | 2010-03-01
      3        | 1     | 0.02      | 2010-02-27 | 2010-03-01
      4        | 2     | 0.6       | 2010-02-26 | 2010-02-28
      5        | 2     | 0.6       | 2010-02-26 | 2010-02-28

    So, the most recent stats for set 1 are computed with logs from Feb 27 to March 1, whereas the stats for set 2 are computed from Feb 26 to Feb 28. Object 6 is not in the result rows, as there is no stat on it within the last period of time. Last thing: I use MySQL. Any idea?

    Read the article

  • SQL Server Scripts I Use

    - by Bill Graziano
    When I get to a new client I usually find myself using the same set of scripts for maintenance and troubleshooting. These are all drop-in solutions for various maintenance issues.

    - Reindexing. I use Michelle Ufford's (SQLFool) re-indexing script. I like that it has a throttle and only re-indexes when needed. She also has a variety of other interesting scripts on her blog too.
    - Server Activity. Adam Machanic is up to version 10 of sp_WhoIsActive. It's a great replacement for the sp_who* stored procedures and does so much more. If a server is acting funny this is one of the first tools I use.
    - Backups. Tara Kizer has a great little T-SQL script for SQL Server backups.
    - Wait Stats. Paul Randal has a great script to display wait stats. The biggest benefit for me is that his script filters out at least three dozen wait stats that I just don't care about (for example LAZYWRITER_SLEEP).
    - Update Statistics. I didn't find anything I liked, so I wrote a simple script to update stats myself. The big need for me was that it had to run inside a time window and update the oldest statistics first. Is there a better one?
    - Diagnostic Queries. Glenn Berry has a huge collection of DMV queries available. He also just highlighted five of them, including two I really like dealing with unused indexes and suggested indexes.
    - Single Use Query Plans. Kim Tripp has a script that counts the number of single-use query plans. This should guide you in whether to enable the Optimize for Adhoc Workloads option in SQL Server 2008.
    - Granting Permissions to Developers. This is one of those scripts I didn't even know I needed until I needed it. Kendra Little wrote it to grant a login read-only permission to all the databases. It also grants view server state and a few other handy permissions.

    What else do you use? What should I add to my list?

    Read the article

  • Copy website content from WebsiteA.com/file.html to WebsiteB.com/file.html every time interval

    - by Jimbo Mombasa
    I want to copy a page from http://stats.pingdom.com/file... to http://mywebsite.com/file every 10 minutes. Then, with purple-include, I want to do a transclusion and display it on http://mywebsite.com/page.html. So the task is: download http://stats.pingdom.com/file to http://mywebsite.com/file. I figured out the transclusion part, but I do not know how to copy a webpage from A to B. Is there any script for this, or how can I do this?
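
    A minimal sketch of the download step (the URL and local path below are placeholders for whatever the real ones are): fetch the page, write it into the web root, and let cron run the script every 10 minutes.

      import urllib.request

      SRC = "http://stats.pingdom.com/file"   # page to mirror (placeholder)
      DST = "/var/www/mywebsite/file"         # local path served as http://mywebsite.com/file (placeholder)

      def mirror():
          # Download the remote page and overwrite the local copy.
          with urllib.request.urlopen(SRC, timeout=30) as response:
              data = response.read()
          with open(DST, "wb") as f:
              f.write(data)

      if __name__ == "__main__":
          mirror()

    Scheduling is then a single crontab line, e.g. */10 * * * * /usr/bin/python3 /path/to/mirror.py; a wget or curl one-liner in cron would do the same job.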

    Read the article

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to set up HAProxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAProxy servers are running Ubuntu 12.04, and the HAProxy version is 1.4.18. When I start HAProxy I get the following error:

      Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms.

    I am not really sure what the issue could be. I have verified the following:

    - EC2 security group ports are open
    - Pored over my config files looking for issues; I currently do not see any
    - Ensured that xinetd was installed
    - Ensured that I am using the correct IP address of the MySQL server

    Any help with this is greatly appreciated. Here are my current configs.

    Load balancer /etc/haproxy/haproxy.cfg:

      global
          log 127.0.0.1 local0
          log 127.0.0.1 local1 notice
          maxconn 4096
          user haproxy
          group haproxy
          debug
          #quiet
          daemon

      defaults
          log global
          mode http
          option tcplog
          option dontlognull
          retries 3
          option redispatch
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000

      frontend pxc-front
          bind 0.0.0.0:3307
          mode tcp
          default_backend pxc-back

      frontend stats-front
          bind 0.0.0.0:22002
          mode http
          default_backend stats-back

      backend pxc-back
          mode tcp
          balance leastconn
          option httpchk
          server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3

      backend stats-back
          mode http
          balance roundrobin
          stats uri /haproxy/stats

    MySQL server /etc/xinetd.d/mysqlchk:

      # default: on
      # description: mysqlchk
      service mysqlchk
      {
          # this is a config for xinetd, place it in /etc/xinetd.d/
          disable         = no
          flags           = REUSE
          socket_type     = stream
          port            = 9200
          wait            = no
          user            = nobody
          server          = /usr/bin/clustercheck
          log_on_failure  += USERID
          #only_from      = 0.0.0.0/0
          # recommended to put the IPs that need
          # to connect exclusively (security purposes)
          per_source      = UNLIMITED
      }

    MySQL server /etc/services - added the line:

      mysqlchk        9200/tcp        # mysqlchk

    MySQL server /usr/bin/clustercheck:

      #!/bin/bash
      #
      # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly
      #
      # Author: Olaf van Zandwijk <[email protected]>
      # Documentation and download: https://github.com/olafz/percona-clustercheck
      #
      # Based on the original script from Unai Rodriguez
      #

      MYSQL_USERNAME="testuser"
      MYSQL_PASSWORD=""
      ERR_FILE="/dev/null"
      AVAILABLE_WHEN_DONOR=0

      #
      # Perform the query to check the wsrep_local_state
      #
      WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}`

      if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]]
      then
          # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200
          /bin/echo -en "HTTP/1.1 200 OK\r\n"
          /bin/echo -en "Content-Type: text/plain\r\n"
          /bin/echo -en "\r\n"
          /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n"
          /bin/echo -en "\r\n"
      else
          # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503
          /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n"
          /bin/echo -en "Content-Type: text/plain\r\n"
          /bin/echo -en "\r\n"
          /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n"
          /bin/echo -en "\r\n"
      fi

    Read the article

  • Using PHP version 5.2 or 5.3 for commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to the public. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, but it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% are on versions below 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Using PHP version 5.2 or 5.3 for end-user commercial products?

    - by Ash
    I'm doing research on what version of PHP to use when creating commercial scripts that will be sold to end users. Although the available stats aren't great, PHP 5.3 shows an 18.5% adoption rate. I'd like to use Symfony to create these scripts, but it requires 5.3.2, which shows an even lower adoption rate (roughly 13% of that 18.5% are on versions below 5.3.2). Would I be risking much by jumping straight to PHP 5.3.2+, or should I ignore the stats and plough ahead?

    Read the article

  • Google Analytics - include filter not working

    - by gerl
    I just added an include filter this morning for my domain (test.org). I have:

      Custom Filter
      Include
      Request URI
      ^/test-a/46212$|^/test-a/46212|^/test-a/46315

    Now, after I go to Content > Site Content > All Pages, I see stats for pages that I didn't include in my filter. For example, I see /somethingelse. I only want to see stats for /test-a/46212 and whatever else is in my filter. Please let me know what I'm doing wrong.
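
    Independently of the Analytics side, the pattern itself can be sanity-checked outside GA. Note that the first alternative (^/test-a/46212$) is already covered by the second, and the unanchored tail means any URI starting with /test-a/46212 matches. A quick sketch with Python's re module and some made-up request paths:

      import re

      FILTER = r"^/test-a/46212$|^/test-a/46212|^/test-a/46315"

      # Hypothetical request URIs to probe the filter with.
      for uri in ["/test-a/46212", "/test-a/46212/extra", "/test-a/46315", "/somethingelse"]:
          print(uri, "->", bool(re.search(FILTER, uri)))
      # /test-a/46212       -> True
      # /test-a/46212/extra -> True
      # /test-a/46315       -> True
      # /somethingelse      -> False

    Since /somethingelse fails the pattern, its presence suggests the filter is not being applied to those hits; keep in mind that profile filters only affect data collected after the filter was added, so pages recorded before this morning will still appear in the reports.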

    Read the article

  • Haproxy and CNAME

    - by user123354
    I want to create a simple load balancer for two servers. The problem is with CNAME records, I think. Let's say I have two copies of the same application on AppFog.com: app1.aws.af.cm and app2.aws.af.cm. Here is my haproxy.cfg file:

      global
          maxconn 2000
          daemon

      defaults
          mode http
          clitimeout 60000
          srvtimeout 30000
          contimeout 4000
          option httpclose

      listen http_proxy
          bind [myip]:80
          mode http
          stats enable
          stats auth user:passwd
          stats uri /stats
          balance source
          option httpchk
          option forwardfor
          server host01 app1.aws.af.cm:80 maxconn 300 check
          server host02 app2.aws.af.cm:80 maxconn 300 check

    But this only resolves the IPs for app1.aws.af.cm and app2.aws.af.cm, which obviously doesn't work if I open those IPs in a web browser. The problem is that AppFog doesn't have a public IP per application (same as OpenShift). How do I get HAProxy to make a proper connection between the load balancer and these two servers?

    Example: this is a real app - http://freechat.eu01.aws.af.cm. HAProxy only resolves the IP for this domain, which is 46.51.204.8:80. Of course this IP will not show my application, only an error page. Sorry for my poor English.

    Read the article

  • Correct MySQL username/password, but getting Access Denied error when run from script

    - by Nick
    I'm currently trying to run the following command from within a shell script:

      /usr/bin/mysql -u username -ppassword -h localhost database

    It works perfectly fine when executed manually, but not from within a script. When I try to execute a script that contains that command, I get the following error:

      ERROR 1045 (28000) at line 3: Access denied for user 'username'@'localhost' (using password: YES)

    I literally copied and pasted the working command into the script. Why the error? As a side note: the ultimate intent is to run the script with cron.

    EDIT: Here is a stripped-down version of the script that I'm trying to run. You can ignore most of it, up until the point where it connects to MySQL around line 19.

      #!/bin/sh

      #Run download script to download product data
      cd /home/dir/Scripts/Linux
      /bin/sh script1.sh

      #Run import script to import product data to MySQL
      cd /home/dir/Mysql
      /bin/sh script2.sh

      #Download inventory stats spreadsheet and rename it
      cd /home/dir
      /usr/bin/wget http://www.url.com/file1.txt
      mv file1.txt sheet1.csv

      #Remove existing export spreadsheet
      rm /tmp/sheet2.csv

      #Run MySQL queries in "here document" format
      /usr/bin/mysql -u username -ppassword -h localhost database << EOF
      --Drop old inventory stats table
      truncate table table_name1;
      --Load new inventory stats into table
      Load data local infile '/home/dir/sheet1.csv' into table table_name1
      fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
      --MySQL queries to combine product data and inventory stats here
      --Export combined data in spreadsheet format
      group by p.value
      into outfile '/tmp/sheet2.csv'
      fields terminated by ',' optionally enclosed by '"' lines terminated by '\r\n';
      EOF

    EDIT 2: After some more testing, the issue is with the << EOF at the end of the command, which is there for the "here document". When it is removed, the command works fine. The problem is that I need << EOF there so that the MySQL queries will run.
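
    For what it's worth, a heredoc is just a way of feeding text to the command's stdin, so the same job can be driven from a small script in another language if the shell keeps misbehaving under cron. A sketch in Python (same credentials and queries as above; this is a workaround, not a diagnosis of the 1045 error):

      import subprocess

      # The SQL that the heredoc would have fed to mysql on stdin.
      SQL = """
      truncate table table_name1;
      load data local infile '/home/dir/sheet1.csv' into table table_name1
      fields terminated by ',' optionally enclosed by '"' lines terminated by '\\r\\n';
      """

      # Equivalent of: mysql -u username -ppassword -h localhost database << EOF ... EOF
      subprocess.run(
          ["/usr/bin/mysql", "-u", "username", "-ppassword", "-h", "localhost", "database"],
          input=SQL,
          text=True,
          check=True,
      )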

    Read the article

  • sys.dm_exec_query_stats interaction with recompilation

    - by Sam Saffron
    We use sys.dm_exec_query_stats to track down slow queries and queries that are IO offenders. This works great; we get a lot of very insightful stats. It is clearly not as accurate as running a profiler trace, since you have no idea when SQL Server will decide to chuck out an execution plan. We have quite a few queries where the wrong execution plan is cached. For example, queries like the following:

      SELECT TOP 30 a.Id
      FROM Posts a
      JOIN Posts q ON q.Id = a.ParentId
      JOIN PostTags pt ON q.Id = pt.PostId
      WHERE a.PostTypeId = 2
        AND a.DeletionDate IS NULL
        AND a.CommunityOwnedDate IS NULL
        AND a.CreationDate > @date
        AND LEN(a.Body) > 300
        AND pt.Tag = @tag
        AND a.Score > 0
      ORDER BY a.Score DESC

    The problem is that the ideal plan really depends on the date selected (screenshot of the ideal plan). However, if the wrong plan is cached, it totally chokes when the date range is big (notice the big fat lines). To overcome this we were recommended to use either OPTION (OPTIMIZE FOR UNKNOWN) or OPTION (RECOMPILE). OPTIMIZE FOR UNKNOWN results in a slightly better plan, which is far from optimal, and executions are tracked in sys.dm_exec_query_stats. RECOMPILE results in the best plan being chosen, however no execution counts and stats are tracked in sys.dm_exec_query_stats.

    - Is there another DMV we could use to track stats on queries with OPTION (RECOMPILE)?
    - Is this behavior by design?
    - Is there another way we can force recompilation while keeping stats tracked in sys.dm_exec_query_stats?

    Note: the framework will always execute parameterized queries using sp_executesql.

    Read the article

  • HAProxy, health checking multiple servers with different host names

    - by Marco Bettiolo
    I need to load-balance between multiple running servers with different host names, and I cannot set up the same virtual host on each one. Is it possible to have only one listen configuration with multiple servers and make the health checks apply the http-send-name-header Host directive? I am using HAProxy 1.5.

    I came up with this working haproxy.cfg. As you can see, I had to set a different hostname for each health check, because the health check ignores http-send-name-header Host. I would have preferred to use variables or some other method and keep things more concise.

      global
          log 127.0.0.1 local0 notice
          maxconn 2000
          user haproxy
          group haproxy

      defaults
          log global
          mode http
          option httplog
          option dontlognull
          retries 3
          option redispatch
          timeout connect 5000
          timeout client 10000
          timeout server 10000
          stats enable
          stats uri /haproxy?stats
          stats refresh 5s
          balance roundrobin
          option httpclose

      listen inbound :80
          option httpchk HEAD / HTTP/1.1\r\n
          server instance1 127.0.0.101 check inter 3000 fall 1 rise 1
          server instance2 127.0.0.102 check inter 3000 fall 1 rise 1

      listen instance1 127.0.0.101:80
          option forwardfor
          http-send-name-header Host
          option httpchk HEAD / HTTP/1.1\r\nHost:\ www.example.com
          server www.example.com www.example.com:80 check inter 5000 fall 3 rise 2

      listen instance2 127.0.0.102:80
          option forwardfor
          http-send-name-header Host
          option httpchk HEAD / HTTP/1.1\r\nHost:\ www.bing.com
          server www.bing.com www.bing.com:80 check inter 5000 fall 3 rise 2

    Read the article

  • Pass string between two threads in java

    - by geeta
    I have to search for a string in a file and write the matching lines to another file. I have a thread to read the file and a thread to write to a file. I want to send the StringBuffer from the read thread to the write thread. Please help me pass it; I am getting a null value passed.

    Write thread:

      class OutputThread extends Thread {
          /****************** Writes the line with search string to the output file *************/
          Thread runner1, runner;
          File Out_File;

          public OutputThread() {
          }

          public OutputThread(Thread runner, File Out_File) {
              runner1 = new Thread(this, "writeThread"); // (1) Create a new thread.
              this.Out_File = Out_File;
              this.runner = runner;
              runner1.start(); // (2) Start the thread.
          }

          public void run() {
              try {
                  BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(Out_File, true));
                  System.out.println("inside write");
                  synchronized (runner) {
                      System.out.println("inside wait");
                      runner.wait();
                  }
                  System.out.println("outside wait");
                  // bufferedWriter.write(line.toString());
                  Buffer Buf = new Buffer();
                  bufferedWriter.write(Buf.buffers);
                  System.out.println(Buf.buffers);
                  bufferedWriter.flush();
              } catch (Exception e) {
                  System.out.println(e);
                  e.printStackTrace();
              }
          }
      }

    Read thread:

      class FileThread extends Thread {
          Thread runner;
          File dir;
          String search_string, stats;
          File Out_File, final_output;
          StringBuffer sb = new StringBuffer();

          public FileThread() {
          }

          public FileThread(CountDownLatch latch, String threadName, File dir, String search_string,
                            File Out_File, File final_output, String stats) {
              runner = new Thread(this, threadName); // (1) Create a new thread.
              this.dir = dir;
              this.search_string = search_string;
              this.Out_File = Out_File;
              this.stats = stats;
              this.final_output = final_output;
              this.latch = latch;
              runner.start(); // (2) Start the thread.
          }

          public void run() {
              try {
                  Enumeration entries;
                  ZipFile zipFile;
                  String source_file_name = dir.toString();
                  File Source_file = dir;
                  String extension;
                  OutputThread out = new OutputThread(runner, Out_File);
                  int dotPos = source_file_name.lastIndexOf(".");
                  extension = source_file_name.substring(dotPos + 1);
                  if (extension.equals("zip")) {
                      zipFile = new ZipFile(source_file_name);
                      entries = zipFile.entries();
                      while (entries.hasMoreElements()) {
                          ZipEntry entry = (ZipEntry) entries.nextElement();
                          if (entry.isDirectory()) {
                              (new File(entry.getName())).mkdir();
                              continue;
                          }
                          searchString(runner, entry.getName(),
                                  new BufferedInputStream(zipFile.getInputStream(entry)),
                                  Out_File, final_output, search_string, stats);
                      }
                      zipFile.close();
                  } else {
                      searchString(runner, Source_file.toString(),
                              new BufferedInputStream(new FileInputStream(Source_file)),
                              Out_File, final_output, search_string, stats);
                  }
              } catch (Exception e) {
                  System.out.println(e);
                  e.printStackTrace();
              }
          }

          /********* Reads the Input Files and Searches for the String ******************************/
          public void searchString(Thread runner, String Source_File, BufferedInputStream in,
                                   File output_file, File final_output, String search, String stats) {
              int count = 0;
              int countw = 0;
              int countl = 0;
              String s;
              String[] str;
              String newLine = System.getProperty("line.separator");
              try {
                  BufferedReader br2 = new BufferedReader(new InputStreamReader(in));
                  // OutputFile outfile = new OutputFile();
                  BufferedWriter bufferedWriter = new BufferedWriter(new FileWriter(output_file, true));
                  Buffer Buf = new Buffer();
                  // StringBuffer sb = new StringBuffer();
                  StringBuffer sb1 = new StringBuffer();
                  while ((s = br2.readLine()) != null) {
                      str = s.split(search);
                      count = str.length - 1;
                      countw += count;
                      if (s.contains(search)) {
                          countl++;
                          sb.append(s);
                          sb.append(newLine);
                      }
                      if (countl % 100 == 0) {
                          System.out.println("inside count");
                          Buf.setBuffers(sb.toString());
                          sb.delete(0, sb.length());
                          System.out.println("outside notify");
                          synchronized (runner) {
                              runner.notify();
                          }
                          // outfile.WriteFile(sb, bufferedWriter);
                          // sb.delete(0, sb.length());
                      }
                  }
                  synchronized (runner) {
                      runner.notify();
                  }
                  br2.close();
                  in.close();
                  if (countw == 0) {
                      System.out.println("Input File : " + Source_File);
                      System.out.println("Word not found");
                      System.exit(0);
                  } else {
                      System.out.println("Input File : " + Source_File);
                      System.out.println("Matched word count : " + countw);
                      System.out.println("Lines with Search String : " + countl);
                      System.out.println("Output File : " + output_file.toString());
                      System.out.println();
                  }
              } catch (Exception e) {
                  System.out.println(e);
                  e.printStackTrace();
              }
          }
      }

    Read the article

  • Solr performance (tomcat) - High load

    - by Ward Loockx
    I'm relatively new to Solr. I have a production site running on a VPS, but now I'm having serious load issues, and I don't know where to start in order to get the load down.

    VPS specs (linode.com 512): 512 MB RAM, 4 CPU (1x priority). It looks like my Solr server (Tomcat) is using a lot of CPU power. You can find my solrconfig.xml at http://pastebin.com/qdfi8Med and my schema.xml at http://pastebin.com/rRusDP8b. I've tried to increase the cache size, but this didn't do anything to the load. You can see the stats page below.

    EDIT - Because the screenshot was unclear, I took smaller screenshots of what (I think) is important: the dismax query handler stats and the cache stats.

    Thanks for the help!

    Read the article

  • choose server backend to some URL with haproxy

    - by shingara
    For some URLs I don't want to use one of the servers; those requests should go to the other one. Currently I have this HAProxy configuration:

      global
          daemon
          log 127.0.0.1 local0
          #log loghost local0 info
          maxconn 4096
          #debug
          #quiet
          user haproxy
          group haproxy

      defaults
          log global
          mode http
          option httplog
          option dontlognull
          retries 3
          option redispatch
          maxconn 2000
          contimeout 5000
          clitimeout 50000
          srvtimeout 50000
          balance roundrobin
          stats enable
          stats refresh 5s
          stats auth admin:123abc789xyz

      # Set up application listeners here.
      listen application 0.0.0.0:10000
          server localhost 127.0.0.1:10100 weight 1 maxconn 5 check
          server externe 127.0.0.1:10101 weight 1 maxconn 5 check

    For example, I want all URLs under /users to be served only by the server localhost, not by externe.

    Read the article

  • New line (in the code) after <li> element breaking layout

    - by BT643
    Weirdly, I've never come across this issue before, but I've just started making a site and the top navigation isn't playing nicely. I want a small amount of white space between each menu item, but when I have new lines between my <li> elements and my <a> elements in my IDE (NetBeans), the white space disappears; yet it looks fine if I have <li><a></a></li> all on the same line. I was always under the impression HTML ignored white space in the code. I've checked for weird characters causing problems in other text editors and can't find anything. Here's the code.

    Like this, the menu looks correct but the code looks ugly (I know it looks fine when it's this simple, but I'm going to be adding more complexity, which makes it look awful all on one line):

      <ul id="menu">
          <li><a href="#">About</a></li>
          <li class="active"><a href="<?php echo site_url("tracklist"); ?>">Track List</a></li>
          <li><a href="<?php echo site_url("stats"); ?>">Stats</a></li>
          <li><a href="#">Stats</a></li>
      </ul>

    Like this, the menu looks wrong but the code looks fine:

      <ul id="menu">
          <li>
              <a href="#">About</a>
          </li>
          <li class="active">
              <a href="<?php echo site_url("tracklist"); ?>">Track List</a>
          </li>
          <li>
              <a href="<?php echo site_url("stats"); ?>">Stats</a>
          </li>
          <li>
              <a href="#">Stats</a>
          </li>
      </ul>

    Here's the CSS:

      #menu {
          float: right;
      }

      #menu li {
          display: inline-block;
          padding: 5px;
          background-color: #932996;
          border-bottom: solid 1px #932996;
      }

      #menu li:hover {
          border-bottom: solid 3px #FF0000;
      }

      #menu li.active {
          background-color: #58065e;
      }

    I'm sure it's something simple I'm doing wrong, but can someone shed some light on this for me? Sorry for the lengthy post (my first on Stack Overflow).

    Read the article

  • How do I classify using GLCM and SVM Classifier in Matlab?

    - by Gomathi
    I'm working on a project for liver tumor segmentation and classification. I used region growing and FCM for liver and tumor segmentation, respectively. Then I used the gray-level co-occurrence matrix (GLCM) for texture feature extraction. I have to use a Support Vector Machine for classification, but I don't know how to normalize the feature vectors. Can anyone tell me how to program it in MATLAB?

    For the GLCM program, I gave the tumor-segmented image as input. Was I correct? If so, I think my output will also be correct. My GLCM code, as far as I have tried, is:

      I = imread('fzliver3.jpg');
      GLCM = graycomatrix(I,'Offset',[2 0;0 2]);
      stats = graycoprops(GLCM,'all')
      t1 = struct2array(stats)

      I2 = imread('fzliver4.jpg');
      GLCM2 = graycomatrix(I2,'Offset',[2 0;0 2]);
      stats2 = graycoprops(GLCM2,'all')
      t2 = struct2array(stats2)

      I3 = imread('fzliver5.jpg');
      GLCM3 = graycomatrix(I3,'Offset',[2 0;0 2]);
      stats3 = graycoprops(GLCM3,'all')
      t3 = struct2array(stats3)

      t = [t1; t2; t3]
      xmin = min(t);
      xmax = max(t);
      scale = xmax - xmin;
      tf = (x - xmin) / scale

    Was this a correct implementation? Also, I get an error at the last line. My output is:

      stats =
          Contrast: [0.0510 0.0503]
          Correlation: [0.9513 0.9519]
          Energy: [0.8988 0.8988]
          Homogeneity: [0.9930 0.9935]

      t1 =
        Columns 1 through 6
          0.0510 0.0503 0.9513 0.9519 0.8988 0.8988
        Columns 7 through 8
          0.9930 0.9935

      stats2 =
          Contrast: [0.0345 0.0339]
          Correlation: [0.8223 0.8255]
          Energy: [0.9616 0.9617]
          Homogeneity: [0.9957 0.9957]

      t2 =
        Columns 1 through 6
          0.0345 0.0339 0.8223 0.8255 0.9616 0.9617
        Columns 7 through 8
          0.9957 0.9957

      stats3 =
          Contrast: [0.0230 0.0246]
          Correlation: [0.7450 0.7270]
          Energy: [0.9815 0.9813]
          Homogeneity: [0.9971 0.9970]

      t3 =
        Columns 1 through 6
          0.0230 0.0246 0.7450 0.7270 0.9815 0.9813
        Columns 7 through 8
          0.9971 0.9970

      t =
        Columns 1 through 6
          0.0510 0.0503 0.9513 0.9519 0.8988 0.8988
          0.0345 0.0339 0.8223 0.8255 0.9616 0.9617
          0.0230 0.0246 0.7450 0.7270 0.9815 0.9813
        Columns 7 through 8
          0.9930 0.9935
          0.9957 0.9957
          0.9971 0.9970

      ??? Error using ==> minus
      Matrix dimensions must agree.

    The images are: [image attachments omitted from this listing]
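
    For reference, here is the same column-wise min-max scaling sketched in NumPy (a sketch of the math, not the poster's MATLAB; the values are the ones printed above). Note it normalizes the feature matrix t itself, whereas the last MATLAB line subtracts xmin from a separate variable x whose shape evidently doesn't match; in MATLAB releases without implicit expansion, even (t - xmin) would additionally need repmat or bsxfun to subtract the row vector from every row.

      import numpy as np

      # Stand-in for the 3x8 GLCM feature matrix t (rows = images, cols = features).
      t = np.array([
          [0.0510, 0.0503, 0.9513, 0.9519, 0.8988, 0.8988, 0.9930, 0.9935],
          [0.0345, 0.0339, 0.8223, 0.8255, 0.9616, 0.9617, 0.9957, 0.9957],
          [0.0230, 0.0246, 0.7450, 0.7270, 0.9815, 0.9813, 0.9971, 0.9970],
      ])

      xmin = t.min(axis=0)        # per-feature minimum, like MATLAB's min(t)
      xmax = t.max(axis=0)        # per-feature maximum
      scale = xmax - xmin
      tf = (t - xmin) / scale     # broadcasting scales every row; each column now spans [0, 1]
      print(tf)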

    Read the article

  • When was sys.dm_os_wait_stats last cleared?

    - by SQLOS Team
    The sys.dm_os_wait_stats DMV provides essential metrics for diagnosing SQL Server performance problems. Returning incrementally accumulating information about all the completed waits encountered by executing threads, it is a useful way to identify bottlenecks such as IO latency issues or waits on locks. The counters are reset each time SQL Server is restarted, or when the following command is run:

      DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR);

    To make sense out of these wait values you need to know how they change over time. Suppose you are asked to troubleshoot a system and you don't know when the wait stats were last zeroed. Is there any way to find the elapsed time since this happened?

    If the wait stats were not cleared using the DBCC SQLPERF command, then you can simply correlate the stats with the time SQL Server was started, using the sqlserver_start_time column introduced in SQL Server 2008 R2:

      SELECT sqlserver_start_time from sys.dm_os_sys_info

    However, how do you tell if someone has run DBCC SQLPERF ('sys.dm_os_wait_stats', CLEAR) since the server was started, and if they did, when? Without this information the initial, or historical, wait stats have less value until you can measure deltas over time.

    There is a way to at least estimate when the stats were last cleared, by using the wait stats themselves and choosing a thread that spends most of its time sleeping. A good candidate is the SQL Trace incremental flush task, which mostly sleeps (in 4-second intervals); in between it attempts to flush (if there are new events - which is rare when only the default trace is running) - so it pretty much sleeps all the time. Hence the time it has spent waiting is very close to the elapsed time since the counter was reset. Credit goes to Ivan Penkov in the SQLOS dev team for suggesting this.

    Here's an example. 144 seconds after the server was started:

      select top 10 wait_type, wait_time_ms
      from sys.dm_os_wait_stats
      order by wait_time_ms desc

      wait_type                            wait_time_ms
      -------------------------------------------------
      XE_DISPATCHER_WAIT                   242273
      LAZYWRITER_SLEEP                     146010
      LOGMGR_QUEUE                         145412
      DIRTY_PAGE_POLL                      145411
      XE_TIMER_EVENT                       145216
      REQUEST_FOR_DEADLOCK_SEARCH          145194
      SQLTRACE_INCREMENTAL_FLUSH_SLEEP     144325
      SLEEP_TASK                           73359
      BROKER_TO_FLUSH                      73113
      PREEMPTIVE_OS_AUTHENTICATIONOPS      143

      (10 rows affected)

    Reset:

      DBCC SQLPERF('sys.dm_os_wait_stats', CLEAR)

      DBCC execution completed. If DBCC printed error messages, contact your system administrator.

    After 8 seconds:

      select top 10 wait_type, wait_time_ms
      from sys.dm_os_wait_stats
      order by wait_time_ms desc

      wait_type                            wait_time_ms
      -------------------------------------------------
      REQUEST_FOR_DEADLOCK_SEARCH          10013
      LAZYWRITER_SLEEP                     8124
      SQLTRACE_INCREMENTAL_FLUSH_SLEEP     8017
      LOGMGR_QUEUE                         7579
      DIRTY_PAGE_POLL                      7532
      XE_TIMER_EVENT                       5007
      BROKER_TO_FLUSH                      4118
      SLEEP_TASK                           3089
      PREEMPTIVE_OS_AUTHENTICATIONOPS      28
      SOS_SCHEDULER_YIELD                  27

      (10 rows affected)

    After 12 seconds:

      select top 10 wait_type, wait_time_ms
      from sys.dm_os_wait_stats
      order by wait_time_ms desc

      wait_type                            wait_time_ms
      -------------------------------------------------
      REQUEST_FOR_DEADLOCK_SEARCH          15020
      LAZYWRITER_SLEEP                     14206
      LOGMGR_QUEUE                         14036
      DIRTY_PAGE_POLL                      13973
      SQLTRACE_INCREMENTAL_FLUSH_SLEEP     12026
      XE_TIMER_EVENT                       10014
      SLEEP_TASK                           7207
      BROKER_TO_FLUSH                      7207
      PREEMPTIVE_OS_AUTHENTICATIONOPS      57
      SOS_SCHEDULER_YIELD                  28

      (10 rows affected)

    It may not be accurate to the millisecond, but it can provide a useful data point, and give an indication whether the wait stats were manually cleared after startup, and if so, approximately when.

    - Guy

    Originally posted at http://blogs.msdn.com/b/sqlosteam/

    Read the article
