Search Results

Search found 17762 results on 711 pages for 'sr query'.

  • HAProxy + Percona XtraDB Cluster

    - by rottmanj
    I am attempting to setup HAproxy in conjunction with Percona XtraDB Cluster on a series of 3 EC2 instances. I have found a few tutorials online dealing with this specific issue, but I am a bit stuck. Both the Percona servers and the HAproxy servers are running ubuntu 12.04. The HAProxy version is 1.4.18, When I start HAProxy I get the following error: Server pxc-back/db01 is DOWN, reason: Socket error, check duration: 2ms. I am not really sure what the issue could be. I have verified the following: EC2 security groups ports are open Poured over my config files looking for issues. I currently do not see any. Ensured that xinetd was installed Ensured that I am using the correct ip address of the mysql server. Any help with this is greatly appreciated. Here are my current config Load Balancer /etc/haproxy/haproxy.cfg global log 127.0.0.1 local0 log 127.0.0.1 local1 notice maxconn 4096 user haproxy group haproxy debug #quiet daemon defaults log global mode http option tcplog option dontlognull retries 3 option redispatch maxconn 2000 contimeout 5000 clitimeout 50000 srvtimeout 50000 frontend pxc-front bind 0.0.0.0:3307 mode tcp default_backend pxc-back frontend stats-front bind 0.0.0.0:22002 mode http default_backend stats-back backend pxc-back mode tcp balance leastconn option httpchk server db01 10.86.154.105:3306 check port 9200 inter 12000 rise 3 fall 3 backend stats-back mode http balance roundrobin stats uri /haproxy/stats MySql Server /etc/xinetd.d/mysqlchk # default: on # description: mysqlchk service mysqlchk { # this is a config for xinetd, place it in /etc/xinetd.d/ disable = no flags = REUSE socket_type = stream port = 9200 wait = no user = nobody server = /usr/bin/clustercheck log_on_failure += USERID #only_from = 0.0.0.0/0 # recommended to put the IPs that need # to connect exclusively (security purposes) per_source = UNLIMITED } MySql Server /etc/services Added the line mysqlchk 9200/tcp # mysqlchk MySql Server /usr/bin/clustercheck # GNU nano 2.2.6 File: /usr/bin/clustercheck #!/bin/bash # # Script to make a proxy (ie HAProxy) capable of monitoring Percona XtraDB Cluster nodes properly # # Author: Olaf van Zandwijk <[email protected]> # Documentation and download: https://github.com/olafz/percona-clustercheck # # Based on the original script from Unai Rodriguez # MYSQL_USERNAME="testuser" MYSQL_PASSWORD="" ERR_FILE="/dev/null" AVAILABLE_WHEN_DONOR=0 # # Perform the query to check the wsrep_local_state # WSREP_STATUS=`mysql --user=${MYSQL_USERNAME} --password=${MYSQL_PASSWORD} -e "SHOW STATUS LIKE 'wsrep_local_state';" 2>${ERR_FILE} | awk '{if (NR!=1){print $2}}' 2>${ERR_FILE}` if [[ "${WSREP_STATUS}" == "4" ]] || [[ "${WSREP_STATUS}" == "2" && ${AVAILABLE_WHEN_DONOR} == 1 ]] then # Percona XtraDB Cluster node local state is 'Synced' => return HTTP 200 /bin/echo -en "HTTP/1.1 200 OK\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is synced.\r\n" /bin/echo -en "\r\n" else # Percona XtraDB Cluster node local state is not 'Synced' => return HTTP 503 /bin/echo -en "HTTP/1.1 503 Service Unavailable\r\n" /bin/echo -en "Content-Type: text/plain\r\n" /bin/echo -en "\r\n" /bin/echo -en "Percona XtraDB Cluster Node is not synced.\r\n" /bin/echo -en "\r\n" fi
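
    A socket error on the health check usually means HAProxy never got a valid HTTP response back from port 9200, not that MySQL itself is down. A rough way to narrow it down, using the IP and port from the config above (a sketch, not a definitive fix):

        # On the Percona node: run the check script by hand - it should print an HTTP 200/503 response.
        # If it errors instead, the clustercheck MySQL user/grant is the likely culprit.
        /usr/bin/clustercheck

        # Still on the Percona node: confirm xinetd picked up the mysqlchk service and is listening
        service xinetd restart && netstat -ltnp | grep 9200

        # From the HAProxy box: hit the check port the same way "option httpchk" does
        curl -v http://10.86.154.105:9200/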

    Read the article

  • OpenWRT based gateway with dnsmasq and internal server with bind

    - by Peter
    I have router based on OpenWRT which has dnsmasq 2.59. Inside my local area network I have a NS server bind. This server has internal and external views for a couple of my domains. My router forwards port 53 TCP and UDP from outside IP (router WAN) to this server. For the external clients everything works fine. In order to organize the internal view, I decided to add the exception to /etc/dnsmasq.conf server=/mydomain1.com/192.168.1.1 server=/mydomain2.com/192.168.1.1 server=/mydomain3.com/192.168.1.1 (192.168.1.1 - IP address of the NS server) According to dnsmasq manstrong text: More specific domains take precendence over less specific domains, so: --server=/google.com/1.2.3.4 --server=/www.google.com/2.3.4.5 will send queries for *.google.com to 1.2.3.4, except *www.google.com, which will go to 2.3.4.5 this domain name with all the sub-domains is supposed to be forward to my NS server. Everything works (SOA, NS, MX, CNAME, TXT, SRV etc.) except for A-record: # nslookup -type=a mydomain1.com Server: 192.168.1.100 Address: 192.168.1.100#53 *** Can't find mydomain1.com: No answer 192.168.1.100 - IP address of my router (dnsmasq) However, I can get the answer for the TXT-record query: # nslookup -type=txt mydomain1.com Server: 192.168.1.100 Address: 192.168.1.100#53 mydomain1.com text = "v=spf1 include:mydomain1.com -all" When I just specify the local IP of my NS server (direct access to the server without using dnsmasq) then the results are: # nslookup -type=a mydomain1.com 192.168.1.1 Server: 192.168.1.1 Address: 192.168.1.1#53 Name: mydomain1.com Address: 192.168.1.1 There is a similar situation with the MX-record: C:\>nslookup -type=mx mydomain1.com Server: router.lan Address: 192.168.1.100 mydomain1.com MX preference = 10, mail exchanger = mail.mydomain1.com mydomain1.com nameserver = ns.mydomain1.com mail.mydomain1.com internet address = 192.168.1.1 ns.mydomain1.com internet address = 192.168.1.1 C:\>nslookup -type=a mail.mydomain1.com Server: router.lan Address: 192.168.1.100 *** No address (A) records available for mail.mydomain1.com This is a dig result: # dig +nocmd mydomain1.com any +multiline +noall +answer mydomain1.com. 86400 IN SOA ns.mydomain1.com. hostmaster.mydomain1.com. ( 121204007 ; serial 28800 ; refresh (8 hours) 7200 ; retry (2 hours) 604800 ; expire (1 week) 3600 ; minimum (1 hour) ) mydomain1.com. 86400 IN NS ns.mydomain1.com. mydomain1.com. 86400 IN A 192.168.1.1 mydomain1.com. 604800 IN MX 10 mail.mydomain1.com. mydomain1.com. 3600 IN TXT "v=spf1 include:mydomain1.com -all" When I try to ping: # ping mydomain1.com ping: cannot resolve mydomain1.com: Unknown host Is it a bug of dnsmasq 2.59? How to manage this problem?
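
    Before blaming dnsmasq 2.59 itself, it is worth seeing whether the A query is forwarded to BIND at all. A quick comparison, assuming the addresses from the post (a sketch):

        # Ask the same question through dnsmasq on the router and straight at the internal BIND
        dig @192.168.1.100 mydomain1.com A +short
        dig @192.168.1.1   mydomain1.com A +short

        # Add "log-queries" to /etc/dnsmasq.conf, restart dnsmasq, then watch where the
        # A query actually goes (OpenWRT writes dnsmasq output to the system log)
        logread | grep dnsmasq | tail -n 20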

    Read the article

  • grep simply fails when used on a few files

    - by Reid
    I've been trying for about the past 30 minutes to get this to work properly. grep is not exactly the most difficult thing to use, so I'm somewhat baffled as to why this won't work. The files I'm trying to use grep on are simple XHTML log files. Their names are in the format [email protected], though I don't think that should matter, and inside is simple XHTML. I copied one such log file to be testfile so you can see the output of some commands and why it's baffling to me: [~/.chatlogs_windows/dec] > whoami reid [~/.chatlogs_windows/dec] > type grep grep is /bin/grep [~/.chatlogs_windows/dec] > uname -a Linux reid-pc 2.6.35-22-generic #33-Ubuntu SMP Sun Sep 19 20:32:27 UTC 2010 x86_64 GNU/Linux [~/.chatlogs_windows/dec] > head -1 /etc/issue Linux Mint 10 Julia [~/.chatlogs_windows/dec] > ls -Alh | grep testfile -rw-r--r-- 1 reid reid 63K 2011-01-10 12:45 testfile [~/.chatlogs_windows/dec] > tail -3 testfile </body> </html> [~/.chatlogs_windows/dec] > file testfile testfile: XML document text [~/.chatlogs_windows/dec] > grep html testfile [~/.chatlogs_windows/dec] > grep body testfile [~/.chatlogs_windows/dec] > grep "</html>" testfile [~/.chatlogs_windows/dec] > grep "</body>" testfile [~/.chatlogs_windows/dec] > cat testfile | grep html [~/.chatlogs_windows/dec] > cat testfile | wc -l 231 [~/.chatlogs_windows/dec] > cat testfile | tail -3 </body> </html> [~/.chatlogs_windows/dec] > chmod a+rw testfile && ls -Alh | grep testfile -rw-rw-rw- 1 reid reid 63K 2011-01-10 12:45 testfile [~/.chatlogs_windows/dec] > grep html testfile That's what I'm attempting to do. I want to just use grep -ri query . in ~/.chatlogs_windows, which normally works perfectly for me... but for some reason, it completely fails at going through these files. If it matters, I copied these files off of my Windows 7 partition. But I chown'd them and gave myself all the appropriate permissions, and other programs (like cat) seem to read them just fine. I also copied testfile to testfile_unix and converted the line endings and tried that, but it didn't work either. I'm using zsh, but I tried it on bash and that failed too. Also, grep works normally: I tried it out on my documents folder and it worked flawlessly. If you need any more information, just let me know. I tried googling around, but I found no reason for grep to simply not work. Thanks in advance.
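
    Since the logs came straight off a Windows 7 partition, one thing worth ruling out is a UTF-16 encoding: grep would then be searching for the literal bytes "html" in text where every character is followed by a NUL, and silently match nothing. A quick check (a sketch):

        # UTF-16 text starts with a byte-order mark (ff fe / fe ff) and has NULs between characters
        head -c 32 testfile | od -c

        # file -i reports the character set it detected
        file -i testfile

        # If it is UTF-16, grepping the converted stream should find the matches
        iconv -f UTF-16 -t UTF-8 testfile | grep -c html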

    Read the article

  • IIS/ASP.NET performance incident - Perfmon Current Anonymous Users going through roof but Requests/sec low

    - by Laurence
    Setup: ASP.NET 4.0 website on IIS 6.0 on Win 2003 64 bit, 8xCPUs, 16GB memory, separate SQL 2005 DB server. Had a serious slowdown today with any otherwise fairly well performing ASP.NET site. For a period of a couple of hours all page requests were taking a very long time to be served - e.g. 30-60s compared to usual 2s. The w3wp.exe's CPU and memory usage on the webserver was not much higher than normal. The application pool was not in the middle of recycling (and it hadn't recycled for several hours). Bottlenecks in the database were ruled out - no blocks occurring and query results were being returned quickly. I couldn't make any sense of it and set up the following Perfmon counters: Current Anonymous Users (for site in question) Get requests/sec (ditto) Requests/sec for the ASP.NET application running the site Get requests/sec was averaging 100-150. Requests/sec for ASP.NET was averaging 5-10. However Current Anonymous Users was around 200. And then as I was watching, the Current Anonymous Users began to climb steeply going up to about 500 within a few minutes. All this time Get requests/sec & Requests/sec for ASP.NET was if anything going down. I did a whole load of things (in a panic!) to try to get the site working, like shutting it down, recycling the app pool, and adding another worker process to the pool. I also extended the expiration time for content (in IIS under HTTP Headers) in an attempt to lower the number of requests for static files (there are a lot of images on the site). The site is now back to normal, and the counters are fairly steady and reading (added Current Connections counter): Current Anonymous Users : average 30 Get requests/sec : average 100 Requests/sec for ASP.NET : 5 Current Connections : average 300 I have also observed an inverse relationship between Get requests/sec & Current Anonymous Users. Usually both are fairly steady but there will be short periods when Get requests/sec will go down dramatically and Current Anonymous Users will go up in a perfect mirror image. Then they will flip back to their usual levels. So, my questions are: Thinking of the original performance issue - if w3wp.exe CPU, memory usage were normal and there was no DB bottleneck, what could explain page requests taking 20 times longer to be served than usual? What other counters should I be looking at if this happens again? What explains the inverse relationship between Get requests/sec & Current Anonymous Users? What could explain Current Anonymous Users going from 200 to 500 within a few minutes? Many thanks for any insight into this.
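
    One way to read the inverse relationship: the counters are not contradictory, they are Little's law. Concurrency is roughly throughput multiplied by time per request, so when pages go from ~2 s to 30-60 s, even a modest request rate keeps hundreds of users connected while completed requests/sec falls. A back-of-the-envelope check against the numbers above (a sketch):

        # concurrency ~= completed requests per second x seconds per request
        echo "normal:   30 users  ~= 100 req/s x 0.3 s"
        echo "incident: $(( 10 * 50 )) users ~= 10 ASP.NET req/s x ~50 s per page"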

    Read the article

  • DNS settings for resolving Host name to IP not working?

    - by Hasas Ali Khan
    I want to access my IIS hosted application over LAN. First I installed a DNS server. The DNS configuration steps are: Go to DNS Manager - right click on System Name - click on configure a DNS Server. DNS Server wizard open -, click on next button - Select radio button "forward lookup zone" click on next button. In the second window. click on radio button "The server maintains the zone" and then click next. Give the zone name "example.com" Click on radio button, "Do Not allow dynamic updates". and then click next button. In the next window, click on radio button "No it should not forward query" and then click next button. Complete the configure a DNS server wizard and then click on finish button. After it is managing the DNS records: In DNS server wizard. open tree of forward lookup zone and right click on the new zone name "example.com" - properties and click on "Start of authority" and write values on text boxes serial number=1 primary server=systemname.domainname responsible person=hostmaster.domainname Click on server name, highlight domain name, click on edit button and enter IP address of the server where I host my application. Highlight new zone name and right click on it and click "New Host" option. In this window there are three text boxes: Name(user parent name if blank)=scoring Fully Qualified Domain Name=scoring.example.com IP Address= My IP Address and check on "Create associated pointer(PTR) record" and click on "Add Host" Host button and then click done button. I have host header for my application is "scoring" on port 80 and its working fine on server my application setting are I have change its, Advance setting --> Application Pool Identity --> Local System application can access on server with host name "scoring" but it can not access on machines on LAN. When I change LAN machine host file that is under, C:/windows/system32/driver/etc/host and edit it and enter host name with hosted machine IP like this: scoring 192.168.1.20 By making these changes I can run the application over LAN machines as I mentioned above DNS setting by which I can run App over LAN with out editing the client's host file. What mistake am I doing in this configuration?
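
    A useful first test is to ask the new DNS server directly from a LAN client; that separates "the record is missing" from "the clients are not using this DNS server". A sketch (replace 192.168.1.x with the address of the machine running DNS):

        # Query the new DNS server explicitly from a LAN client
        nslookup scoring.example.com 192.168.1.x

        # If this returns 192.168.1.20 while a plain "nslookup scoring.example.com" does not,
        # the zone is fine - point the clients' DNS (or the DHCP scope's DNS option) at this
        # server instead of editing each client's hosts file.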

    Read the article

  • DJBDNS DNSCache configuration, svscan won't start

    - by SecurityGate
    I've been wracking my brain the last few days trying to setup DJBDNS on my server. I haven't been having too much luck. I have been following the guide provided by the creator of DJBDNS: http://cr.yp.to/djbdns/run-server.html Here is a run-through of where I am: Both services are up: [root@Happycat tinydns]$ svstat /service/tinydns/ /service/tinydns/: up (pid 18224) 74454 seconds [root@Happycat tinydns]$ svstat /service/dnscache/ /service/dnscache/: up (pid 2733) 2184 seconds My /etc/resolv.conf file: nameserver 127.0.0.1 My $PATH: [root@Happycat ~]$ echo $PATH /usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/sbin:/usr/sbin:/var/qmail/bin/:/usr/nexkit/bin:/root/bin My tinydns/root/data records: ..:69.160.56.65:a:259200 .ns1.benwilk.com:69.160.56.65:a:259200 .ns2.benwilk.com:69.160.56.65:a:259200 .56.160.69.in-addr.arpa:69.160.56.65:a:259200 .56.160.69.in-addr.arpa:69.160.56.65:b:259200 =benwilk.com:69.160.56.65:86400 =openbarrel.net:69.160.56.65:86400 +www.openbarrel.net:69.160.56.65:86400 +www.benwilk.com:69.160.56.65:86400 Tiny dns can recognize the records set: [root@Happycat root]$ tinydns-get a benwilk.com 1 benwilk.com: 78 bytes, 1+1+1+1 records, response, authoritative, noerror query: 1 benwilk.com answer: benwilk.com 86400 A 69.160.56.65 authority: . 259200 NS a.ns additional: a.ns 259200 A 69.160.56.65 But then it comes to a grinding halt: svscan /service/tinydns/ supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to acquire log/supervise/lock: temporary failure supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist supervise: fatal: unable to acquire log/supervise/lock: temporary failure supervise: fatal: unable to start env/run: file does not exist supervise: fatal: unable to start supervise/run: file does not exist supervise: fatal: unable to start root/run: file does not exist I'm assuming I have to set something with DNScache, and to be honest, it gets a bit confusing. I'm not sure whether to set it's IP address to 127.0.0.1 or one of the other IP addresses on the system. What am I missing from here?
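
    One likely reading of those errors: svscan expects the directory that contains the service directories, not a single service. Run against /service/tinydns it tries to supervise env/, root/, log/ and supervise/ as if each were a service, which is exactly the "unable to start env/run: file does not exist" spam - and since svstat already shows both services up, a second svscan is probably not needed at all. A sketch:

        # Correct level: supervises /service/tinydns and /service/dnscache
        svscan /service &

        # Wrong level - produces the env/run, root/run, supervise/run errors above
        # svscan /service/tinydns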

    Read the article

  • How do I enable InnoDB on Ubuntu Server 10.04

    - by Matt
    Here is my entire my.cnf [client] port = 3306 socket = /var/run/mysqld/mysqld.sock # Here is entries for some specific programs # The following values assume you have at least 32M ram # This was formally known as [safe_mysqld]. Both versions are currently parsed. [mysqld_safe] socket = /var/run/mysqld/mysqld.sock nice = 0 [mysqld] key_buffer = 224M sort_buffer_size = 4M read_buffer_size = 4M read_rnd_buffer_size = 4M myisam_sort_buffer_size = 12M query_cache_size = 44M # # * Basic Settings # # # * IMPORTANT # If you make changes to these settings and your system uses apparmor, you may # also need to also adjust /etc/apparmor.d/usr.sbin.mysqld. # user = mysql socket = /var/run/mysqld/mysqld.sock port = 3306 basedir = /usr datadir = /var/lib/mysql tmpdir = /tmp skip-external-locking # # Instead of skip-networking the default is now to listen only on # localhost which is more compatible and is not less secure. bind-address = 127.0.0.1 # # * Fine Tuning # #key_buffer = 16M max_allowed_packet = 16M thread_stack = 192K thread_cache_size = 8 # This replaces the startup script and checks MyISAM tables if needed # the first time they are touched myisam-recover = BACKUP #max_connections = 100 #table_cache = 64 #thread_concurrency = 10 # # * Query Cache Configuration # query_cache_limit = 1M #query_cache_size = 16M # # * Logging and Replication # # Both location gets rotated by the cronjob. # Be aware that this log type is a performance killer. # As of 5.1 you can enable the log at runtime! #general_log_file = /var/log/mysql/mysql.log #general_log = 1 log_error = /var/log/mysql/error.log # Here you can see queries with especially long duration #log_slow_queries = /var/log/mysql/mysql-slow.log #long_query_time = 2 #log-queries-not-using-indexes # # The following can be used as easy to replay backup logs or for replication. # note: if you are setting up a replication slave, see README.Debian about # other settings you may need to change. #server-id = 1 #log_bin = /var/log/mysql/mysql-bin.log expire_logs_days = 10 max_binlog_size = 100M #binlog_do_db = include_database_name #binlog_ignore_db = include_database_name # # * InnoDB # # InnoDB is enabled by default with a 10MB datafile in /var/lib/mysql/. # Read the manual for more InnoDB related options. There are many! # # * Security Features # # Read the manual, too, if you want chroot! # chroot = /var/lib/mysql/ # # For generating SSL certificates I recommend the OpenSSL GUI "tinyca". # # ssl-ca=/etc/mysql/cacert.pem # ssl-cert=/etc/mysql/server-cert.pem # ssl-key=/etc/mysql/server-key.pem [mysqldump] quick quote-names max_allowed_packet = 16M [mysql] #no-auto-rehash # faster start of mysql but no tab completition [isamchk] key_buffer = 16M # # * IMPORTANT: Additional settings that can override those from this file! # The files must end with '.cnf', otherwise they'll be ignored. 
# !includedir /etc/mysql/conf.d/ And here is my show engines call....i have no idea what i need to do to enable innodb show engines; +------------+---------+----------------------------------------------------------------+--------------+------+------------+ | Engine | Support | Comment | Transactions | XA | Savepoints | +------------+---------+----------------------------------------------------------------+--------------+------+------------+ | MyISAM | DEFAULT | Default engine as of MySQL 3.23 with great performance | NO | NO | NO | | MRG_MYISAM | YES | Collection of identical MyISAM tables | NO | NO | NO | | BLACKHOLE | YES | /dev/null storage engine (anything you write to it disappears) | NO | NO | NO | | CSV | YES | CSV storage engine | NO | NO | NO | | MEMORY | YES | Hash based, stored in memory, useful for temporary tables | NO | NO | NO | | FEDERATED | NO | Federated MySQL storage engine | NULL | NULL | NULL | | ARCHIVE | YES | Archive storage engine | NO | NO | NO | +------------+---------+----------------------------------------------------------------+--------------+------+------------+ 7 rows in set (0.00 sec)
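
    When InnoDB is missing from SHOW ENGINES entirely it usually failed at startup rather than being compiled out - often a stray skip-innodb pulled in via the includes, or an innodb_log_file_size that no longer matches the existing ib_logfile0/ib_logfile1. The MySQL error log says which. A sketch, assuming the stock Ubuntu paths:

        # Why did InnoDB refuse to start?
        grep -i innodb /var/log/mysql/error.log | tail -n 20

        # Anything disabling it, including files pulled in by !includedir?
        grep -ri skip-innodb /etc/mysql/

        # If the log complains the existing ib_logfile0 has the wrong size, stop MySQL,
        # move the old log files aside so they are recreated at innodb_log_file_size,
        # then start it again (reasonable here only because no InnoDB tables exist yet):
        #   service mysql stop && mv /var/lib/mysql/ib_logfile* /tmp/ && service mysql start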

    Read the article

  • Hanging page loads every n loads - SOLVED

    - by Christian
    Hi guys, I recently moved my site to a new server (Apache 2, PHP 5, MySQL 5). The site is an Invision-based forum. Every few posts/topics it just hangs. The data has been written, because if you stop and reload, the post/thread is there. I thought it was a write issue initially, but no: the data is written, yet the page load never completes and it doesn't leave the page where the data was entered. What's the best way to troubleshoot this issue? The only thing I have done recently is reduce my MySQL timeouts, but I can't see that being an issue as the values are still big enough and there are no mentions of timeouts in the MySQL log. (For the record, there is nothing in PHP's error log either.) Thanks in advance! EDIT: I checked my server-status. It all looked OK, but I have a suspicion I was hitting my ServerLimit, so I doubled that. Also enabled my KeepAlives. Will keep an eye on it. EDIT 2: It's now been a few days and this is still occurring. I have more info, though: Apache is throwing seg faults, but enabling core dumps does not produce them. I have tried disabling the modules in Apache but it just stops things from working. I fear it may actually be DNS related. If I watch Live HTTP Headers in Firefox, absolutely nothing happens during this 'hanging' period; after that, the responses come back fairly promptly. UPDATE (05/04): I built the latest versions of Apache and PHP from source, no luck. I then removed those and used the remi repo to update all my packages to the latest stable. Segfaults seem to have stopped, but the hanging is continuing. Configs are at www.skylinesaustralia.com/php.ini, www.skylinesaustralia.com/my.cnf and www.skylinesaustralia.com/httpd.conf. UPDATE - SOLVED! The issue was a gigantic query cache size in MySQL. It was 2GB; changing it to 64M sorted it. Thanks for all the help everybody, much appreciated!!
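
    For anyone hitting the same symptom: an oversized query cache stalls like this because every write invalidates entries under a single global mutex, so a 2 GB cache turns busy INSERTs into serialized pauses. A quick way to inspect and shrink it without a restart (a sketch):

        # How big is it and is it actually helping?
        mysql -e "SHOW VARIABLES LIKE 'query_cache_size'; SHOW GLOBAL STATUS LIKE 'Qcache%';"

        # Shrink it to 64M on the fly, then make the change permanent in my.cnf
        mysql -e "SET GLOBAL query_cache_size = 67108864;"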

    Read the article

  • Innodb Queries Slow

    - by user105196
    I have redHat 5.3 (Tikanga) with Mysql 5.0.86 configued with RIAD 10 HW, I run an application inquiries from Mysql/InnoDB and MyIsam tables, the queries are super fast,but some quires on Innodb tables sometime slow down and took more than 1-3 seconds to run and these queries are simple and optimized, this problem occurred just on innodb tables in different time with random queries. Why is this happening only to Innodb tables? the below is the Innodb status and some Mysql variables: show innodb status\G ************* 1. row ************* Status: 120325 10:54:08 INNODB MONITOR OUTPUT Per second averages calculated from the last 19 seconds SEMAPHORES OS WAIT ARRAY INFO: reservation count 22943, signal count 22947 Mutex spin waits 0, rounds 561745, OS waits 7664 RW-shared spins 24427, OS waits 12201; RW-excl spins 1461, OS waits 1277 TRANSACTIONS Trx id counter 0 119069326 Purge done for trx's n:o < 0 119069326 undo n:o < 0 0 History list length 41 Total number of lock structs in row lock hash table 0 LIST OF TRANSACTIONS FOR EACH SESSION: ---TRANSACTION 0 0, not started, process no 29093, OS thread id 1166043456 MySQL thread id 703985, query id 5807220 localhost root show innodb status FILE I/O I/O thread 0 state: waiting for i/o request (insert buffer thread) I/O thread 1 state: waiting for i/o request (log thread) I/O thread 2 state: waiting for i/o request (read thread) I/O thread 3 state: waiting for i/o request (write thread) Pending normal aio reads: 0, aio writes: 0, ibuf aio reads: 0, log i/o's: 0, sync i/o's: 0 Pending flushes (fsync) log: 0; buffer pool: 0 132777 OS file reads, 689086 OS file writes, 252010 OS fsyncs 0.00 reads/s, 0 avg bytes/read, 0.00 writes/s, 0.00 fsyncs/s INSERT BUFFER AND ADAPTIVE HASH INDEX Ibuf: size 1, free list len 366, seg size 368, 62237 inserts, 62237 merged recs, 52881 merges Hash table size 8850487, used cells 3698960, node heap has 7061 buffer(s) 0.00 hash searches/s, 0.00 non-hash searches/s LOG Log sequence number 15 3415398745 Log flushed up to 15 3415398745 Last checkpoint at 15 3415398745 0 pending log writes, 0 pending chkp writes 218214 log i/o's done, 0.00 log i/o's/second BUFFER POOL AND MEMORY Total memory allocated 4798817080; in additional pool allocated 12342784 Buffer pool size 262144 Free buffers 101603 Database pages 153480 Modified db pages 0 Pending reads 0 Pending writes: LRU 0, flush list 0, single page 0 Pages read 151954, created 1526, written 494505 0.00 reads/s, 0.00 creates/s, 0.00 writes/s No buffer pool page gets since the last printout ROW OPERATIONS 0 queries inside InnoDB, 0 queries in queue 1 read views open inside InnoDB Main thread process no. 29093, id 1162049856, state: waiting for server activity Number of rows inserted 77675, updated 85439, deleted 0, read 14377072495 0.00 inserts/s, 0.00 updates/s, 0.00 deletes/s, 0.00 reads/s END OF INNODB MONITOR OUTPUT 1 row in set, 1 warning (0.02 sec) read_buffer_size = 128M sort_buffer_size = 256M tmp_table_size = 1024M innodb_additional_mem_pool_size = 20M innodb_log_file_size=10M innodb_lock_wait_timeout=100 innodb_buffer_pool_size=4G join_buffer_size = 128M key_buffer_size = 1G can any one help me ?
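
    Intermittent 1-3 s pauses on otherwise-fast InnoDB queries are more often waits on log flushing or dirty-page flushing than bad plans, so it helps to catch a stall in the act. A sketch of what to watch (MySQL 5.0, so the slow log has to be enabled in my.cnf and needs a restart):

        # Watch disk latency in one terminal while the stalls happen
        iostat -x 1

        # 1 = fsync the log on every commit; the usual suspect when commits stall on busy disks
        mysql -e "SHOW VARIABLES LIKE 'innodb_flush_log_at_trx_commit';"

        # Capture the offenders: add to [mysqld] in my.cnf and restart
        #   log-slow-queries = /var/log/mysql-slow.log
        #   long_query_time  = 1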

    Read the article

  • Domain: Netlogon event sequence

    - by Bob
    I'm getting really confused, reading tutorials from SAMBA howto, which is hell of a mess. Could you write step-by-step, what events happen upon NetLogon? Or in particular, I can't get these things: I really can't get the mechanism of action of LDAP and its role. Should I think of Active Directory LDS as of its superset? What're the other roles of AD and why this term is nearly a synonym of term "domain"? What's the role of LDAP in the remote login sequence? Does it store roaming user profiles? Does it store anything else? How it is called (are there any upper-level or lower-level services that use it in the course of NetLogon)? How do I join a domain. On the client machine I just use the Domain Controller admin credentials, but how do I prepare the Domain Controller for a new machine to join it. What's that deal of Machine trust accounts? How it is used? Suppose, I've just configured a machine to join a domain, created its machine trust, added its data to the domain controller. How would that machine find WINS server to query it for Domain Controller NetBIOS name? Does any computer name, ending with <1C type, correspond to domain controller? In what cases Kerberos and LM/NTLM are used for authentication? Where are password hashes stored in, say, Windows2000 domain controller? Right in the registry? What is SAM - is it a service, responsible for authentication and sending/storing those passwords and accompanying information, such as groups policies etc.? Who calls it? Does it use Active Directory? What's the role of NetBIOS except by name service? Can you exemplify a scenario of its usage as a "datagram distribution service for connectionless communication" or "session service for connection-oriented communication"? (quoted taken from http://en.wikipedia.org/wiki/NetBIOS_Frames_protocol description of NetBIOS roles) Thanks and sorry for many questions.

    Read the article

  • DPM 2010 PowerShell Script to Easily Restore Multiple Files

    - by bmccleary
    I’ve got what I thought would be a simple task with Data Protection Manager 2010 that is turning out to be quite frustrating. I have a file server on one server and it is the only server in a protection group. This file server is the repository for a document management application which stores the files according to the data within a SQL database. Sometimes users inadvertently delete files from within our application and we need to restore them. We have all the information needed to restore the files to include the file name, the folder that the file was stored in and the exact date that the file was deleted. It is easy for me to restore the file from within the DPM console since we have a recovery point created every day, I simply go to the day before the delete, browse to the proper folder and restore the file. The problem is that using the DPM console, the cumbersome wizard requires about 20 mouse clicks to restore a single file and it takes 2-4 minutes to get through all the windows. This becomes very irritating when a client needs 100’s of files restored… it takes all day of redundant mouse clicks to restore the files. Therefore, I want to use a PowerShell script (and I’m a novice at PowerShell) to automate this process. I want to be able to create a script that I pass in a file name, a folder, a recovery point date (and a protection group/server name if needed) and simply have the file restored back to its original location with some sort of success/failure notification. I thought it was a simple basic task of a backup solution, but I am having a heck of a time finding the right code. I have seen the sample code at http://social.technet.microsoft.com/wiki/contents/articles/how-to-use-a-windows-powershell-script-to-recover-an-item-in-data-protection-manager.aspx that I have tried to follow, but it doesn’t accomplish what I really want to do (it’s too simplistic) and there are errors in the sample code. Therefore, I would like to get some help writing a script to restore these files. An example of the known values to restore the data are: DPM Server: BACKUP01 Protection Group: Document Repository Data Protected Server: FILER01 File Path: R:\DocumentRepository\ToBackup\ClientName\Repository\2010\07\24\filename.pdf Date Deleted: 8/2/2010 (last recovery point = 8/1/2010) Bonus Points: If you can help me not only create this script, but also show me how to automate by providing a text file with the above information that the PowerShell script loops through, or even better, is able to query our SQL server for the needed data, then I would be more than willing to pay for this development.

    Read the article

  • Spotlight Infinite Indexing issue (external data drive)

    - by Manca Weeks
    This is an external drive, formerly a boot drive which is now in use only to access music files (sibelius, audio, midi, live, logic etc.) without transferring the data into a new boot system, partly because of the issue I am about to describe, but mostly because the majority of the data is mainly there for archival purposes. The user is a composer and prominent musician and needs to be able to rehash the data at will. I have tried several things - here is a list: - make complete filesystem clone with antonio diaz's ddrescue - run Disk Warrior on copy, repair whatever errors occurred - wipe out all ACLs on entire drive - set all permissions to the same value - wide open 777 - remove any system data (applications, system files, including hidden files to the best of my knowledge) by selecting only non-system/app data and using Carbon Copy Cloner to put only the data of interest onto a newly formatted drive - transfer data to newly formatted drive folder by folder, resetting the spotlight index in between adding each to observe for issues (interesting here is that no issues occurred except for in Documents folder - when I transferred only the Documents folder to a newly formatted drive on its own - no trouble. It appears almost as thought it may not be the content but the quantity or specific combination of data that results in problems) - use DataRescue to transfer the data to yet another newly formatted drive to expose any missed hidden files Between each of the above steps I stopped Spotlight (search for anything beginning with md in Activity Monitor - All Processes and quitting it), deleted the .Spotlight-V100 directory from the affected drive. Restart Splotlight indexing by adding drive to Spotlight privacy list and removing it. In each case the same issue occurs - Spotlight begins indexing normally (or so it seems), then the index estimated time increases, usually to 4 hours remaining. This is where it gets stuck and continues to predict 4 hours remaining but never finishes. Sometimes I can't eject the drive and have to quit the md.. processes from Activity Monitor to be able to eject the drive without Force Eject. Once I disconnect the drive after the 4 hours remaining situation - if I reattach it, Spotlight forever estimates remaining time and never gets going again. So there it is. It is apparently not a filesystem issue, not a permissions issue and not tied to any particular piece of hardware or protocol (used USB and FW drives). I have tried this on several machines (3 to be precise) and in 10.5.8 and 10.6.5. Simply disabling Spotlight on this volume is not an option because the owner has no clue where things are as the data on the volume dates back to music projects and compositions from 2003 and before. He needs to be able to query for results. Anyone got any ideas? Thanks, M

    Read the article

  • Biztalk 2009 install help

    - by _pemex99_
    Hi, I try to install Biztalk2009, with SQL 2008R2CTPNov, on Win Server 2008. I'm blocked at the configuration step "groups" : [19:22:18 Info Configuration Framework]Configuring feature: WMI [19:22:18 Info BtsCfg] Entering function: CBtsCfg::ConfigureFeature [19:22:18 Info BtsCfg] Configuring feature: WMI [19:22:18 Info BtsCfg] Entering function: CBtsCfg::IsSelectedAnswer [19:22:18 Info BtsCfg] Leaving function: CBtsCfg::IsSelectedAnswer [19:22:18 Info BtsCfg] Entering function: CWMI::Connect [19:22:18 Info BtsCfg] WMI is already connected [19:22:18 Info BtsCfg] Leaving function: CWMI::Connect [19:22:18 Info ConfigHelper] NT group BizTalk Server Operators was not created because it already exists [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà. [19:22:18 Info ConfigHelper] NT group BizTalk Server Administrators was not created because it already exists [19:22:18 Info ConfigHelper NetAPI Info: ] Le groupe local spécifié existe déjà. [19:22:18 Info BtsCfg] Entering function: CWMI::CreateGroup 2010-01-14 19:22:18:0527 [INFO] WMI CWMIInstProv::PutInstance() try to acquire lock 2010-01-14 19:22:18:0539 [INFO] WMI CWMIInstProv::PutInstance() lock acquired successfully 2010-01-14 19:22:18:0546 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) started 2010-01-14 19:22:18:0553 [INFO] WMI CWMIInstProv::VerifyMgmtDbCompatibility(CInstance) finished successfully 2010-01-14 19:22:18:0564 [INFO] WMI CWMIInstProv::PutInstance(MSBTS_GroupSetting.MgmtDbName="BizTalkMgmtDb",MgmtDbServerName="ECTXEVLBZTK") started 2010-01-14 19:22:18:0572 [INFO] WMI CAdapter::ConvertWMI2Admin() started 2010-01-14 19:22:18:0581 [INFO] WMI CDataContainer::SetWCHAR() - Possible problem: item value is overwritten 2010-01-14 19:22:18:0591 [INFO] WMI CAdapter::ConvertWMI2Admin() finished with HR=0 2010-01-14 19:22:18:0611 [INFO] WMI QueryStringValue query regkey 'MgmtDBServer' 2010-01-14 19:22:18:0620 [INFO] WMI CAdmCoreGroupInst::TryCreateNewGroup() started 2010-01-14 19:22:18:0632 [INFO] WMI Creating Mgmt database... 2010-01-14 19:22:18:0641 [INFO] WMI Calling CDataSource.Open() against ECTXEVLBZTK\master 2010-01-14 19:22:18:0792 [INFO] WMI CDataSource.Open() returned 2010-01-14 19:22:18:0810 [WARN] AdminLib GetBTSMessage: hrErr=80040e1d; Msg=Error "0x80040E1D" occurred.; 2010-01-14 19:22:18:0824 [WARN] AdminLib GetBTSMessage: hrErr=c0c02524; Msg=Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred.; 2010-01-14 19:22:18:0835 [ERR] WMI Failed in pAdmInst->Create() in CWMIInstProv::PutInstance(). HR=c0c02524 2010-01-14 19:22:18:0846 [ERR] WMI WMI error description is generated: Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred. 2010-01-14 19:22:18:0860 [INFO] WMI CWMIInstProv::PutInstance() finished. HR=c0c02524 [19:22:18 Error BtsCfg] f:\bt\890\private\source\setup\prod\btssetup\btscfg\btswmi.cpp(358): FAILED hr = c0c02524 [19:22:18 Error BtsCfg] Failed to create Management database "BizTalkMgmtDb" on server "ECTXEVLBZTK". Error "0x80040E1D" occurred. It seems that the install can't create Managment database, But the SSO database is created OK... Has someone a clue ?

    Read the article

  • Network Services disabled (not starting) on Windows XP

    - by Rickesh John
    I am currently running Windows XP Service Pack 3 on my system. But today, when I failed to connect to the internet, via a LAN cable, I realized that almost all of the vital network services had stopped functioning. Any attempts to start it through services.msc gives me the following message: Could not start the DNS Client Service on Local Computer Error 1068: The dependency service group failed to start All my software or services that are related to networking have stopped functioning, for example, Windows Firewall is turned off permanently, so is my Avast Anti-Virus' service of Real Time Shields and Web Shield. When I insert the LAN wire into my laptop, it registers itself, but this is what I get when I do a ping localhost C:>ping localhost Unable to contact IP driver, error code 2 Moveover, with ipconfig I get this : Windows IP Configuration An internal error occurred: The request is not supported. Please contact Microsoft Product Support Services for further help. Additional Information: Unable to query host name On some further poking around, I saw that none of the "NETWORK SERVICE" process in task manager, except svchost.exe were running. Also, when I first opened the task manager, I saw some 20 processes running with username column empty for most of them. With some search in Google, I found out that these services were important, DHCP DNS Net logon Network connection Network location Awareness TCP/IP Net BIOS Helper none of them, except Network Connections are working, they do not start. The event viewer of my system shows a bunch of 7000 and 7001 event errors. I have tried re installing the network driver, booting in safe mode with networking and tried to enable those services mentioned above. I had disabled System Restore some time back, so I have no restore points for my system. I tried a lot of things from Google searches but none of them worked. Also, with such a long list of issue, I am a little confused as to what should I search on the internet. :( One more thing I would like to mention, previous morning, my anti-virus Avast detected a RootKit buried deep in my system folders. It was removed, but maybe this was a problem caused by the root kit. I did run a boot-time scan but no viruses were found. Please please please advice. Is formatting and re-installation of Windows my only option?

    Read the article

  • How to configure sendmail to relay through a specific server

    - by ErebusBat
    I have a tiny home server setup behind my cable modem (bresnan communications). I want to be able for this box to send out email (not receive) for notifications and whatnot. What I have already done: I have installed and configured sendmail. I have added mail.bresnan.net as my SMART_HOST directive. What I belive the problem is When I attempt to send an email I get the following in my mail log: Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: from=aburns, size=140, class=0, nrcpts=1, msgid=<[email protected]>, relay=aburns@localhost Dec 22 10:24:17 batcave sm-mta[1531]: oBMHOHWZ001531: from=<[email protected]>, size=397, class=0, nrcpts=1, msgid=<[email protected]>, proto=ESMTP, daemon=MTA-v4, relay=localhost [127.0.0.1] Dec 22 10:24:17 batcave sendmail[1530]: oBMHOHrs001530: to=<[email protected]>, ctladdr=aburns (1000/1000), delay=00:00:00, xdelay=00:00:00, mailer=relay, pri=30140, relay=[127.0.0.1] [127.0.0.1], dsn=2.0.0, stat=Sent (oBMHOHWZ001531 Message accepted for delivery) Dec 22 10:24:18 batcave sm-mta[1517]: oBMH9mVv001357: to=<[email protected]>, ctladdr=<[email protected]> (1000/1000), delay=00:14:30, xdelay=00:00:42, mailer=relay, pri=300339, relay=pmx0.bresnan.net. [69.145.248.1], dsn=4.0.0, stat=Deferred: Connection timed out with pmx0.bresnan.net. You can see where the message is accepted for delivery by my sendmail server, then where it attempts to hand off to bresnan's server and it timesout. This is where my question is. Asstute readers will notice that pmx0.bresnan.net is not what I have my SMART_HOST directive set as. This is the (outside?) MX server for the bresnan.com/net domain. Apparently bresnan has their network configured so that you can not access this server from within their own network and instead must use the mail.bresnan.net server (which I can connect to). The problem is that I don't know how to tell sendmail to use this server and not the domain. What I have tried Setting a hosts entry so that the pmx0 server points to the mail IP address. This doesn't work, which makes sense as sendmail is obviously doing an MX query to find the server which returns the IP so there is never a need to do a 'normal' DNS resolve so the hosts file never gets involved.
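
    The deferred message is being relayed to pmx0.bresnan.net, i.e. sendmail is still doing an MX lookup for that recipient instead of using the smarthost - which is what happens when SMART_HOST sits in sendmail.mc but sendmail.cf was never regenerated. A sketch, assuming a stock /etc/mail layout (the address below is just a placeholder):

        # In /etc/mail/sendmail.mc (note the backtick/quote pair), then rebuild and restart:
        #   define(`SMART_HOST', `mail.bresnan.net')dnl
        cd /etc/mail && m4 sendmail.mc > sendmail.cf && service sendmail restart

        # Ask sendmail how it would route an outside address - the relay shown should now be mail.bresnan.net
        sendmail -bv someone@example.com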

    Read the article

  • Repeated requests on our server?

    - by pitty.platsch
    I encountered something strange in the access log of our Apache server which I cannot explain. Requests for webpages that I or my colleagues do from the office's Windows network get repeated by another IP (that we don't know) a couple of seconds later. The user agent repeating our requests is Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; InfoPath.2) Has anyone an idea? Update: I've got some more information now. The referrer of the replicate is set to the URL I requested before and it's not the exact same request as the protocol version is changed from 'HTTP/1.1' to 'HTTP/1.0'. The IP is not just one, it's just one of a subnet (80.40.134.*). It's just the first request to a resource that's get repeated, so it seems the "spy" is building up some kind of cache of visited places. The repeater is also picky. I tried randomly URLs with different HTTP status codes and different file patterns. 301s and 200s are redone, 404s not. Image extensions seem to be ignored. While doing my tests I discovered that this behavior seems to be common as I found other clients visiting just after the first requests: 66.249.73.184 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 10952 "-" "Mediapartners-Google" 50.17.125.180 - - [25/Oct/2012:10:51:33 +0100] "GET /foobar/ HTTP/1.1" 200 41312 "-" "Mozilla/5.0 (compatible; proximic; +http://www.proximic.com/info/spider.php)" I wasn't aware about this practice, so I don't see it that much as a threat anymore. I still want to find out who this is, so any further help is appreciated. I'll try later if this also happens if I query some other server where I have access to the access logs and will update here then.

    Read the article

  • SQL Server suddenly using only a small portion of CPU.

    - by hermiod
    We've got a Windows 2008 R2 server running SQL Server 2008. All of a sudden, the SQLServer process is refusing to go above 20% CPU usage. As of last week, when running a heavy query against the db it would rise to 100% usage as I would expect. We've had this server for a while and it seems strange that it would just suddenly have this limit. This limit is causing our queries to take a lot longer than they normally would. No one has (knowingly at least) made any changes to the server configuration. After a bit of investigation, I discovered the sys.dm_os_sys_memory view. This shows 'available physical memory is high' bu at the same time the available physical memory is 339552kb where as the total is 4193848kb. It is worth noting that this is a virtual server running on vmware. Is there a setting somewhere with in SQL Server that sets the maximum CPU usage? I've found the settings in resource governor, although this is currently off as it always has been. We have recently started using Spotlight for SQL Server by Quest Software. It's playback database was located on this server for a short time this morning, I first noticed the problem shortly afterwards, although I hadn't been doing any queries prior to this so I don't know if this is the point at which the problem began, however the database was working as expected on Friday afternoon. The Windows log shows that the following settings were applied to the SpotlightPlaybackDatabase when it was created. 02/21/2011 08:45:02,spid60,Unknown,Setting database option TORN_PAGE_DETECTION to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option MULTI_USER to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option READ_WRITE to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_UPDATE_STATISTICS to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CREATE_STATISTICS to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option ANSI_WARNINGS to OFF for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option CONCAT_NULL_YIELDS_NULL to ON for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option RECOVERY to SIMPLE for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option QUOTED_IDENTIFIER to OFF for database SpotlightPlaybackDatabase. 02/21/2011 08:45:02,spid60,Unknown,Setting database option AUTO_CLOSE to OFF for database SpotlightPlaybackDatabase. Could any of these settings changes modified the settings applied to the whole server?
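
    One way to interpret the 20% figure on an 8-vCPU guest: each scheduler is only 1/8 of total CPU, so a hard ceiling around 20% looks like work confined to one or two schedulers rather than a whole-instance cap. Back-of-the-envelope (a sketch):

        # one core flat out on an 8-vCPU box
        echo "scale=1; 100 / 8" | bc    # 12.5% of total, so ~20% is roughly 1-2 busy schedulers

        # worth checking: sp_configure 'max degree of parallelism', the CPU affinity mask,
        # and whether the VMware host is capping or co-scheduling the vCPUs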

    Read the article

  • SQLS Timeouts - High Reads in Profiler

    - by lb01
    I've audited a SQL Server 2008 instance with Profiler for one day. The overhead didn't seem to trouble this new client my company has. They are using a legacy VB6 application as a front-end, and they're experiencing timeouts once SQL Server's RAM usage is high. The server is currently running x64 SQL Server 2008 on a VM with nearly 9 GB of RAM; SQL Server's 'max server memory' option is currently set to 6 GB. I've put the results of the trace in a table and queried them using this query: SELECT TextData, ApplicationName, Reads FROM [TraceWednesday] WHERE textdata is not null and EventClass = 12 GROUP BY TextData, ApplicationName, Reads ORDER BY Reads DESC As I expected, some values are very high. Top Reads, in pages: 2504188 1965910 1445636 1252433 1239108 1210153 1088580 1072725 Am I correct in thinking that the top one (2504188 pages) is 20033504 KB, which is roughly ~20,000 MB, i.e. 20 GB? These queries are executed often and can take quite some time to run. Eventually RAM is used up because the cache keeps growing, and timeouts occur once SQL can no longer keep pages in the buffer pool. Costs go up. Am I correct in my understanding? I've read that I should tune the associated T-SQL and create appropriate indexes. Obviously cutting down the I/O would make SQL Server use less RAM - or maybe it would just slow down the process of chewing up the whole RAM. If far fewer pages are read, maybe it will all run much better even when usage is high (less time swapping, etc.)? Currently, our only option is to restart SQL once a week when RAM usage is high; suddenly the timeouts disappear and SQL breathes again. I'm sure lots of DBAs have been in this situation. Before I start digging out all of the bad T-SQL and putting indexes here and there, is there something else I can do? Any advice beyond what I know (not much yet) is much appreciated. Leo.
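
    To confirm the arithmetic: a SQL Server page is 8 KB, so the top entry works out as follows (a quick check):

        echo "$(( 2504188 * 8 )) KB"                 # 20033504 KB
        echo "$(( 2504188 * 8 / 1024 / 1024 )) GB"   # ~19 GB - so yes, roughly 20 GB of buffer reads per execution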

    Read the article

  • Why does this service refuse to start on Windows server 2003?

    - by PenguinCoder
    We have a Windows 2003 server with Cebos MQ1 (ver. 7 and ver. GRI) products installed that have been operational for years. After installing Microsoft 2010 C++ Redistributable package needed for other development, the MQ1 GRI service now fails to start. Event logs showed that two additional updates (.NET4 and the 2010 C++ Redistributable SP2) where installed by the redistributable as well. As soon as we discovered the MQ1 service was not starting properly, we removed these three installed packages. However the service still does not start; the dialog that pops up states 'The service started then stopped. '. Event logs when we attempt to start the service show nothing; IE: No errors, crashes, failures, or other information related to this service. Executing the MQ1Serv.exe directly specifies an issue of 'Missing command line operation, must specify install, uninstall and company abbreviation.' sc query MQ1Service(GRI) shows a clean exit for the Win32ExitCode of 0x0. Attempting to reinstall the client or server software gives an error of 'The procedure entry point ReInitializeCriticalSection could not be located in the dynamic link library KERNEL32.dll.' at the 'Registering Libraries' stage. At this point, further research has stated that the required function is in URL.dll and to verify the library is not corrupted. Running an sfc /scannow on the server has replaced a few DLLS; including the URL.DLL to versions from 2005. This actually broke other applications which required a reinstall (one of them being IE 7). After reinstall and updates, url.dll version is 7.0.5730.13 (2009) and Kernel32.dll is version 5.2.3790.4480 (2009). The MQ1 GRI service still will not start, specifying the same error as previous 'Service started then stopped'. Running a disassembler on Kernel32.dll and Url.dll show no functions named ReinitializeCriticalSection. Attempting the reinstall of the MQ1 client and server as well as starting the service again, fails once more. However, setting the compatibility mode on the MQ1 client install exe to 'Windows 95' actually gets the program to install. Setting the compatibility mode on the MQ1 server service does not enable it to start. I have been researching this problem for nearly a week and besides the advice to scan and replace url.dll, have come to no successful conclusions. This service was operational prior to the 2010 C++ install, without any additional parameters or settings. After removing the C++ install and all servicepacks/updates it installed silently, still does not correct the issue of the MQ1 GRI service not starting. Q: Has anyone else run into this or similar issue while attempting to get a service initialized? What have I overlooked or what else can I try in order to get this service started??

    Read the article

  • How to reliably map vSphere disks <-> Linux devices

    - by brianmcgee
    Task at hand After a virtual disk has been added to a Linux VM on vSphere 5, we need to identify the disks in order to automate the LVM storage provision. The virtual disks may reside on different datastores (e.g. sas or flash) and although they may be of the same size, their speed may vary. So I need a method to map the vSphere disks to Linux devices. Ideas Through the vSphere API, I am able to get the device info: Data Object Type: VirtualDiskFlatVer2BackingInfo Parent Managed Object ID: vm-230 Property Path: config.hardware.device[2000].backing Properties Name Type Value ChangeId string Unset contentId string "d58ec8c12486ea55c6f6d913642e1801" datastore ManagedObjectReference:Datastore datastore-216 (W5-CFAS012-Hybrid-CL20-004) deltaDiskFormat string "redoLogFormat" deltaGrainSize int Unset digestEnabled boolean false diskMode string "persistent" dynamicProperty DynamicProperty[] Unset dynamicType string Unset eagerlyScrub boolean Unset fileName string "[W5-CFAS012-Hybrid-CL20-004] l****9-000001.vmdk" parent VirtualDiskFlatVer2BackingInfo parent split boolean false thinProvisioned boolean false uuid string "6000C295-ab45-704e-9497-b25d2ba8dc00" writeThrough boolean false And on Linux I may read the uuid strings: [root@lx***** ~]# lsscsi -t [1:0:0:0] cd/dvd ata: /dev/sr0 [2:0:0:0] disk sas:0x5000c295ab45704e /dev/sda [3:0:0:0] disk sas:0x5000c2932dfa693f /dev/sdb [3:0:1:0] disk sas:0x5000c29dcd64314a /dev/sdc As you can see, the uuid string of disk /dev/sda looks somehow familiar to the string that is visible in the VMware API. Only the first hex digit is different (5 vs. 6) and it is only present to the third hyphen. So this looks promising... Alternative idea Select disks by controller. But is it reliable that the ascending SCSI Id also matches the next vSphere virtual disk? What happens if I add another DVD-ROM drive / USB Thumb drive? This will probably introduce new SCSI devices in between. Thats the cause why I think I will discard this idea. Questions Does someone know an easier method to map vSphere disks and Linux devices? Can someone explain the differences in the uuid strings? (I think this has something to do with SAS adressing initiator and target... WWN like...) May I reliably map devices by using those uuid strings? How about SCSI virtual disks? There is no uuid visible then... This task seems to be so obvious. Why doesn't Vmware think about this and simply add a way to query the disk mapping via Vmware Tools?
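
    A hedged sketch of the uuid mapping (assuming disk.EnableUUID is enabled for the VM, which is what exposes the VMDK backing uuid to the guest): the value Linux reports as the disk's SCSI identifier is the vSphere uuid with the hyphens removed, lower-cased and prefixed with the NAA type digit "3"; the 0x5000c295... value lsscsi prints is the target-port address, which only happens to share vendor bytes with it. That makes the match scriptable:

        # Print each disk's SCSI identifier and compare with config.hardware.device[...].backing.uuid
        # (scsi_id lives in /sbin on older releases)
        for d in /dev/sd[a-z]; do
            printf '%s  %s\n' "$d" "$(/lib/udev/scsi_id --whitelisted --device=$d)"
        done
        # e.g.  /dev/sda  36000c295ab45704e9497b25d2ba8dc00
        #       matches uuid 6000C295-ab45-704e-9497-b25d2ba8dc00 from the API above

        # The same identifiers appear as stable symlinks, handy for LVM automation
        ls -l /dev/disk/by-id/ | grep -i 6000c295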

    Read the article

  • Nginx reverse proxy with separate aliases

    - by gabeDel
    Interesting question I have this python code: import sys, bottle, gevent from bottle import * from gevent import * from gevent.wsgi import WSGIServer @route("/") def index(): yield "/" application=bottle.default_app() WSGIServer(('', port), application, spawn=None).serve_forever() that runs standalone with nignx infront of it as a reverse proxy. Now each of these pieces of code run separately but I run multiple of these per domain per project(directory) but the code thinks for some reason that it is top level and its not so when you go to mydomain.com/something it works but if you go to mydomain.com/something/ you will get an error. No I have tested and figured out that nginx is stripping the "something" from the request/query so that when you go to mydomain.com/something/ the code thinks you are going to mydomain.com// how do I get nginx to stop removing this information? Nginx site code: upstream mydomain { server 127.0.0.1:10100 max_fails=5 fail_timeout=10s; } upstream subdirectory { server 127.0.0.1:10199 max_fails=5 fail_timeout=10s; } server { listen 80; server_name mydomain.com; access_log /var/log/nginx/access.log; location /sub { proxy_pass http://subdirectory/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } location /subdir { proxy_pass http://subdirectory/; proxy_redirect off; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_max_temp_file_size 0; client_max_body_size 10m; client_body_buffer_size 128k; proxy_connect_timeout 90; proxy_send_timeout 90; proxy_read_timeout 90; proxy_buffer_size 4k; proxy_buffers 4 32k; proxy_busy_buffers_size 64k; proxy_temp_file_write_size 64k; } }
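
    What is happening here is the trailing slash on proxy_pass: with location /sub { proxy_pass http://subdirectory/; } nginx replaces the matched /sub prefix with /, so a request for /sub reaches bottle as / (works) while /sub/ reaches it as // (no such route, hence the error). Declaring the location as /sub/ and redirecting /sub to it - or dropping the URI part (proxy_pass http://subdirectory;) and mounting the bottle app under the prefix - keeps the two sides consistent. A quick way to see it, using the ports from the config above (a sketch):

        # Through nginx: the two spellings of the URL behave differently
        curl -s -o /dev/null -w '%{http_code}\n' http://mydomain.com/sub
        curl -s -o /dev/null -w '%{http_code}\n' http://mydomain.com/sub/

        # Straight at the bottle app: "//" is what the second request turns into upstream
        curl -s http://127.0.0.1:10199//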

    Read the article

  • Why is my concurrency capacity so low for my web app on a LAMP EC2 instance?

    - by AMF
    I come from a web developer background and have been humming along building my PHP app, using the CakePHP framework. The problem arose when I began the ab (Apache Bench) testing on the Amazon EC2 instance in which the app resides. I'm getting pretty horrendous average page load times, even though I'm running a c1.medium instance (2 cores, 2GB RAM), and I think I'm doing everything right. I would run:

ab -n 200 -c 20 http://localhost/heavy-but-view-cached-page.php

Here are the results:

Concurrency Level: 20
Time taken for tests: 48.197 seconds
Complete requests: 200
Failed requests: 0
Write errors: 0
Total transferred: 392111200 bytes
HTML transferred: 392047600 bytes
Requests per second: 4.15 [#/sec] (mean)
Time per request: 4819.723 [ms] (mean)
Time per request: 240.986 [ms] (mean, across all concurrent requests)
Transfer rate: 7944.88 [Kbytes/sec] received

While the ab test is running, I run vmstat, which shows that swap stays at 0, CPU is constantly at 80-100% (although I'm not sure I can trust this on a VM), and RAM utilization ramps up to about 1.6G (leaving 400M free). Load goes up to about 8 and the site slows to a crawl. Here's what I think I'm doing right on the code side:

- In the Chrome browser, uncached pages typically load in 800-1000ms, and cached pages load in 300-500ms. Not stunning, but not terrible either.
- Thanks to view caching, there might be at most one DB query per page load to write session data. So we can rule out a DB bottleneck.
- I have APC on.
- I am using Memcached to serve the view cache and other site caches.
- The xhprof code profiler shows that cached pages take up 10MB-40MB in memory and 100ms - 1000ms in wall time.

Pages that would be the worst offenders look something like this in xhprof:

Total Incl. Wall Time (microsec): 330,143 microsecs
Total Incl. CPU (microsecs): 320,019 microsecs
Total Incl. MemUse (bytes): 36,786,192 bytes
Total Incl. PeakMemUse (bytes): 46,667,008 bytes
Number of Function Calls: 5,195

My Apache config:

KeepAlive On
MaxKeepAliveRequests 100
KeepAliveTimeout 3
<IfModule mpm_prefork_module>
    StartServers 5
    MinSpareServers 5
    MaxSpareServers 10
    MaxClients 120
    MaxRequestsPerChild 1000
</IfModule>

Is there something wrong with the server? Some gotcha with the EC2? Or is it my code? Some obvious setting I should look into? Too many DNS lookups? What am I missing? I really want to get to 1,000 concurrency capacity, but at this rate, it ain't gonna happen.
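One thing worth checking against the numbers above: with prefork, MaxClients 120 only makes sense if 120 PHP processes actually fit in RAM. A rough back-of-the-envelope sketch in Python (the ~50 MB per process and ~512 MB reserved for the OS and other services are assumptions, based loosely on the xhprof peak memory reported above):

# Rough prefork capacity estimate for a 2 GB instance.
total_ram_mb = 2048
reserved_mb = 512        # assumed: OS, memcached, misc.
per_process_mb = 50      # assumed: xhprof above peaks around 46 MB on heavy pages

configured_max_clients = 120
memory_if_saturated_mb = configured_max_clients * per_process_mb
memory_safe_max_clients = (total_ram_mb - reserved_mb) // per_process_mb

print("MaxClients=120 fully busy could need ~%d MB" % memory_if_saturated_mb)  # ~6000 MB
print("A memory-safe MaxClients is roughly %d" % memory_safe_max_clients)      # ~30

That said, since vmstat shows 80-100% CPU with swap at 0, the box looks CPU-bound rather than memory-bound, so raising or lowering MaxClients alone won't get to 1,000 concurrent requests; the per-request CPU cost has to come down first.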

    Read the article

  • TFS 2010 SDK: Connecting to TFS 2010 Programmatically&ndash;Part 1

    - by Tarun Arora
    Technorati Tags: Team Foundation Server 2010, TFS 2010 SDK, TFS API, TFS Programming, TFS ALM   Download Working Demo

Great! You have reached the point where you would like to extend TFS 2010. The first step is to connect to TFS programmatically.

1. Download the TFS 2010 SDK => http://visualstudiogallery.msdn.microsoft.com/25622469-19d8-4959-8e5c-4025d1c9183d?SRC=VSIDE

2. Alternatively, you can download it from the Visual Studio Extension Manager.

3. Create a new Windows Forms Application project and add references to the TFS common and client DLLs.

Note - If Microsoft.TeamFoundation.Client and Microsoft.TeamFoundation.Common do not appear on the .NET tab of the References dialog box, use the Browse tab to add the assemblies. You can find them at %ProgramFiles%\Microsoft Visual Studio 10.0\Common7\IDE\ReferenceAssemblies\v2.0.

using Microsoft.TeamFoundation.Client;
using Microsoft.TeamFoundation.Framework.Client;
using Microsoft.TeamFoundation.Framework.Common;

4. There are several ways to connect to TFS; the two classes of interest are:

Option 1 – Class – TfsTeamProjectCollection

namespace Microsoft.TeamFoundation.Client
{
    public class TfsTeamProjectCollection : TfsConnection
    {
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection);
        public TfsTeamProjectCollection(Uri uri);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials);
        public TfsTeamProjectCollection(Uri uri, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(Uri uri, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsTeamProjectCollection(RegisteredProjectCollection projectCollection, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
        public TfsTeamProjectCollection(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

        public override CatalogNode CatalogNode { get; }
        public TfsConfigurationServer ConfigurationServer { get; internal set; }
        public override string Name { get; }

        public static Uri GetFullyQualifiedUriForName(string name);
        protected override object GetServiceInstance(Type serviceType, object serviceInstance);
        protected override object InitializeTeamFoundationObject(string fullName, object instance);
    }
}

Option 2 – Class – TfsConfigurationServer

namespace Microsoft.TeamFoundation.Client
{
    public class TfsConfigurationServer : TfsConnection
    {
        public TfsConfigurationServer(RegisteredConfigurationServer application);
        public TfsConfigurationServer(Uri uri);
        public TfsConfigurationServer(RegisteredConfigurationServer application, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(Uri uri, ICredentials credentials);
        public TfsConfigurationServer(Uri uri, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(Uri uri, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider);
        public TfsConfigurationServer(RegisteredConfigurationServer application, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);
        public TfsConfigurationServer(Uri uri, ICredentials credentials, ICredentialsProvider credentialsProvider, IdentityDescriptor identityToImpersonate);

        public override CatalogNode CatalogNode { get; }
        public override string Name { get; }

        protected override object GetServiceInstance(Type serviceType, object serviceInstance);
        public TfsTeamProjectCollection GetTeamProjectCollection(Guid collectionId);
        protected override object InitializeTeamFoundationObject(string fullName, object instance);
    }
}

Note – The TeamFoundationServer class is obsolete. Use the TfsTeamProjectCollection or TfsConfigurationServer classes to talk to a 2010 Team Foundation Server. To talk to a 2005 or 2008 Team Foundation Server, use the TfsTeamProjectCollection class.

5. Sample code for programmatically connecting to TFS 2010 using the TFS 2010 API. How do I know what the URI of my TFS server is? Note – You need to have the Team Project Collection 'view details' permission in order to connect; expect an authorization failure message if you do not have sufficient permissions.

Case 1: Connect by Uri

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri));

Case 2: Connect by Uri, prompt for credentials

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), new UICredentialsProvider());
configurationServer.EnsureAuthenticated();

Case 3: Connect by Uri, custom credentials

To use this method of connectivity you need to implement the ICredentialsProvider interface:

public class ConnectByImplementingCredentialsProvider : ICredentialsProvider
{
    public ICredentials GetCredentials(Uri uri, ICredentials iCredentials)
    {
        return new NetworkCredential("UserName", "Password", "Domain");
    }

    public void NotifyCredentialsAuthenticated(Uri uri)
    {
        throw new ApplicationException("Unable to authenticate");
    }
}

And now consume the implementation of the interface:

string _myUri = @"https://tfs.codeplex.com:443/tfs/tfs30";
ConnectByImplementingCredentialsProvider connect = new ConnectByImplementingCredentialsProvider();
ICredentials iCred = new NetworkCredential("UserName", "Password", "Domain");
connect.GetCredentials(new Uri(_myUri), iCred);
TfsConfigurationServer configurationServer = TfsConfigurationServerFactory.GetConfigurationServer(new Uri(_myUri), connect);
configurationServer.EnsureAuthenticated();

6. Programmatically query TFS 2010 using the TFS SDK for all Team Project Collections, retrieve all Team Projects, and output the display name and description of each team project.
CatalogNode catalogNode = configurationServer.CatalogNode;
ReadOnlyCollection<CatalogNode> tpcNodes = catalogNode.QueryChildren(
    new Guid[] { CatalogResourceTypes.ProjectCollection },
    false,
    CatalogQueryOptions.None);

// tpc = Team Project Collection
foreach (CatalogNode tpcNode in tpcNodes)
{
    Guid tpcId = new Guid(tpcNode.Resource.Properties["InstanceId"]);
    TfsTeamProjectCollection tpc = configurationServer.GetTeamProjectCollection(tpcId);

    // Get catalog of tp = 'Team Projects' for the tpc = 'Team Project Collection'
    var tpNodes = tpcNode.QueryChildren(
        new Guid[] { CatalogResourceTypes.TeamProject },
        false,
        CatalogQueryOptions.None);

    foreach (var p in tpNodes)
    {
        Debug.Write(Environment.NewLine + " Team Project : " + p.Resource.DisplayName +
                    " - " + p.Resource.Description + Environment.NewLine);
    }
}

The output and screenshots of the attached demo application are shown in the original post. You can download a working demo that uses the TFS 2010 SDK to programmatically connect to TFS 2010.

    Read the article

  • May 20th Links: ASP.NET MVC, ASP.NET, .NET 4, VS 2010, Silverlight

    - by ScottGu
    Here is the latest in my link-listing series.  Also check out my VS 2010 and .NET 4 series and ASP.NET MVC 2 series for other on-going blog series I'm working on. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

ASP.NET MVC

- How to Localize an ASP.NET MVC Application: Michael Ceranski has a good blog post that describes how to localize ASP.NET MVC 2 applications.
- ASP.NET MVC with jTemplates Part 1 and Part 2: Steve Gentile has a nice two-part set of blog posts that demonstrate how to use the jTemplate and DataTable jQuery libraries to implement client-side data binding with ASP.NET MVC.
- CascadingDropDown jQuery Plugin for ASP.NET MVC: Raj Kaimal has a nice blog post that demonstrates how to implement a dynamically constructed cascading drop-down list on the client using jQuery and ASP.NET MVC.
- How to Configure VS 2010 Code Coverage for ASP.NET MVC Unit Tests: Visual Studio enables you to calculate the "code coverage" of your unit tests.  This measures the percentage of code within your application that is exercised by your tests, and can give you a sense of how much test coverage you have.  Gunnar Peipman demonstrates how to configure this for ASP.NET MVC projects.
- Shrinkr URL Shortening Service Sample: A nice open source application and code sample built by Kazi Manzur that demonstrates how to implement a URL shortening service (like bit.ly) using ASP.NET MVC 2 and EF4.  More details here.
- Creating RSS Feeds in ASP.NET MVC: Damien Guard has a nice post that describes a cool new "FeedResult" class he created that makes it easy to publish and expose RSS feeds from within ASP.NET MVC sites.
- NoSQL with MongoDB, NoRM and ASP.NET MVC Part 1 and Part 2: A nice two-part blog series by Shiju Varghese on how to use MongoDB (a document database) with ASP.NET MVC.  If you are interested in document databases also make sure to check out the Raven DB project from Ayende.
- Using the FCKEditor with ASP.NET MVC: Quick blog post that describes how to use FCKEditor – an open source HTML text editor – with ASP.NET MVC.

ASP.NET

- Replace Html.Encode Calls with the New HTML Encoding Syntax: Phil Haack has a good blog post that describes a useful way to quickly update your ASP.NET pages and ASP.NET MVC views to use the new <%: %> encoding syntax in ASP.NET 4.  I blogged about the new <%: %> syntax – it provides an easy and concise way to HTML encode content.
- Integrating Twitter into an ASP.NET Website using OAuth: Scott Mitchell has a nice article that describes how to take advantage of Twitter within an ASP.NET website using the OAuth protocol – a simple, secure protocol for granting API access.
- Creating an ASP.NET report using VS 2010 Part 1, Part 2, and Part 3: Raj Kaimal has a nice three-part set of blog posts that detail how to use SQL Server Reporting Services, ASP.NET 4 and VS 2010 to create a dynamic reporting solution.
- Three Hidden Extensibility Gems in ASP.NET 4: Phil Haack blogs about three obscure but useful extensibility points enabled with ASP.NET 4.

.NET 4

- Entity Framework 4 Video Series: Julie Lerman has a nice, free, 7-part video series on MSDN that walks through how to use the new EF4 capabilities with VS 2010 and .NET 4.  I'll be covering EF4 in a blog series that I'm going to start shortly as well.
- Getting Lazy with System.Lazy: System.Lazy and System.Lazy<T> are new features in .NET 4 that provide a way to create objects that may need to perform time consuming operations and defer the execution of the operation until it is needed.  Derik Whittaker has a nice write-up that describes how to use them.
- LINQ to Twitter: Nifty open source library on Codeplex that enables you to use LINQ syntax to query Twitter.

Visual Studio 2010

- Using Intellitrace in VS 2010: Chris Koenig has a nice 10 minute video that demonstrates how to use the new Intellitrace features of VS 2010 to enable DVR playback of your debug sessions.
- Make the VS 2010 IDE Colors look like VS 2008: Scott Hanselman has a nice blog post that covers the Visual Studio Color Theme Editor extension – which allows you to customize the VS 2010 IDE however you want.
- How to understand your code using Dependency Graphs, Sequence Diagrams, and the Architecture Explorer: Jennifer Marsman has a nice blog post that describes how to take advantage of some of the new architecture features within VS 2010 to quickly analyze applications and legacy code-bases.
- How to maintain control of your code using Layer Diagrams: Another great blog post by Jennifer Marsman that demonstrates how to set up a "layer diagram" within VS 2010 to enforce clean layering within your applications.  This enables you to enforce a compiler error if someone inadvertently violates a layer design rule.
- Collapse Selection in Solution Explorer Extension: Useful VS 2010 extension that enables you to quickly collapse "child nodes" within the Visual Studio Solution Explorer.  If you have deeply nested project structures this extension is useful.

Silverlight and Windows Phone 7

- Building a Simple Windows Phone 7 Application: A nice tutorial blog post that demonstrates how to take advantage of Expression Blend to create an animated Windows Phone 7 application.  If you haven't checked out my Windows Phone 7 Twitter Tutorial I also recommend reading that.

Hope this helps,

Scott

P.S. If you haven't already, check out this month's "Find a Hoster" page on the www.asp.net website to learn about great (and very inexpensive) ASP.NET hosting offers.

    Read the article
