Search Results

Search found 36186 results on 1448 pages for 'sql 11'.

Page 790 of 1448

  • Achieve Spatial Data Support in SSIS

    Overview: SQL Server 2008 introduced a new category of datatypes known as spatial datatypes, which store spatial information. The new spatial datatypes are geography and geometry. SQL Server Management Studio comes with good support for these spatial data ... [Read Full Article]
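
    The excerpt names the two types but is cut short; here is a minimal, hypothetical illustration (not taken from the article) of the geography type storing a point in SRID 4326:

        -- A point at roughly Seattle's coordinates; names and values are illustrative only
        DECLARE @g geography = geography::Point(47.6062, -122.3321, 4326);
        SELECT @g.STAsText() AS WellKnownText,   -- e.g. POINT (-122.3321 47.6062)
               @g.STSrid     AS Srid;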

    Read the article

  • Copy Table to Another Database

    - by Derek Dieter
    There are a few methods of copying a table to another database, depending on your situation. Same SQL Server Instance: if you are trying to copy a table to a database on the same instance of SQL Server, the easiest solution is to use a SELECT INTO with fully qualified database names: SELECT * INTO Database2.dbo.TargetTable FROM Database1.dbo.SourceTable. This will [...]
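
    For context, a short sketch of the same-instance statement from the excerpt, plus one common variation for a different instance; the linked server name below is an assumption, not something the excerpt specifies:

        -- Same instance: fully qualified three-part names (as in the excerpt)
        SELECT *
        INTO   Database2.dbo.TargetTable
        FROM   Database1.dbo.SourceTable;

        -- Different instance: one possible approach, assuming a linked server
        -- named [RemoteServer] has already been configured
        SELECT *
        INTO   Database2.dbo.TargetTable
        FROM   [RemoteServer].Database1.dbo.SourceTable;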

    Read the article

  • SSMS Tools Pack 2.0

    If you work with SSMS, you’ll know how frustrating it can be when tasks you perform every day aren’t part of the core features. Mladen Prajdic certainly found it so, which is why he developed his free SSMS Tools Pack. Now on its second version, Grant Fritchey explains the functionality of this great free plugin.

    Read the article

  • Partitioned Tables, Indexes and Execution Plans: a Cautionary Tale

    Table partitioning is a blessing in that it makes large tables that have varying access patterns more scalable and manageable, but it is a mixed blessing. It is important to understand the down-side before using table partitioning.

    Read the article

  • Init Replication From Backup

    One of the great features of SQL Server replication is the ability to initialize a subscription from a backup instead of from a snapshot. The official use for this is to take a database backup, restore it to the subscriber, and then replicate any changes made after the backup was taken.
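
    A rough sketch of what that setup can look like for a transactional publication; every name and path below is a placeholder, and the full article should be consulted for the complete procedure:

        -- Allow the publication to be initialized from a backup
        EXEC sp_changepublication
            @publication = N'SalesPub',
            @property    = N'allow_initialize_from_backup',
            @value       = 'true';

        -- Create the subscription, pointing it at the backup restored on the subscriber
        EXEC sp_addsubscription
            @publication      = N'SalesPub',
            @subscriber       = N'SUBSCRIBER01',
            @destination_db   = N'SalesDB',
            @sync_type        = N'initialize with backup',
            @backupdevicetype = 'disk',
            @backupdevicename = N'\\backupshare\SalesDB_full.bak';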

    Read the article

  • Test Preparation Materials

    - by GavinPayneUK
    I wrote an article on my personal blog a few weeks ago about my preparation for my first Microsoft exam, 70-432. ( link ) Since then I’ve been reading and demo’ing all the relevant features of SQL Server in the hope that if I get questions on them I’ll be prepared. I’ve learnt a few things in the last couple of weeks, some good, some bad, which I’ll now share. The first thing I found is that learning about how SQL Server works is fun and interesting, far better than spending an evening...(read more)

    Read the article

  • Fundamentals of Vendor Management

    Creating and maintaining mutually beneficial relationships with external vendors is one of the pillars of good project management. Dwain Camps goes through what to expect and allow in your client-vendor relationship during the various stages of a given project to ensure its success and secure that all-important win-win outcome.

    Read the article

  • The Perils of Running Database Repair

    In a perfect world everyone has the right backups to be able to recover within the downtime and data-loss service level agreements when accidental data loss or corruption occurs. Unfortunately we don’t live in a perfect world and so many people find that they don’t have the backups they need to recover when faced with corruption.

    Read the article

  • Cerebrata helps developers master Microsoft Azure with launch of new Just Azure resource

    Just Azure, a new site from Cerebrata (part of Red Gate), provides essential technical resources and educational articles to support the Microsoft community in navigating and understanding the rapidly evolving Azure platform.

    Read the article

  • Just what comes in the box?

    - by GavinPayneUK
    As a SQL Server architect/consultant/advisor/advocate/you name it, one of my roles is to advise clients on the best solution for their requirements and their budget. That last piece may seem like common sense, but people often forget where the dividing line between SQL Server's Standard and Enterprise editions sits. Knowing what comes in each edition, or more commonly what doesn’t come in Standard Edition, especially in the area of high availability, is critical for me when I’m...(read more)

    Read the article

  • New and Improved Functionality in SSIS 2008

    While it does not bring revolutionary changes, the release of SQL Server 2008 delivers functionality, scalability, and performance improvements to SQL Server Integration Services (SSIS). Here is a comprehensive listing of new and enhanced features of SSIS with a short description of each.

    Read the article

  • TSQL Challenge 79 - Finding the Islands

    The challenge is to find the islands (gaps) in sequential dates. You need to write a query to identify the continuous intervals along with their start and end dates.
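
    One well-known pattern for this kind of challenge, assuming the dates are distinct (the table and column names below are invented for illustration, not part of the challenge itself): dates whose value minus their ROW_NUMBER() position land on the same anchor date belong to the same unbroken island.

        SELECT MIN(EventDate) AS IslandStart,
               MAX(EventDate) AS IslandEnd
        FROM (
            SELECT EventDate,
                   -- consecutive dates share the same Grp value
                   DATEADD(DAY, -ROW_NUMBER() OVER (ORDER BY EventDate), EventDate) AS Grp
            FROM dbo.Events
        ) AS d
        GROUP BY Grp
        ORDER BY IslandStart;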

    Read the article

  • Exploring Semantic Search Key Term Relevance

    SQL Server's 'Semantic Search' feature seemed exciting when first shown. Was it really true that Microsoft had come up with a system to rival the industry-leaders, one that could extract the contextual meaning of terms in text, or automatically categorise the subject matter of text? On first inspection, it seems unlikely.
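
    For readers who have not seen the feature, a hypothetical taste of the key-phrase side of it in SQL Server 2012; the table and column are invented, and full-text plus statistical semantic indexing would already have to be enabled on them:

        -- Top key phrases the semantic index extracted across dbo.Documents.DocContent
        SELECT TOP (10) keyphrase, score
        FROM SEMANTICKEYPHRASETABLE(dbo.Documents, DocContent)
        ORDER BY score DESC;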

    Read the article

  • STOP! Wrong Server!

    - by merrillaldrich
    Some time ago I wrote a post about old-time T-SQL flow control. Part of the idea was to illustrate how to make scripts abort on errors, which is surprisingly convoluted in T-SQL. Today I have a more specific example: how to check that we are really about to execute in the right database on the right server, and halt if not. Perhaps you’ve connected to the wrong server, or the database you intend to modify doesn’t have the name your script expected. “USE database” is wonderful, but what if it...(read more)
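
    A minimal sketch of such a guard, with placeholder server and database names; the post itself should be read for the full treatment:

        -- Abort the script unless we are on the expected server and database
        IF @@SERVERNAME <> N'PRODSQL01' OR DB_NAME() <> N'SalesDB'
        BEGIN
            -- Severity 20 terminates the connection (needs WITH LOG and sysadmin rights);
            -- a gentler alternative is SET NOEXEC ON
            RAISERROR('Wrong server or database - aborting.', 20, 1) WITH LOG;
        END;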

    Read the article

  • Execute As

    Learn how you can use Execute As in a stored procedure to control permissions. This article includes an example that limits access to encryption routines for users.
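
    As a rough illustration of the idea (not the article's own example, and every object name here is made up): the procedure runs under its owner's permissions, so callers only ever need EXECUTE on the procedure, not access to the keys themselves.

        CREATE PROCEDURE dbo.usp_GetCardNumber
            @CardID int
        WITH EXECUTE AS OWNER
        AS
        BEGIN
            OPEN SYMMETRIC KEY CardKey DECRYPTION BY CERTIFICATE CardCert;
            SELECT CONVERT(varchar(25), DECRYPTBYKEY(CardNumberEnc)) AS CardNumber
            FROM dbo.Cards
            WHERE CardID = @CardID;
            CLOSE SYMMETRIC KEY CardKey;
        END;
        GO
        -- Callers get EXECUTE on the wrapper only
        GRANT EXECUTE ON dbo.usp_GetCardNumber TO AppUsers;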

    Read the article

  • --log-slave-updates is OFF but some updates are still logged to the slave binary log?

    - by quanta
    MySQL version 5.5.14 According to the document, by the default, slave does not log to its binary log any updates that are received from a master server. Here are my config. on the slave: # egrep 'bin|slave' /etc/my.cnf relay-log=mysqld-relay-bin log-bin = /var/log/mysql/mysql-bin binlog-format=MIXED sync_binlog = 1 log-bin-trust-function-creators = 1 mysql> show global variables like 'log_slave%'; +-------------------+-------+ | Variable_name | Value | +-------------------+-------+ | log_slave_updates | OFF | +-------------------+-------+ 1 row in set (0.01 sec) mysql> select @@log_slave_updates; +---------------------+ | @@log_slave_updates | +---------------------+ | 0 | +---------------------+ 1 row in set (0.00 sec) but slave still logs the some changes to its binary logs, let's see the file size: -rw-rw---- 1 mysql mysql 37M Apr 1 01:00 /var/log/mysql/mysql-bin.001256 -rw-rw---- 1 mysql mysql 25M Apr 2 01:00 /var/log/mysql/mysql-bin.001257 -rw-rw---- 1 mysql mysql 46M Apr 3 01:00 /var/log/mysql/mysql-bin.001258 -rw-rw---- 1 mysql mysql 115M Apr 4 01:00 /var/log/mysql/mysql-bin.001259 -rw-rw---- 1 mysql mysql 105M Apr 4 18:54 /var/log/mysql/mysql-bin.001260 and the sample query when reading these binary files with mysqlbinlog utility: #120404 19:08:57 server id 3 end_log_pos 110324763 Query thread_id=382435 exec_time=0 error_code=0 SET TIMESTAMP=1333541337/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'118212' COLLATE 'utf8_general_ci')) /*!*/; # at 110324763 Did I miss something? Reply to @RolandoMySQLDBA: If replication brought this over, then the same query has to be in the relay logs. Please go find the relay log that has the INSERT query with the same TIMESTAMP (1333541337). There is no such query with the same TIMESTAMP in the relay logs. If you cannot find it in the relay logs, then look and see if Infobright is posting the INSERT query. In that instance, the INSERT should be recorded in the binary logs of the Slave. Looking more deeply into the binary logs, I see that almost of the queries are CREATE/INSERT/UPDATE/DROP "temporary" tables, something like this: # at 123873315 #120405 0:42:04 server id 3 end_log_pos 123873618 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; SET @@session.pseudo_thread_id=395373/*!*/; CREATE TEMPORARY TABLE `norep_tmpcampaign` ( `campaignid` INTEGER(11) NOT NULL DEFAULT '0' , `status` INTEGER(11) NOT NULL DEFAULT '0' , `updated` DATETIME, KEY `campaignid` (`campaignid`) )ENGINE=MEMORY /*!*/; # at 123873618 #120405 0:42:04 server id 3 end_log_pos 123873755 Query thread_id=395373 exec_time=0 error_code=0 SET TIMESTAMP=1333561324/*!*/; DROP TABLE IF EXISTS `norep_tmpcampaign1` /* generated by server */ "temporary" here means that they are dropped after calculation is done. 
I also tells the slave not to replicate any statement matches the norep_ wildcard pattern: replicate-wild-ignore-table=%.norep_% But there is an exception table in the binary logs: # at 123828094 #120405 0:37:21 server id 3 end_log_pos 123828495 Query thread_id=395209 exec_time=0 error_code=0 SET TIMESTAMP=1333561041/*!*/; INSERT INTO sessions (SessionId, ApplicationName, Created, Expires, LockDate, LockId, Timeout, Locked, SessionItems, Fla gs) Values('pgv2exo4y4vo4ccz44vwznu0', '/', '2012-04-05 00:37:21', '2012-04-05 00:57:21', '2012-04-05 00:37:21', 0, 20, 0, 'AwAAAP////8IdXNlcm5hbWUGdXNlcmlkCHBlcm1pdGlkAgAAAAQAAAAGAAAAAQABAAEA', 0) /*!*/; Description: mysql> desc reportingdb.sessions; +-----------------+------------------+------+-----+---------------------+-------+ | Field | Type | Null | Key | Default | Extra | +-----------------+------------------+------+-----+---------------------+-------+ | SessionId | varchar(64) | NO | PRI | | | | ApplicationName | varchar(255) | NO | | | | | Created | timestamp | NO | | 0000-00-00 00:00:00 | | | Expires | timestamp | NO | | 0000-00-00 00:00:00 | | | LockDate | timestamp | NO | | 0000-00-00 00:00:00 | | | LockId | int(11) unsigned | NO | | NULL | | | Timeout | int(11) unsigned | NO | | NULL | | | Locked | bit(1) | NO | | NULL | | | SessionItems | varchar(255) | YES | | NULL | | | Flags | int(11) | NO | | NULL | | +-----------------+------------------+------+-----+---------------------+-------+ I'm sure all these queries are posting by MySQL, not Infobright: $ mysql-ib -u root -p Enter password: Welcome to the MySQL monitor. Commands end with ; or \g. Your MySQL connection id is 48971 Server version: 5.1.40 build number (revision)=IB_4.0.5_r15240_15370(ice) (static) Type 'help;' or '\h' for help. Type '\c' to clear the current input statement. mysql> select * from information_schema.tables where table_name='sessions'; Empty set (0.02 sec) I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs on slave: # at 311664029 #120405 0:15:23 server id 1 end_log_pos 311664006 Query thread_id=10458250 exec_time=0 error_code=0 use testuser/*!*/; SET TIMESTAMP=1333559723/*!*/; update users set email2='[email protected]' where id=22 /*!*/; Pay attention to the server id, in the relay logs, server id is master's (1) and in the binary log, server id is slave's (3 in this case). Reply to @RolandoMySQLDBA: Thu Apr 5 10:06:00 ICT 2012 Run CREATE DATABASE quantatest; on the Master now, please. Tell me if CREATE DATABASE quantatest; showed up in the Slave's Binary Logs. As I said above: I've been trying some INSERT/UPDATE queries with testing tables on the master, it is copied to the relay logs, not binary logs and you can guess, IO thread copied it to the relay logs, not binary logs. #120405 10:07:25 server id 1 end_log_pos 347573819 Query thread_id=10480775 exec_time=0 error_code=0 SET TIMESTAMP=1333595245/*!*/; /*!\C latin1 *//*!*/; SET @@session.character_set_client=8,@@session.collation_connection=8,@@session.collation_server=8/*!*/; create database quantatest /*!*/; The question must probably change to: why some update queries still logged to the slave binary logs althrough --log-slave-updates is disabled? Where they come from? 
Here are few last: /*!*/; # at 27492197 #120405 10:12:45 server id 3 end_log_pos 27492370 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; CREATE TEMPORARY TABLE norep_SplitValues ( value VARCHAR(1000) NOT NULL ) ENGINE=MEMORY /*!*/; # at 27492370 #120405 10:12:45 server id 3 end_log_pos 27492445 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; BEGIN /*!*/; # at 27492445 #120405 10:12:45 server id 3 end_log_pos 27492619 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; INSERT INTO norep_SplitValues VALUES ( NAME_CONST('cur_string',_utf8'119577' COLLATE 'utf8_general_ci')) /*!*/; # at 27492619 #120405 10:12:45 server id 3 end_log_pos 27492695 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595565/*!*/; COMMIT /*!*/; # at 27492918 #120405 10:12:46 server id 3 end_log_pos 27493115 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; SELECT `reportingdb`.`selfserving_get_locationad`(_utf8'3' COLLATE 'utf8_general_ci',_utf8'' COLLATE 'utf8_general_ci') /*!*/; # at 27493199 #120405 10:12:46 server id 3 end_log_pos 27493353 Query thread_id=410353 exec_time=0 error_code=0 SET TIMESTAMP=1333595566/*!*/; /*!\C utf8 *//*!*/; SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/; DROP TEMPORARY TABLE IF EXISTS `norep_SplitValues` /* generated by server */ /*!*/;

    Read the article

  • Root certificate authority works windows/linux but not mac osx - (malformed)

    - by AKwhat
    I have created a self-signed root certificate authority which if I install onto windows, linux, or even using the certificate store in firefox (windows/linux/macosx) will work perfectly with my terminating proxy. I have installed it into the system keychain and I have set the certificate to always trust. Within the chrome browser details it says "The certificate that Chrome received during this connection attempt is not formatted correctly, so Chrome cannot use it to protect your information. Error type: Malformed certificate" I used this code to create the certificate: openssl genrsa -des3 -passout pass:***** -out private/server.key 4096 openssl req -batch -passin pass:***** -new -x509 -nodes -sha1 -days 3600 -key private/server.key -out server.crt -config ../openssl.cnf If the issue is NOT that it is malformed (because it works everywhere else) then what else could it be? Am I installing it incorrectly? To be clear: Within the windows/linux OS, all browsers work perfectly. Within mac only firefox works if it uses its internal certificate store and not the keychain. It's the keychain method of importing a certificate that causes the issue. Thus, all browsers using the keychain will not work. Root CA Cert: -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE----- Intermediate CA Cert: Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption Issuer: C=*****, ST=*******, L=******, O=*******, CN=******/emailAddress=****** Validity Not Before: May 21 13:57:32 2014 GMT Not After : Jun 20 13:57:32 2014 GMT Subject: C=*****, ST=********, O=*******, CN=*******/emailAddress=******* Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (4096 bit) Modulus (4096 bit): 00:e7:2d:75:38:23:02:8e:b9:8d:2f:33:4c:2a:11: 6d:d4:f8:29:ab:f3:fc:12:00:0f:bb:34:ec:35:ed: a5:38:10:1e:f3:54:c2:69:ae:3b:22:c0:0d:00:97: 08:da:b9:c9:32:c0:c6:b1:8b:22:7e:53:ea:69:e2: 6d:0f:bd:f5:96:b2:d0:0d:b2:db:07:ba:f1:ce:53: 8a:5e:e0:22:ce:3e:36:ed:51:63:21:e7:45:ad:f9: 4d:9b:8f:7f:33:4c:ed:fc:a6:ac:16:70:f5:96:36: 37:c8:65:47:d1:d3:12:70:3e:8d:2f:fb:9f:94:e0: c9:5f:d0:8c:30:e0:04:23:38:22:e5:d9:84:15:b8: 31:e7:a7:28:51:b8:7f:01:49:fb:88:e9:6c:93:0e: 63:eb:66:2b:b4:a0:f0:31:33:8b:b4:04:84:1f:9e: d5:ed:23:cc:bf:9b:8e:be:9a:5c:03:d6:4f:1a:6f: 2d:8f:47:60:6c:89:c5:f0:06:df:ac:cb:26:f8:1a: 48:52:5e:51:a0:47:6a:30:e8:bc:88:8b:fd:bb:6b: c9:03:db:c2:46:86:c0:c5:a5:45:5b:a9:a3:61:35: 37:e9:fc:a1:7b:ae:71:3a:5c:9c:52:84:dd:b2:86: b3:2e:2e:7a:5b:e1:40:34:4a:46:f0:f8:43:26:58: 30:87:f9:c6:c9:bc:b4:73:8b:fc:08:13:33:cc:d0: b7:8a:31:e9:38:a3:a9:cc:01:e2:d4:c2:a5:c1:55: 52:72:52:2b:06:a3:36:30:0c:5c:29:1a:dd:14:93: 2b:9d:bf:ac:c1:2d:cd:3f:89:1f:bc:ad:a4:f2:bd: 81:77:a9:f4:f0:b9:50:9e:fb:f5:da:ee:4e:b7:66: e5:ab:d1:00:74:29:6f:01:28:32:ea:7d:3f:b3:d7: 97:f2:60:63:41:0f:30:6a:aa:74:f4:63:4f:26:7b: 71:ed:57:f1:d4:99:72:61:f4:69:ad:31:82:76:67: 21:e1:32:2f:e8:46:d3:28:61:b1:10:df:4c:02:e5: d3:cc:22:30:a4:bb:81:10:dc:7d:49:94:b2:02:2d: 96:7f:e5:61:fa:6b:bd:22:21:55:97:82:18:4e:b5: a0:67:2b:57:93:1c:ef:e5:d2:fb:52:79:95:13:11: 20:06:8c:fb:e7:0b:fd:96:08:eb:17:e6:5b:b5:a0: 8d:dd:22:63:99:af:ad:ce:8c:76:14:9a:31:55:d7: 95:ea:ff:10:6f:7c:9c:21:00:5e:be:df:b0:87:75: 5d:a6:87:ca:18:94:e7:6a:15:fe:27:dd:28:5e:c0: ad:d2:91:d3:2d:8e:c3:c0:9f:fb:ff:c0:36:7e:e2: d7:bc:41 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Subject Alternative Name: DNS:localhost, DNS:dropbox.com, DNS:*.dropbox.com, DNS:filedropper.com, DNS:*.filedropper.com X509v3 Subject Key Identifier: 
F3:E5:38:5B:3C:AF:1C:73:C1:4C:7D:8B:C8:A1:03:82:65:0D:FF:45 X509v3 Authority Key Identifier: keyid:2B:37:39:7B:9F:45:14:FE:F8:BC:CA:E0:6E:B4:5F:D6:1A:2B:D7:B0 DirName:/C=****/ST=******/L=*******/O=*******/CN=******/emailAddress=******* serial:EE:8C:A3:B4:40:90:B0:62 X509v3 Basic Constraints: CA:TRUE Signature Algorithm: sha1WithRSAEncryption 46:2a:2c:e0:66:e3:fa:c6:80:b6:81:e7:db:c3:29:ab:e7:1c: f0:d9:a0:b7:a9:57:8c:81:3e:30:8f:7d:ef:f7:ed:3c:5f:1e: a5:f6:ae:09:ab:5e:63:b4:f6:d6:b6:ac:1c:a0:ec:10:19:ce: dd:5a:62:06:b4:88:5a:57:26:81:8e:38:b9:0f:26:cd:d9:36: 83:52:ec:df:f4:63:ce:a1:ba:d4:1c:ec:b6:66:ed:f0:32:0e: 25:87:79:fa:95:ee:0f:a0:c6:2d:8f:e9:fb:11:de:cf:26:fa: 59:fa:bd:0b:74:76:a6:5d:41:0d:cd:35:4e:ca:80:58:2a:a8: 5d:e4:d8:cf:ef:92:8d:52:f9:f2:bf:65:50:da:a8:10:1b:5e: 50:a7:7e:57:7b:94:7f:5c:74:2e:80:ae:1e:24:5f:0b:7b:7e: 19:b6:b5:bd:9d:46:5a:e8:47:43:aa:51:b3:4b:3f:12:df:7f: ef:65:21:85:c2:f6:83:84:d0:8d:8b:d9:6d:a8:f9:11:d4:65: 7d:8f:28:22:3c:34:bb:99:4e:14:89:45:a4:62:ed:52:b1:64: 9a:fd:08:cd:ff:ca:9e:3b:51:81:33:e6:37:aa:cb:76:01:90: d1:39:6f:6a:8b:2d:f5:07:f8:f4:2a:ce:01:37:ba:4b:7f:d4: 62:d7:d6:66:b8:78:ad:0b:23:b6:2e:b0:9a:fc:0f:8c:4c:29: 86:a0:bc:33:71:e5:7f:aa:3e:0e:ca:02:e1:f6:88:f0:ff:a2: 04:5a:f5:d7:fe:7d:49:0a:d2:63:9c:24:ed:02:c7:4d:63:e6: 0c:e1:04:cd:a4:bf:a8:31:d3:10:db:b4:71:48:f7:1a:1b:d9: eb:a7:2e:26:00:38:bd:a8:96:b4:83:09:c9:3d:79:90:e1:61: 2c:fc:a0:2c:6b:7d:46:a8:d7:17:7f:ae:60:79:c1:b6:5c:f9: 3c:84:64:7b:7f:db:e9:f1:55:04:6e:b5:d3:5e:d3:e3:13:29: 3f:0b:03:f2:d7:a8:30:02:e1:12:f4:ae:61:6f:f5:4b:e9:ed: 1d:33:af:cd:9b:43:42:35:1a:d4:f6:b9:fb:bf:c9:8d:6c:30: 25:33:43:49:32:43:a5:a8:d8:82:ef:b0:a6:bd:8b:fb:b6:ed: 72:fd:9a:8f:00:3b:97:a3:35:a4:ad:26:2f:a9:7d:74:08:82: 26:71:40:f9:9b:01:14:2e:82:fb:2f:c0:11:51:00:51:07:f9: e1:f6:1f:13:6e:03:ee:d7:85:c2:64:ce:54:3f:15:d4:d7:92: 5f:87:aa:1e:b4:df:51:77:12:04:d2:a5:59:b3:26:87:79:ce: ee:be:60:4e:87:20:5c:7f -----BEGIN CERTIFICATE----- **some base64 stuff** -----END CERTIFICATE-----

    Read the article

  • High Server Load cannot figure out why

    - by Tim Bolton
    My server is currently running CentOS 5.2, with WHM 11.34. Currently, we're at 6.43 to 12 for a load average. The sites that we're hosting are taking a lot time to respond and resolve. top doesn't show anything out of the ordinary and iftop doesn't show a lot of traffic. We have many resellers, and some not so good at writing code, how can we find the culprit? vmstat output: vmstat procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------ r b swpd free buff cache si so bi bo in cs us sy id wa st 0 2 84 78684 154916 1021080 0 0 72 274 0 14 6 3 80 12 0 top output (ordered by %CPU) top - 21:44:43 up 5 days, 10:39, 3 users, load average: 3.36, 4.18, 4.73 Tasks: 222 total, 3 running, 219 sleeping, 0 stopped, 0 zombie Cpu(s): 5.8%us, 2.3%sy, 0.2%ni, 79.6%id, 11.8%wa, 0.0%hi, 0.2%si, 0.0%st Mem: 2074580k total, 1863044k used, 211536k free, 174828k buffers Swap: 2040212k total, 84k used, 2040128k free, 987604k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:45.87 mysqld 21772 igniteth 17 0 23200 7152 3932 R 4 0.3 0:00.02 php 1586 root 10 -5 0 0 0 S 2 0.0 11:45.19 kjournald 21759 root 15 0 2416 1024 732 R 2 0.0 0:00.01 top 1 root 15 0 2156 648 560 S 0 0.0 0:26.31 init 2 root RT 0 0 0 0 S 0 0.0 0:00.35 migration/0 3 root 34 19 0 0 0 S 0 0.0 0:00.32 ksoftirqd/0 4 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/0 5 root RT 0 0 0 0 S 0 0.0 0:02.00 migration/1 6 root 34 19 0 0 0 S 0 0.0 0:00.11 ksoftirqd/1 7 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/1 8 root RT 0 0 0 0 S 0 0.0 0:01.29 migration/2 9 root 34 19 0 0 0 S 0 0.0 0:00.26 ksoftirqd/2 10 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/2 11 root RT 0 0 0 0 S 0 0.0 0:00.90 migration/3 12 root 34 19 0 0 0 R 0 0.0 0:00.20 ksoftirqd/3 13 root RT 0 0 0 0 S 0 0.0 0:00.00 watchdog/3 top output (ordered by CPU time) top - 21:46:12 up 5 days, 10:41, 3 users, load average: 2.88, 3.82, 4.55 Tasks: 217 total, 1 running, 216 sleeping, 0 stopped, 0 zombie Cpu(s): 3.7%us, 2.0%sy, 2.0%ni, 67.2%id, 25.0%wa, 0.0%hi, 0.1%si, 0.0%st Mem: 2074580k total, 1959516k used, 115064k free, 183116k buffers Swap: 2040212k total, 84k used, 2040128k free, 1090308k cached PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ TIME COMMAND 32367 root 16 0 215m 212m 1548 S 0 10.5 62:03.63 62:03 tailwatchd 1586 root 10 -5 0 0 0 S 0 0.0 11:45.27 11:45 kjournald 1576 root 10 -5 0 0 0 S 0 0.0 2:37.86 2:37 kjournald 27722 root 16 0 2556 1184 800 S 0 0.1 1:48.94 1:48 top 15930 mysql 15 0 138m 46m 4380 S 4 2.3 1:48.63 1:48 mysqld 2932 root 34 19 0 0 0 S 0 0.0 1:41.05 1:41 kipmi0 226 root 10 -5 0 0 0 S 0 0.0 1:34.33 1:34 kswapd0 2671 named 25 0 74688 7400 2116 S 0 0.4 1:23.58 1:23 named 3229 root 15 0 10300 3348 2724 S 0 0.2 0:40.85 0:40 sshd 1580 root 10 -5 0 0 0 S 0 0.0 0:30.62 0:30 kjournald 1 root 17 0 2156 648 560 S 0 0.0 0:26.32 0:26 init 2616 root 15 0 1816 576 480 S 0 0.0 0:23.50 0:23 syslogd 1584 root 10 -5 0 0 0 S 0 0.0 0:18.67 0:18 kjournald 4342 root 34 19 27692 11m 2116 S 0 0.5 0:18.23 0:18 yum-updatesd 8044 bollingp 15 0 3456 2036 740 S 1 0.1 0:15.56 0:15 imapd 26 root 10 -5 0 0 0 S 0 0.0 0:14.18 0:14 kblockd/1 7989 gmailsit 16 0 3196 1748 736 S 0 0.1 0:10.43 0:10 imapd iostat -xtk 1 10 output [root@server1 tmp]# iostat -xtk 1 10 Linux 2.6.18-53.el5 12/18/2012 Time: 09:51:06 PM avg-cpu: %user %nice %system %iowait %steal %idle 5.83 0.19 2.53 11.85 0.00 79.60 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 1.37 118.83 18.70 54.27 131.47 692.72 22.59 4.90 67.19 3.10 22.59 sdb 0.35 39.33 
20.33 61.43 158.79 403.22 13.75 5.23 63.93 3.77 30.80 Time: 09:51:07 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.50 0.00 0.50 24.00 0.00 74.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 25.00 2.00 2.00 128.00 108.00 118.00 0.03 7.25 4.00 1.60 sdb 0.00 16.00 41.00 145.00 200.00 668.00 9.33 107.92 272.72 5.38 100.10 Time: 09:51:08 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.00 0.00 1.50 29.50 0.00 67.00 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 95.00 3.00 33.00 12.00 480.00 27.33 0.07 1.72 1.31 4.70 sdb 0.00 14.00 1.00 228.00 4.00 960.00 8.42 143.49 568.01 4.37 100.10 Time: 09:51:09 PM avg-cpu: %user %nice %system %iowait %steal %idle 13.28 0.00 2.76 21.30 0.00 62.66 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 21.00 1.00 19.00 16.00 192.00 20.80 0.06 3.55 1.30 2.60 sdb 0.00 36.00 28.00 181.00 124.00 884.00 9.65 121.16 617.31 4.79 100.10 Time: 09:51:10 PM avg-cpu: %user %nice %system %iowait %steal %idle 4.74 0.00 1.50 25.19 0.00 68.58 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 20.00 3.00 15.00 12.00 136.00 16.44 0.17 7.11 3.11 5.60 sdb 0.00 0.00 103.00 60.00 544.00 248.00 9.72 52.35 545.23 6.14 100.10 Time: 09:51:11 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.24 0.00 1.24 25.31 0.00 72.21 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 75.00 4.00 28.00 16.00 416.00 27.00 0.08 3.72 2.03 6.50 sdb 2.00 9.00 124.00 17.00 616.00 104.00 10.21 3.73 213.73 7.10 100.10 Time: 09:51:12 PM avg-cpu: %user %nice %system %iowait %steal %idle 1.00 0.00 0.75 24.31 0.00 73.93 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 24.00 1.00 9.00 4.00 132.00 27.20 0.01 1.20 1.10 1.10 sdb 4.00 40.00 103.00 48.00 528.00 212.00 9.80 105.21 104.32 6.64 100.20 Time: 09:51:13 PM avg-cpu: %user %nice %system %iowait %steal %idle 2.50 0.00 1.75 23.25 0.00 72.50 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 125.74 3.96 46.53 15.84 689.11 27.92 0.20 4.06 2.41 12.18 sdb 2.97 0.00 91.09 84.16 419.80 471.29 10.17 85.85 590.78 5.66 99.11 Time: 09:51:14 PM avg-cpu: %user %nice %system %iowait %steal %idle 0.75 0.00 0.50 24.94 0.00 73.82 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 88.00 1.00 7.00 4.00 380.00 96.00 0.04 4.38 3.00 2.40 sdb 3.00 7.00 111.00 44.00 540.00 208.00 9.65 18.58 581.79 6.46 100.10 Time: 09:51:15 PM avg-cpu: %user %nice %system %iowait %steal %idle 11.03 0.00 3.26 26.57 0.00 59.15 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await svctm %util sda 0.00 145.00 7.00 53.00 28.00 792.00 27.33 0.15 2.50 1.55 9.30 sdb 1.00 0.00 155.00 0.00 800.00 0.00 10.32 2.85 18.63 6.46 100.10 [root@server1 tmp]# MySQL Show Full Processlist mysql> show full processlist; 
+------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | Id | User | Host | db | Command | Time | State | Info | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ | 1 | DB_USER_ONE | localhost | DB_ONE | Query | 3 | waiting for handler insert | INSERT DELAYED INTO defers (mailtime,msgid,email,transport_method,message,host,ip,router,deliveryuser,deliverydomain) VALUES(FROM_UNIXTIME('1355879748'),'1TivwL-0003y8-8l','[email protected]','remote_smtp','SMTP error from remote mail server after initial connection: host mx1.mail.tw.yahoo.com [203.188.197.119]: 421 4.7.0 [TS01] Messages from 75.125.90.146 temporarily deferred due to user complaints - 4.16.55.1; see http://postmaster.yahoo.com/421-ts01.html','mx1.mail.tw.yahoo.com','203.188.197.119','lookuphost','','') | | 2 | DELAYED | localhost | DB_ONE | Delayed insert | 52 | insert | | | 3 | DELAYED | localhost | DB_ONE | Delayed insert | 68 | insert | | | 911 | DELAYED | localhost | DB_ONE | Delayed insert | 99 | Waiting for INSERT | | | 993 | DB_USER_TWO | localhost | DB_TWO | Sleep | 832 | | NULL | | 994 | DB_USER_ONE | localhost | DB_ONE | Query | 185 | Locked | delete from failures where FROM_UNIXTIME(UNIX_TIMESTAMP(NOW())-1296000) > mailtime | | 1102 | DB_USER_THREE | localhost | DB_THREE | Query | 29 | NULL | commit | | 1249 | DB_USER_FOUR | localhost | DB_FOUR | Query | 13 | NULL | commit | | 1263 | root | localhost | DB_FIVE | Query | 0 | NULL | show full processlist | | 1264 | DB_USER_SIX | localhost | DB_SIX | Query | 3 | NULL | commit | +------+---------------+-----------+-----------------------+----------------+------+----------------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+ 10 rows in set (0.00 sec)

    Read the article

  • Puppet's automatically generated certificates failing

    - by gparent
    I am running a default configuration of Puppet on Debian Squeeze 6.0.4. The server's FQDN is master.example.com. The client's FQDN is client.example.com. I am able to contact the puppet master and send a CSR. I sign it using puppetca -sa but the client will still not connect. Date of both machines is within 2 seconds of Tue Apr 3 20:59:00 UTC 2012 as I wrote this sentence. This is what appears in /var/log/syslog: Apr 3 17:03:52 localhost puppet-agent[18653]: Reopening log files Apr 3 17:03:52 localhost puppet-agent[18653]: Starting Puppet client version 2.6.2 Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog from remote server: SSL_connect returned=1 errno=0 state=SSLv3 read server certificate B: certificate verify failed Apr 3 17:03:53 localhost puppet-agent[18653]: Using cached catalog Apr 3 17:03:53 localhost puppet-agent[18653]: Could not retrieve catalog; skipping run Here is some interesting output: OpenSSL client test: client:~# openssl s_client -host master.example.com -port 8140 -cert /var/lib/puppet/ssl/certs/client.example.com.pem -key /var/lib/puppet/ssl/private_keys/client.example.com.pem -CAfile /var/lib/puppet/ssl/certs/ca.pem CONNECTED(00000003) depth=1 /CN=Puppet CA: master.example.com verify return:1 depth=0 /CN=master.example.com verify error:num=7:certificate signature failure verify return:1 depth=0 /CN=master.example.com verify return:1 18509:error:1409441B:SSL routines:SSL3_READ_BYTES:tlsv1 alert decrypt error:s3_pkt.c:1102:SSL alert number 51 18509:error:140790E5:SSL routines:SSL23_WRITE:ssl handshake failure:s23_lib.c:188: client:~# master's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/master.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 2 (0x2) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:28 2012 GMT Not After : Apr 2 20:01:28 2017 GMT Subject: CN=master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:a9:c1:f9:4c:cd:0f:68:84:7b:f4:93:16:20:44: 7a:2b:05:8e:57:31:05:8e:9c:c8:08:68:73:71:39: c1:86:6a:59:93:6e:53:aa:43:11:83:5b:2d:8c:7d: 54:05:65:c1:e1:0e:94:4a:f0:86:58:c3:3d:4f:f3: 7d:bd:8e:29:58:a6:36:f4:3e:b2:61:ec:53:b5:38: 8e:84:ac:5f:a3:e3:8c:39:bd:cf:4f:3c:ff:a9:65: 09:66:3c:ba:10:14:69:d5:07:57:06:28:02:37:be: 03:82:fb:90:8b:7d:b3:a5:33:7b:9b:3a:42:51:12: b3:ac:dd:d5:58:69:a9:8a:ed Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 7b:2c:4f:c2:76:38:ab:03:7f:c6:54:d9:78:1d:ab:6c:45:ab: 47:02:c7:fd:45:4e:ab:b5:b6:d9:a7:df:44:72:55:0c:a5:d0: 86:58:14:ae:5f:6f:ea:87:4d:78:e4:39:4d:20:7e:3d:6d:e9: e2:5e:d7:c9:3c:27:43:a4:29:44:85:a1:63:df:2f:55:a9:6a: 72:46:d8:fb:c7:cc:ca:43:e7:e1:2c:fe:55:2a:0d:17:76:d4: e5:49:8b:85:9f:fa:0e:f6:cc:e8:28:3e:8b:47:b0:e1:02:f0: 3d:73:3e:99:65:3b:91:32:c5:ce:e4:86:21:b2:e0:b4:15:b5: 22:63 root@master:/etc/puppet# CA's certificate: root@master:/etc/puppet# openssl x509 -text -noout -in /etc/puppet/ssl/certs/ca.pem Certificate: Data: Version: 3 (0x2) Serial Number: 1 (0x1) Signature Algorithm: sha1WithRSAEncryption 
Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:05 2012 GMT Not After : Apr 2 20:01:05 2017 GMT Subject: CN=Puppet CA: master.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:b5:2c:3e:26:a3:ae:43:b8:ed:1e:ef:4d:a1:1e: 82:77:78:c2:98:3f:e2:e0:05:57:f0:8d:80:09:36: 62:be:6c:1a:21:43:59:1d:e9:b9:4d:e0:9c:fa:09: aa:12:a1:82:58:fc:47:31:ed:ad:ad:73:01:26:97: ef:d2:d6:41:6b:85:3b:af:70:00:b9:63:e9:1b:c3: ce:57:6d:95:0e:a6:d2:64:bd:1f:2c:1f:5c:26:8e: 02:fd:d3:28:9e:e9:8f:bc:46:bb:dd:25:db:39:57: 81:ed:e5:c8:1f:3d:ca:39:cf:e7:f3:63:75:f6:15: 1f:d4:71:56:ed:84:50:fb:5d Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:TRUE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Certificate Sign, CRL Sign X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C Signature Algorithm: sha1WithRSAEncryption 1d:cd:c6:65:32:42:a5:01:62:46:87:10:da:74:7e:8b:c8:c9: 86:32:9e:c2:2e:c1:fd:00:79:f0:ef:d8:73:dd:7e:1b:1a:3f: cc:64:da:a3:38:ad:49:4e:c8:4d:e3:09:ba:bc:66:f2:6f:63: 9a:48:19:2d:27:5b:1d:2a:69:bf:4f:f4:e0:67:5e:66:84:30: e5:85:f4:49:6e:d0:92:ae:66:77:50:cf:45:c0:29:b2:64:87: 12:09:d3:10:4d:91:b6:f3:63:c4:26:b3:fa:94:2b:96:18:1f: 9b:a9:53:74:de:9c:73:a4:3a:8d:bf:fa:9c:c0:42:9d:78:49: 4d:70 root@master:/etc/puppet# Client's certificate: client:~# openssl x509 -text -noout -in /var/lib/puppet/ssl/certs/client.example.com.pem Certificate: Data: Version: 3 (0x2) Serial Number: 3 (0x3) Signature Algorithm: sha1WithRSAEncryption Issuer: CN=Puppet CA: master.example.com Validity Not Before: Apr 2 20:01:36 2012 GMT Not After : Apr 2 20:01:36 2017 GMT Subject: CN=client.example.com Subject Public Key Info: Public Key Algorithm: rsaEncryption RSA Public Key: (1024 bit) Modulus (1024 bit): 00:ae:88:6d:9b:e3:b1:fc:47:07:d6:bf:ea:53:d1: 14:14:9b:35:e6:70:43:e0:58:35:76:ac:c5:9d:86: 02:fd:77:28:fc:93:34:65:9d:dd:0b:ea:21:14:4d: 8a:95:2e:28:c9:a5:8d:a2:2c:0e:1c:a0:4c:fa:03: e5:aa:d3:97:98:05:59:3c:82:a9:7c:0e:e9:df:fd: 48:81:dc:33:dc:88:e9:09:e4:19:d6:e4:7b:92:33: 31:73:e4:f2:9c:42:75:b2:e1:9f:d9:49:8c:a7:eb: fa:7d:cb:62:22:90:1c:37:3a:40:95:a7:a0:3b:ad: 8e:12:7c:6e:ad:04:94:ed:47 Exponent: 65537 (0x10001) X509v3 extensions: X509v3 Basic Constraints: critical CA:FALSE Netscape Comment: Puppet Ruby/OpenSSL Internal Certificate X509v3 Key Usage: critical Digital Signature, Key Encipherment X509v3 Subject Key Identifier: 8C:2F:14:84:B6:A1:B5:0C:11:52:36:AB:E5:3F:F2:B9:B3:25:F3:1C X509v3 Extended Key Usage: critical TLS Web Server Authentication, TLS Web Client Authentication Signature Algorithm: sha1WithRSAEncryption 33:1f:ec:3c:91:5a:eb:c6:03:5f:a1:58:60:c3:41:ed:1f:fe: cb:b2:40:11:63:4d:ba:18:8a:8b:62:ba:ab:61:f5:a0:6c:0e: 8a:20:56:7b:10:a1:f9:1d:51:49:af:70:3a:05:f9:27:4a:25: d4:e6:88:26:f7:26:e0:20:30:2a:20:1d:c4:d3:26:f1:99:cf: 47:2e:73:90:bd:9c:88:bf:67:9e:dd:7c:0e:3a:86:6b:0b:8d: 39:0f:db:66:c0:b6:20:c3:34:84:0e:d8:3b:fc:1c:a8:6c:6c: b1:19:76:65:e6:22:3c:bf:ff:1c:74:bb:62:a0:46:02:95:fa: 83:41 client:~#

    Read the article

  • Why my VPN doesn't work anymore?

    - by xx77aBs
    I have openvpn server running on debian lenny. There is only one client - and it is running Windows 7 64-bit. This has worked for few months without any problems. And now, let's say for the last 7 days, it doesn't work at all. I connect successfully from client to the server, but I can't access anything through VPN. I have set it up so that all internet traffic is routed through VPN, and now when I connect with the client, the client can't do anything on the net (open any webpage, ping google, anything ...). Can you help me to figure out what's wrong ? I don't know where to start. I've also tried to connect to another openvpn server (I've installed and configured openvpn on another server, and when I try to connect to it result is the same). So I think there's something wrong with client ... Here is my connection log: Wed Apr 04 21:35:59 2012 OpenVPN 2.3-alpha1 Win32-MSVC++ [SSL (OpenSSL)] [LZO2] [PF_INET6] [IPv6 payload 20110522-1 (2.2.0)] built on Feb 21 2012 Enter Management Password: Wed Apr 04 21:35:59 2012 MANAGEMENT: TCP Socket listening on [AF_INET]127.0.0.10:25340 Wed Apr 04 21:35:59 2012 Need hold release from management interface, waiting... Wed Apr 04 21:36:00 2012 MANAGEMENT: Client connected from [AF_INET]127.0.0.10:25340 Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'state on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'log all on' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold off' Wed Apr 04 21:36:00 2012 MANAGEMENT: CMD 'hold release' Wed Apr 04 21:36:00 2012 WARNING: No server certificate verification method has been enabled. See http://openvpn.net/howto.html#mitm for more info. Wed Apr 04 21:36:00 2012 NOTE: OpenVPN 2.1 requires '--script-security 2' or higher to call user-defined scripts or executables Wed Apr 04 21:36:00 2012 Socket Buffers: R=[8192->8192] S=[8192->8192] Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,RESOLVE,,, Wed Apr 04 21:36:00 2012 UDPv4 link local: [undef] Wed Apr 04 21:36:00 2012 UDPv4 link remote: [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,WAIT,,, Wed Apr 04 21:36:00 2012 MANAGEMENT: >STATE:1333568160,AUTH,,, Wed Apr 04 21:36:00 2012 TLS: Initial packet from [AF_INET]11.22.33.44:1234, sid=ee329574 f15e9e04 Wed Apr 04 21:36:00 2012 VERIFY OK: depth=1, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=Fort-Funston CA, [email protected] Wed Apr 04 21:36:00 2012 VERIFY OK: depth=0, C=US, ST=CA, L=SanFrancisco, O=Fort-Funston, CN=server_key, [email protected] Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Encrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Cipher 'BF-CBC' initialized with 128 bit key Wed Apr 04 21:36:01 2012 Data Channel Decrypt: Using 160 bit message hash 'SHA1' for HMAC authentication Wed Apr 04 21:36:01 2012 Control Channel: TLSv1, cipher TLSv1/SSLv3 DHE-RSA-AES256-SHA, 1024 bit RSA Wed Apr 04 21:36:01 2012 [server_key] Peer Connection Initiated with [AF_INET]11.22.33.44:1234 Wed Apr 04 21:36:02 2012 MANAGEMENT: >STATE:1333568162,GET_CONFIG,,, Wed Apr 04 21:36:03 2012 SENT CONTROL [server_key]: 'PUSH_REQUEST' (status=1) Wed Apr 04 21:36:03 2012 PUSH: Received control message: 'PUSH_REPLY,redirect-gateway def1,route 172.16.100.1,topology net30,ping 10,ping-restart 120,ifconfig 172.16.100.6 172.16.100.5' Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: timers and/or timeouts modified Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: --ifconfig/up options modified 
Wed Apr 04 21:36:03 2012 OPTIONS IMPORT: route options modified Wed Apr 04 21:36:03 2012 ROUTE_GATEWAY 192.168.1.1/255.255.255.0 I=15 HWADDR=00:1f:1f:3f:61:55 Wed Apr 04 21:36:03 2012 do_ifconfig, tt->ipv6=0, tt->did_ifconfig_ipv6_setup=0 Wed Apr 04 21:36:03 2012 MANAGEMENT: >STATE:1333568163,ASSIGN_IP,,172.16.100.6, Wed Apr 04 21:36:03 2012 open_tun, tt->ipv6=0 Wed Apr 04 21:36:03 2012 TAP-WIN32 device [VPN] opened: \\.\Global\{E28FD52B-F6C3-4094-A36A-30CB02FAC7E8}.tap Wed Apr 04 21:36:03 2012 TAP-Win32 Driver Version 9.9 Wed Apr 04 21:36:03 2012 Notified TAP-Win32 driver to set a DHCP IP/netmask of 172.16.100.6/255.255.255.252 on interface {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} [DHCP-serv: 172.16.100.5, lease-time: 31536000] Wed Apr 04 21:36:03 2012 Successful ARP Flush on interface [31] {E28FD52B-F6C3-4094-A36A-30CB02FAC7E8} Wed Apr 04 21:36:08 2012 TEST ROUTES: 2/2 succeeded len=1 ret=1 a=0 u/d=up Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 11.22.33.44 MASK 255.255.255.255 192.168.1.1 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=25 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 0.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 128.0.0.0 MASK 128.0.0.0 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,ADD_ROUTES,,, Wed Apr 04 21:36:08 2012 C:\Windows\system32\route.exe ADD 172.16.100.1 MASK 255.255.255.255 172.16.100.5 Wed Apr 04 21:36:08 2012 ROUTE: CreateIpForwardEntry succeeded with dwForwardMetric1=30 and dwForwardType=4 Wed Apr 04 21:36:08 2012 Route addition via IPAPI succeeded [adaptive] Wed Apr 04 21:36:08 2012 Initialization Sequence Completed Wed Apr 04 21:36:08 2012 MANAGEMENT: >STATE:1333568168,CONNECTED,SUCCESS,172.16.100.6,11.22.33.44 Client's route table after connection with OpenVPN: IPv4 Route Table =========================================================================== Active Routes: Network Destination Netmask Gateway Interface Metric 0.0.0.0 0.0.0.0 192.168.1.1 192.168.1.41 281 0.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 94.23.53.45 255.255.255.255 192.168.1.1 192.168.1.41 25 127.0.0.0 255.0.0.0 On-link 127.0.0.1 306 127.0.0.1 255.255.255.255 On-link 127.0.0.1 306 127.255.255.255 255.255.255.255 On-link 127.0.0.1 306 128.0.0.0 128.0.0.0 172.16.100.1 172.16.100.6 31 172.16.100.4 255.255.255.252 On-link 172.16.100.6 286 172.16.100.6 255.255.255.255 On-link 172.16.100.6 286 172.16.100.7 255.255.255.255 On-link 172.16.100.6 286 192.168.1.0 255.255.255.0 On-link 192.168.1.41 281 192.168.1.41 255.255.255.255 On-link 192.168.1.41 281 192.168.1.255 255.255.255.255 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 127.0.0.1 306 224.0.0.0 240.0.0.0 On-link 192.168.1.41 281 224.0.0.0 240.0.0.0 On-link 172.16.100.6 286 255.255.255.255 255.255.255.255 On-link 127.0.0.1 306 255.255.255.255 255.255.255.255 On-link 192.168.1.41 281 255.255.255.255 255.255.255.255 On-link 172.16.100.6 286 =========================================================================== Persistent Routes: Network Address Netmask 
Gateway Address Metric 0.0.0.0 0.0.0.0 192.168.1.1 Default =========================================================================== IPv6 Route Table =========================================================================== Active Routes: If Metric Network Destination Gateway 13 58 ::/0 On-link 1 306 ::1/128 On-link 13 58 2001::/32 On-link 13 306 2001:0:5ef5:79fd:3cc3:6b9:ac7c:14db/128 On-link 15 281 fe80::/64 On-link 31 286 fe80::/64 On-link 13 306 fe80::/64 On-link 13 306 fe80::3cc3:6b9:ac7c:14db/128 On-link 31 286 fe80::7d72:9515:7213:35e3/128 On-link 15 281 fe80::9cec:ce3f:89de:a123/128 On-link 1 306 ff00::/8 On-link 13 306 ff00::/8 On-link 15 281 ff00::/8 On-link 31 286 ff00::/8 On-link =========================================================================== Persistent Routes: None

    Read the article

  • Installing Oracle 11gR2 on RHEL 6.2

    - by Chris
    Hello all I'm having some difficulty installing Oracle 11gR2 on RHEL 6.2 I have compiled a giant list of every single step I have taken so far I installed RHEL 6.2 on VMWARE it did it's easy install automatically I Selected 4gb of memory Selected max size of 80Gb Selected 2 processors Sorry for the bad styling copy paste isn't working correctly The version of oracle i downloaded is Linux x86-64 11.2.0.1 I am installing this on a local machine NOT a remote machine I followed the following documentation http://docs.oracle.com/cd/E11882_01/install.112/e24326/toc.htm I bolded the steps which I was least sure about from my research Easy installed with RHEL 6.2 for VMWARE Registered with red hat so I can get updates Reinstalled vmware-tools by pressing enter at every choice Sudo yum update at the end something about GPG key selected y then y Checked Memory Requirements grep MemTotal /proc/meminfo MemTotal: 3921368 kb uname -m x86_64 grep SwapTotal /proc/meminfo SwapTotal: 6160376 kb free total used free shared buffers cached Mem: 3921368 2032012 1889356 0 76216 1533268 -/+ buffers/cache: 422528 3498840 Swap: 6160376 0 6160376 df -h /dev/shm Filesystem Size Used Avail Use% Mounted on tmpfs 1.9G 276K 1.9G 1% /dev/shm df -h /tmp Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / df -h Filesystem Size Used Avail Use% Mounted on /dev/sda2 73G 2.7G 67G 4% / tmpfs 1.9G 276K 1.9G 1% /dev/shm /dev/sda1 291M 58M 219M 21% /boot All looked fine to me except maybe for swap? Software Requirements cat /proc/version Linux version 2.6.32-220.el6.x86_64 ([email protected]) (gcc version 4.4.5 20110214 (Red Hat 4.4.5-6) (GCC) ) #1 SMP Wed Nov 9 08:03:13 EST 2011 uname -r 2.6.32-220.el6.x86_64 (same as above but whatever) According to the tutorial should be On Red Hat Enterprise Linux 6 2.6.32-71.el6.x86_64 or later These are the versions of software I have installed binutils-2.20.51.0.2-5.28.el6.x86_64 compat-libcap1-1.10-1.x86_64 compat-libstdc++-33-3.2.3-69.el6.x86_64 compat-libstdc++-33.i686 0:3.2.3-69.el6 gcc-4.4.6-3.el6.x86_64 gcc-c++.x86_64 0:4.4.6-3.el6 glibc-2.12-1.47.el6_2.12.x86_64 glibc-2.12-1.47.el6_2.12.i686 glibc-devel-2.12-1.47.el6_2.12.x86_64 glibc-devel.i686 0:2.12-1.47.el6_2.12 ksh.x86_64 0:20100621-12.el6_2.1 libgcc-4.4.6-3.el6.x86_64 libgcc-4.4.6-3.el6.i686 libstdc++-4.4.6-3.el6.x86_64 libstdc++.i686 0:4.4.6-3.el6 libstdc++-devel.i686 0:4.4.6-3.el6 libstdc++-devel-4.4.6-3.el6.x86_64 libaio-0.3.107-10.el6.x86_64 libaio-0.3.107-10.el6.i686 libaio-devel-0.3.107-10.el6.x86_64 libaio-devel-0.3.107-10.el6.i686 make-3.81-19.el6.x86_64 sysstat-9.0.4-18.el6.x86_64 unixODBC-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.x86_64 unixODBC-devel-2.2.14-11.el6.i686 unixODBC-2.2.14-11.el6.i686 8. 
Probably screwed up here or step 9 /usr/sbin/groupadd oinstall /usr/sbin/groupadd dba(not sure why this isn't in the tutorial) /usr/sbin/useradd -g oinstall -G dba oracle passwd oracle /sbin/sysctl -a | grep sem Xkernel.sem = 250 32000 32 128 /sbin/sysctl -a | grep shm kernel.shmmax = 68719476736 kernel.shmall = 4294967296 kernel.shmmni = 4096 vm.hugetlb_shm_group = 0 /sbin/sysctl -a | grep file-max Xfs.file-max = 384629 /sbin/sysctl -a | grep ip_local_port_range Xnet.ipv4.ip_local_port_range = 32768 61000 /sbin/sysctl -a | grep rmem_default Xnet.core.rmem_default = 124928 /sbin/sysctl -a | grep rmem_max Xnet.core.rmem_max = 131071 /sbin/sysctl -a | grep wmem_max Xnet.core.wmem_max = 131071 /sbin/sysctl -a | grep wmem_default Xnet.core.wmem_default = 124928 Here is my sysctl.conf file I only added the items that were bigger: Kernel sysctl configuration file for Red Hat Linux # For binary values, 0 is disabled, 1 is enabled. See sysctl(8) and sysctl.conf(5) for more details. Controls IP packet forwarding net.ipv4.ip_forward = 0 Controls source route verification net.ipv4.conf.default.rp_filter = 1 Do not accept source routing net.ipv4.conf.default.accept_source_route = 0 Controls the System Request debugging functionality of the kernel kernel.sysrq = 0 Controls whether core dumps will append the PID to the core filename. Useful for debugging multi-threaded applications. kernel.core_uses_pid = 1 Controls the use of TCP syncookies net.ipv4.tcp_syncookies = 1 Disable netfilter on bridges. net.bridge.bridge-nf-call-ip6tables = 0 net.bridge.bridge-nf-call-iptables = 0 net.bridge.bridge-nf-call-arptables = 0 Controls the maximum size of a message, in bytes kernel.msgmnb = 65536 Controls the default maxmimum size of a mesage queue kernel.msgmax = 65536 Controls the maximum shared segment size, in bytes kernel.shmmax = 68719476736 Controls the maximum number of shared memory segments, in pages kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 /sbin/sysctl -p net.ipv4.ip_forward = 0 net.ipv4.conf.default.rp_filter = 1 net.ipv4.conf.default.accept_source_route = 0 kernel.sysrq = 0 kernel.core_uses_pid = 1 net.ipv4.tcp_syncookies = 1 error: "net.bridge.bridge-nf-call-ip6tables" is an unknown key error: "net.bridge.bridge-nf-call-iptables" is an unknown key error: "net.bridge.bridge-nf-call-arptables" is an unknown key kernel.msgmnb = 65536 kernel.msgmax = 65536 kernel.shmmax = 68719476736 kernel.shmall = 4294967296 fs.aio-max-nr = 1048576 fs.file-max = 6815744 kernel.sem = 250 32000 100 128 net.ipv4.ip_local_port_range = 9000 65500 net.core.rmem_default = 262144 net.core.rmem_max = 4194304 net.core.wmem_default = 262144 net.core.wmem_max = 1048576 su - oracle ulimit -Sn 1024 ulimit -Hn 1024 ulimit -Su 1024 ulimit -Hu 30482 ulimit -Su 1024 ulimit -Ss 10240 ulimit -Hs unlimited su - nano /etc/security/limits.conf *added to the end of the file * oracle soft nproc 2047 oracle hard nproc 16384 oracle soft nofile 1024 oracle hard nofile 65536 oracle soft stack 10240 exit exit su - mkdir -p /app/ chown -R oracle:oinstall /app/ chmod -R 775 /app/ 9. 
THIS IS PROBABLY WHERE I MESSED UP I then exited out of the root account so now I'm back in my account chris then I su - oracle echo $SHELL /bin/bash umask 0022 (so it should be set already to what is neccesary) Also from what I have read I do not need to set the DISPLAY variable because I'm installing this on the localhost I then opened the .bash_profile of the oracle and changed it to the following .bash_profile Get the aliases and functions if [ -f ~/.bashrc ]; then . ~/.bashrc fi User specific environment and startup programs PATH=$PATH:$HOME/bin; export PATH ORACLE_BASE=/app/oracle ORACLE_SID=orcl export ORACLE_BASE ORACLE_SID I then shutdown the virtual machine shared my desktop folder from my windows 7 then turned back on the virtual machine logged in as chris opened up a terminal then: su - for some reason the shared folder didn't appear so I reinstalled vmware tools again and restarted then same as before su - cp -R linux_oracle/database /db; chown -R oracle:oinstall /db; chmod -R 775 /db; ll /db drwxrwxr-x. 8 oracle oinstall 4096 Jun 5 06:20 database exit su - oracle cd /db/database ./runInstaller AND FINALLY THE INFAMOUS JAVA:132 ERROR MESSAGE Starting Oracle Universal Installer... Checking Temp space: must be greater than 80 MB. Actual 65646 MB Passed Checking swap space: must be greater than 150 MB. Actual 6015 MB Passed Checking monitor: must be configured to display at least 256 colors. Actual 16777216 Passed Preparing to launch Oracle Universal Installer from /tmp/OraInstall2012-06-05_06-47-12AM. Please wait ...[oracle@localhost database]$ Exception in thread "main" java.lang.UnsatisfiedLinkError: /tmp/OraInstall2012-06-05_06-47-12AM/jdk/jre/lib/i386/xawt/libmawt.so: libXext.so.6: cannot open shared object file: No such file or directory at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1647) at java.lang.Runtime.load0(Runtime.java:769) at java.lang.System.load(System.java:968) at java.lang.ClassLoader$NativeLibrary.load(Native Method) at java.lang.ClassLoader.loadLibrary0(ClassLoader.java:1751) at java.lang.ClassLoader.loadLibrary(ClassLoader.java:1668) at java.lang.Runtime.loadLibrary0(Runtime.java:822) at java.lang.System.loadLibrary(System.java:993) at sun.security.action.LoadLibraryAction.run(LoadLibraryAction.java:50) at java.security.AccessController.doPrivileged(Native Method) at java.awt.Toolkit.loadLibraries(Toolkit.java:1509) at java.awt.Toolkit.(Toolkit.java:1530) at com.jgoodies.looks.LookUtils.isLowResolution(Unknown Source) at com.jgoodies.looks.LookUtils.(Unknown Source) at com.jgoodies.looks.plastic.PlasticLookAndFeel.(PlasticLookAndFeel.java:122) at java.lang.Class.forName0(Native Method) at java.lang.Class.forName(Class.java:242) at javax.swing.SwingUtilities.loadSystemClass(SwingUtilities.java:1783) at javax.swing.UIManager.setLookAndFeel(UIManager.java:480) at oracle.install.commons.util.Application.startup(Application.java:758) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:164) at oracle.install.commons.flow.FlowApplication.startup(FlowApplication.java:181) at oracle.install.commons.base.driver.common.Installer.startup(Installer.java:265) at oracle.install.ivw.db.driver.DBInstaller.startup(DBInstaller.java:114) at oracle.install.ivw.db.driver.DBInstaller.main(DBInstaller.java:132)

    Read the article

  • Do Hyper-V guests see multiple CPUs (sockets) or multiple CPU cores when assigned more than 1 vCPU?

    - by Filip Kierzek
    I have SQL Server 2008 Express running on a Hyper-V based virtual machine with two vCPUs. I've just been reading up on SQL Server 2012 Express and noticed that its CPU support is "Limited to lesser of 1 Socket or 4 cores" (http://msdn.microsoft.com/en-us/library/cc645993(v=SQL.110).aspx). My question is how the SQL Server 2012 limits on CPUs/cores translate into vCPUs: are they "processors" or are they "cores"?
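
    One way to see what the guest actually presents to the instance is the sys.dm_os_sys_info DMV (available in SQL Server 2008 and later); since the edition limits are applied to what SQL Server detects inside the guest, this can be a useful sanity check:

        SELECT cpu_count,                                         -- logical CPUs visible to SQL Server
               hyperthread_ratio,                                  -- logical CPUs per physical package
               cpu_count / hyperthread_ratio AS sockets_estimate   -- rough socket count
        FROM sys.dm_os_sys_info;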

    Read the article

  • e2fsck extremely slow, although enough memory exists

    - by kaefert
    I've got this external USB-Disk: kaefert@blechmobil:~$ lsusb -s 2:3 Bus 002 Device 003: ID 0bc2:3320 Seagate RSS LLC As can be seen in this dmesg output, there is some problem that prevents that disk from beeing mounted: kaefert@blechmobil:~$ dmesg ... [ 113.084079] usb 2-1: new high-speed USB device number 3 using ehci_hcd [ 113.217783] usb 2-1: New USB device found, idVendor=0bc2, idProduct=3320 [ 113.217787] usb 2-1: New USB device strings: Mfr=2, Product=3, SerialNumber=1 [ 113.217790] usb 2-1: Product: Expansion Desk [ 113.217792] usb 2-1: Manufacturer: Seagate [ 113.217794] usb 2-1: SerialNumber: NA4J4N6K [ 113.435404] usbcore: registered new interface driver uas [ 113.455315] Initializing USB Mass Storage driver... [ 113.468051] scsi5 : usb-storage 2-1:1.0 [ 113.468180] usbcore: registered new interface driver usb-storage [ 113.468182] USB Mass Storage support registered. [ 114.473105] scsi 5:0:0:0: Direct-Access Seagate Expansion Desk 070B PQ: 0 ANSI: 6 [ 114.474342] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.475089] sd 5:0:0:0: [sdb] Write Protect is off [ 114.475092] sd 5:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 114.475959] sd 5:0:0:0: [sdb] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA [ 114.477093] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.501649] sdb: sdb1 [ 114.502717] sd 5:0:0:0: [sdb] 732566645 4096-byte logical blocks: (3.00 TB/2.72 TiB) [ 114.504354] sd 5:0:0:0: [sdb] Attached SCSI disk [ 116.804408] EXT4-fs (sdb1): ext4_check_descriptors: Checksum for group 3976 failed (47397!=61519) [ 116.804413] EXT4-fs (sdb1): group descriptors corrupted! ... So I went and fired up my favorite partition manager - gparted, and told it to verify and repair the partition sdb1. 
This made gparted call e2fsck (version 1.42.4 (12-Jun-2012)) e2fsck -f -y -v /dev/sdb1 Although gparted called e2fsck with the "-v" option, sadly it doesn't show me the output of my e2fsck process (bugreport https://bugzilla.gnome.org/show_bug.cgi?id=467925 ) I started this whole thing on Sunday (2012-11-04_2200) evening, so about 48 hours ago, this is what htop says about it now (2012-11-06-1900): PID USER PRI NI VIRT RES SHR S CPU% MEM% TIME+ Command 3704 root 39 19 1560M 1166M 768 R 98.0 19.5 42h56:43 e2fsck -f -y -v /dev/sdb1 Now I found a few posts on the internet that discuss e2fsck running slow, for example: http://gparted-forum.surf4.info/viewtopic.php?id=13613 where they write that its a good idea to see if the disk is just that slow because maybe its damaged, and I think these outputs tell me that this is not the case in my case: kaefert@blechmobil:~$ sudo hdparm -tT /dev/sdb /dev/sdb: Timing cached reads: 3562 MB in 2.00 seconds = 1783.29 MB/sec Timing buffered disk reads: 82 MB in 3.01 seconds = 27.26 MB/sec kaefert@blechmobil:~$ sudo hdparm /dev/sdb /dev/sdb: multcount = 0 (off) readonly = 0 (off) readahead = 256 (on) geometry = 364801/255/63, sectors = 5860533160, start = 0 However, although I can read quickly from that disk, this disk speed doesn't seem to be used by e2fsck, considering tools like gkrellm or iotop or this: kaefert@blechmobil:~$ iostat -x Linux 3.2.0-2-amd64 (blechmobil) 2012-11-06 _x86_64_ (2 CPU) avg-cpu: %user %nice %system %iowait %steal %idle 14,24 47,81 14,63 0,95 0,00 22,37 Device: rrqm/s wrqm/s r/s w/s rkB/s wkB/s avgrq-sz avgqu-sz await r_await w_await svctm %util sda 0,59 8,29 2,42 5,14 43,17 160,17 53,75 0,30 39,80 8,72 54,42 3,95 2,99 sdb 137,54 5,48 9,23 0,20 587,07 22,73 129,35 0,07 7,70 7,51 16,18 2,17 2,04 Now I researched a little bit on how to find out what e2fsck is doing with all that processor time, and I found the tool strace, which gives me this: kaefert@blechmobil:~$ sudo strace -p3704 lseek(4, 41026998272, SEEK_SET) = 41026998272 write(4, "\212\354K[_\361\3nl\212\245\352\255jR\303\354\312Yv\334p\253r\217\265\3567\325\257\3766"..., 4096) = 4096 lseek(4, 48404766720, SEEK_SET) = 48404766720 read(4, "\7t\260\366\346\337\304\210\33\267j\35\377'\31f\372\252\ffU\317.y\211\360\36\240c\30`\34"..., 4096) = 4096 lseek(4, 41027002368, SEEK_SET) = 41027002368 write(4, "\232]7Ws\321\352\t\1@[+5\263\334\276{\343zZx\352\21\316`1\271[\202\350R`"..., 4096) = 4096 lseek(4, 48404770816, SEEK_SET) = 48404770816 read(4, "\17\362r\230\327\25\346//\210H\v\311\3237\323K\304\306\361a\223\311\324\272?\213\tq \370\24"..., 4096) = 4096 lseek(4, 41027006464, SEEK_SET) = 41027006464 write(4, "\367yy>x\216?=\324Z\305\351\376&\25\244\210\271\22\306}\276\237\370(\214\205G\262\360\257#"..., 4096) = 4096 lseek(4, 48404774912, SEEK_SET) = 48404774912 read(4, "\365\25\0\21|T\0\21}3t_\272\373\222k\r\177\303\1\201\261\221$\261B\232\3142\21U\316"..., 4096) = 4096 ^CProcess 3704 detached around 16 of these lines every second, so 4 read and 4 write operations every second, which I don't consider to be a lot.. And finally, my question: Will this process ever finish? If those numbers from fseek (48404774912) represent bytes, that would be something like 45 gigabytes, with this beeing a 3 terrabyte disk, which would give me 134 days to go, if the speed stays constant, and e2fsck scans the disk like this completly and only once. Do you have some advice for me? 
I have most of the data on that disk elsewhere, but I've put a lot of hours into sorting and merging it to this disk, so I would prefer to getting this disk up and running again, without formatting it anew. I don't think that the hardware is damaged since the disk is only a few months and since I can't see any I/O errors in the dmesg output. UPDATE: I just looked at the strace output again (2012-11-06_2300), now it looks like this: lseek(4, 1419860611072, SEEK_SET) = 1419860611072 read(4, "3#\f\2447\335\0\22A\355\374\276j\204'\207|\217V|\23\245[\7VP\251\242\276\207\317:"..., 4096) = 4096 lseek(4, 43018145792, SEEK_SET) = 43018145792 write(4, "]\206\231\342Y\204-2I\362\242\344\6R\205\361\324\177\265\317C\334V\324\260\334\275t=\10F."..., 4096) = 4096 lseek(4, 1419860615168, SEEK_SET) = 1419860615168 read(4, "\262\305\314Y\367\37x\326\245\226\226\320N\333$s\34\204\311\222\7\315\236\336\300TK\337\264\236\211n"..., 4096) = 4096 lseek(4, 43018149888, SEEK_SET) = 43018149888 write(4, "\271\224m\311\224\25!I\376\16;\377\0\223H\25Yd\201Y\342\r\203\271\24eG<\202{\373V"..., 4096) = 4096 lseek(4, 1419860619264, SEEK_SET) = 1419860619264 read(4, ";d\360\177\n\346\253\210\222|\250\352T\335M\33\260\320\261\7g\222P\344H?t\240\20\2548\310"..., 4096) = 4096 lseek(4, 43018153984, SEEK_SET) = 43018153984 write(4, "\360\252j\317\310\251G\227\335{\214`\341\267\31Y\202\360\v\374\307oq\3063\217Z\223\313\36D\211"..., 4096) = 4096 So the numbers in the lseek lines before the reads, like 1419860619264 are already a lot bigger, standing for 1.29 terabytes if those numbers are bytes, so it doesn't seem to be a linear progress on a big scale, maybe there are only some areas that need work, that have big gaps in between them. UPDATE2: Okey, big disappointment, the numbers are back to very small again (2012-11-07_0720) lseek(4, 52174548992, SEEK_SET) = 52174548992 read(4, "\374\312\22\\\325\215\213\23\0357U\222\246\370v^f(\312|f\212\362\343\375\373\342\4\204mU6"..., 4096) = 4096 lseek(4, 46603526144, SEEK_SET) = 46603526144 write(4, "\370\261\223\227\23?\4\4\217\264\320_Am\246CQ\313^\203U\253\274\204\277\2564n\227\177\267\343"..., 4096) = 4096 so either e2fsck goes over the data multiple times, or it just hops back and forth multiple times. Or my assumption that those numbers are bytes is wrong. UPDATE3: Since it's mentioned here http://forums.fedoraforum.org/showthread.php?t=282125&page=2 that you can testisk while e2fsck is running, i tried that, though not with a lot of success. When asking testdisk to display the data of my partition, this is what I get: TestDisk 6.13, Data Recovery Utility, November 2011 Christophe GRENIER <[email protected]> http://www.cgsecurity.org 1 P Linux 0 4 5 45600 40 8 732566272 Can't open filesystem. Filesystem seems damaged. And this is what strace currently gives me (2012-11-07_1030) lseek(4, 212460343296, SEEK_SET) = 212460343296 read(4, "\315Mb\265v\377Gn \24\f\205EHh\2349~\330\273\203\3375\206\10\r3=W\210\372\352"..., 4096) = 4096 lseek(4, 47347830784, SEEK_SET) = 47347830784 write(4, "]\204\223\300I\357\4\26\33+\243\312G\230\250\371*m2U\t_\215\265J \252\342Pm\360D"..., 4096) = 4096 (times are in CET)

    Read the article

  • How to nest transactions nicely - "begin transaction" vs "save transaction" and SQL Server

    - by Brian Biales
    Do you write stored procedures that might be used by others?  And those others may or may not have already started a transaction?  And your SP does several things, but if any of them fail, you have to undo them all and return with a code indicating it failed? Well, I have written such code, and it wasn’t working right until I finally figured out how to handle the case when we are already in a transaction, as well as the case where the caller did not start a transaction.  When a problem occurred, my “ROLLBACK TRANSACTION” would roll back not just my nested transaction, but the caller’s transaction as well.  So when I tested the procedure stand-alone, it seemed to work fine, but when others used it, it would cause a problem if it had to rollback.  When something went wrong in my procedure, their entire transaction was rolled back.  This was not appreciated. Now, I knew one could "nest" transactions, but the technical documentation was very confusing.  And I still have not found the approach below documented anywhere.  So here is a very brief description of how I got it to work, I hope you find this helpful. My example is a stored procedure that must figure out on its own if the caller has started a transaction or not.  This can be done in SQL Server by checking the @@TRANCOUNT value.  If no BEGIN TRANSACTION has occurred yet, this will have a value of 0.  Any number greater than zero means that a transaction is in progress.  If there is no current transaction, my SP begins a transaction. But if a transaction is already in progress, my SP uses SAVE TRANSACTION and gives it a name.  SAVE TRANSACTION creates a “save point”.  Note that creating a save point has no effect on @@TRANCOUNT.  So my SP starts with something like this: DECLARE @startingTranCount int SET @startingTranCount = @@TRANCOUNT IF @startingTranCount > 0 SAVE TRANSACTION mySavePointName ELSE BEGIN TRANSACTION -- … Then, when ready to commit the changes, you only need to commit if we started the transaction ourselves: IF @startingTranCount = 0 COMMIT TRANSACTION And finally, to roll back just your changes so far: -- Roll back changes... IF @startingTranCount > 0 ROLLBACK TRANSACTION MySavePointName ELSE ROLLBACK TRANSACTION Here is some code that you can try that will demonstrate how the save points work inside a transaction. This sample code creates a temporary table, then executes selects and updates, documenting what is going on, then deletes the temporary table. if running in SQL Management Studio, set Query Results to: Text for best readability of the results. -- Create a temporary table to test with, we'll drop it at the end. 
CREATE TABLE #ATable( [Column_A] [varchar](5) NULL ) ON [PRIMARY] GO SET NOCOUNT ON -- Ensure just one row - delete all rows, add one DELETE #ATable -- Insert just one row INSERT INTO #ATable VALUES('000') SELECT 'Before TRANSACTION starts, value in table is: ' AS Note, * FROM #ATable SELECT @@trancount AS CurrentTrancount --insert into a values ('abc') UPDATE #ATable SET Column_A = 'abc' SELECT 'UPDATED without a TRANSACTION, value in table is: ' AS Note, * FROM #ATable BEGIN TRANSACTION SELECT 'BEGIN TRANSACTION, trancount is now ' AS Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '123' SELECT 'Row updated inside TRANSACTION, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION MySavepoint SELECT 'Save point MySavepoint created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '456' SELECT 'Updated after MySavepoint created, value in table is: ' AS Note, * FROM #ATable SAVE TRANSACTION point2 SELECT 'Save point point2 created, transaction count now:' as Note, @@TRANCOUNT AS TranCount UPDATE #ATable SET Column_A = '789' SELECT 'Updated after point2 savepoint created, value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION point2 SELECT 'Just rolled back savepoint "point2", value in table is: ' AS Note, * FROM #ATable ROLLBACK TRANSACTION MySavepoint SELECT 'Just rolled back savepoint "MySavepoint", value in table is: ' AS Note, * FROM #ATable SELECT 'Both save points were rolled back, transaction count still:' as Note, @@TRANCOUNT AS TranCount ROLLBACK TRANSACTION SELECT 'Just rolled back the entire transaction..., value in table is: ' AS Note, * FROM #ATable DROP TABLE #ATable The output should look like this: Note                                           Column_A ---------------------------------------------- -------- Before TRANSACTION starts, value in table is:  000 CurrentTrancount ---------------- 0 Note                                               Column_A -------------------------------------------------- -------- UPDATED without a TRANSACTION, value in table is:  abc Note                                 TranCount ------------------------------------ ----------- BEGIN TRANSACTION, trancount is now  1 Note                                                Column_A --------------------------------------------------- -------- Row updated inside TRANSACTION, value in table is:  123 Note                                                   TranCount ------------------------------------------------------ ----------- Save point MySavepoint created, transaction count now: 1 Note                                                   Column_A ------------------------------------------------------ -------- Updated after MySavepoint created, value in table is:  456 Note                                              TranCount ------------------------------------------------- ----------- Save point point2 created, transaction count now: 1 Note                                                        Column_A ----------------------------------------------------------- -------- Updated after point2 savepoint created, value in table is:  789 Note                                                     Column_A -------------------------------------------------------- -------- Just rolled back savepoint "point2", value in table is:  456 Note                                                          Column_A ------------------------------------------------------------- -------- Just rolled back savepoint "MySavepoint", value in table is:  123 
Note                                                        TranCount ----------------------------------------------------------- ----------- Both save points were rolled back, transaction count still: 1 Note                                                            Column_A --------------------------------------------------------------- -------- Just rolled back the entire transaction..., value in table is:  abc
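
    Pulling those pieces together, here is a minimal sketch of how the pattern might sit inside a stored procedure wrapped in TRY/CATCH; the procedure name, savepoint name and the placeholder body are hypothetical, and XACT_STATE() is checked before rolling back to the savepoint because a doomed transaction can only be rolled back in full:

    CREATE PROCEDURE dbo.DoSeveralThings
    AS
    BEGIN
        SET NOCOUNT ON;
        DECLARE @startingTranCount int;
        SET @startingTranCount = @@TRANCOUNT;

        BEGIN TRY
            IF @startingTranCount > 0
                SAVE TRANSACTION ProcSavePoint;
            ELSE
                BEGIN TRANSACTION;

            -- ... the procedure's real work goes here ...

            IF @startingTranCount = 0
                COMMIT TRANSACTION;
            RETURN 0;
        END TRY
        BEGIN CATCH
            IF XACT_STATE() = -1
                ROLLBACK TRANSACTION;               -- doomed: full rollback is the only option
            ELSE IF @startingTranCount > 0 AND XACT_STATE() = 1
                ROLLBACK TRANSACTION ProcSavePoint; -- undo only this procedure's work
            ELSE IF XACT_STATE() = 1
                ROLLBACK TRANSACTION;               -- we started the transaction, so we end it
            RETURN 1;                               -- signal failure to the caller
        END CATCH
    END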

    Read the article

  • What statistics can be maintained for a set of numerical data without iterating?

    - by Dan Tao
    Update: Just for future reference, I'm going to list all of the statistics that I'm aware of that can be maintained in a rolling collection, recalculated as an O(1) operation on every addition/removal (this is really how I should've worded the question from the beginning). Obvious: Count, Sum, Mean, Max*, Min*, Median**. Less obvious: Variance, Standard Deviation, Skewness, Kurtosis, Mode***, Weighted Average, Weighted Moving Average****. OK, so to put it more accurately: these are not "all" of the statistics I'm aware of. They're just the ones that I can remember off the top of my head right now. *Can be recalculated in O(1) for additions only, or for additions and removals if the collection is sorted (but in this case, insertion is not O(1)). Removals potentially incur an O(n) recalculation for non-sorted collections. **Recalculated in O(1) for a sorted, indexed collection only. ***Requires a fairly complex data structure to recalculate in O(1). ****This can certainly be achieved in O(1) for additions and removals when the weights are assigned in a linearly descending fashion. In other scenarios, I'm not sure.
    Original Question: Say I maintain a collection of numerical data -- let's say, just a bunch of numbers. For this data, there are loads of calculated values that might be of interest; one example would be the sum. To get the sum of all this data, I could... Option 1: Iterate through the collection, adding all the values: double sum = 0.0; for (int i = 0; i < values.Count; i++) sum += values[i]; Option 2: Maintain the sum, eliminating the need to ever iterate over the collection just to find the sum: void Add(double value) { values.Add(value); sum += value; } void Remove(double value) { values.Remove(value); sum -= value; }
    EDIT: To put this question in more relatable terms, let's compare the two options above to a (sort of) real-world situation: Suppose I start listing numbers out loud and ask you to keep them in your head. I start by saying, "11, 16, 13, 12." If you've just been remembering the numbers themselves and nothing more, and then I say, "What's the sum?", you'd have to think to yourself, "OK, what's 11 + 16 + 13 + 12?" before responding, "52." If, on the other hand, you had been keeping track of the sum yourself while I was listing the numbers (i.e., when I said, "11" you thought "11", when I said "16", you thought, "27," and so on), you could answer "52" right away. Then if I say, "OK, now forget the number 16," if you've been keeping track of the sum inside your head you can simply take 16 away from 52 and know that the new sum is 36, rather than taking 16 off the list and then summing up 11 + 13 + 12. So my question is, what other calculations, other than the obvious ones like sum and average, are like this?
    SECOND EDIT: As an arbitrary example of a statistic that (I'm almost certain) does require iteration -- and therefore cannot be maintained as simply as a sum or average -- consider if I asked you, "how many numbers in this collection are divisible by the min?" Let's say the numbers are 5, 15, 19, 20, 21, 25, and 30. The min of this set is 5, which divides into 5, 15, 20, 25, and 30 (but not 19 or 21), so the answer is 5. Now if I remove 5 from the collection and ask the same question, the answer is now 2, since only 15 and 30 are divisible by the new min of 15; but, as far as I can tell, you cannot know this without going through the collection again.
So I think this gets to the heart of my question: if we can divide kinds of statistics into these categories, those that are maintainable (my own term, maybe there's a more official one somewhere) versus those that require iteration to compute any time a collection is changed, what are all the maintainable ones? What I am asking about is not strictly the same as an online algorithm (though I sincerely thank those of you who introduced me to that concept). An online algorithm can begin its work without having even seen all of the input data; the maintainable statistics I am seeking will certainly have seen all the data, they just don't need to reiterate through it over and over again whenever it changes.
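
    In SQL Server terms, an indexed view is one place where the engine does this kind of bookkeeping automatically. The sketch below uses hypothetical table and view names; once the clustered index exists, the stored count, sum, and sum of squares are maintained incrementally on every insert, update, and delete, so mean and (population) variance can be derived without rescanning the rows:

    CREATE TABLE dbo.Samples
    (
        SampleId int IDENTITY(1,1) NOT NULL PRIMARY KEY,
        GroupId  int NOT NULL,
        Val      decimal(18,4) NOT NULL   -- NOT NULL so SUM() is allowed in the indexed view
    );
    GO

    CREATE VIEW dbo.vSampleStats
    WITH SCHEMABINDING
    AS
    SELECT  GroupId,
            COUNT_BIG(*)   AS RowCnt,    -- count
            SUM(Val)       AS SumVal,    -- sum; mean = SumVal / RowCnt
            SUM(Val * Val) AS SumSqVal   -- sum of squares; variance = SumSqVal/RowCnt - (SumVal/RowCnt)^2
    FROM    dbo.Samples
    GROUP BY GroupId;
    GO

    -- Materializes the aggregates; from here on they are updated in place
    -- as rows are added and removed, never recomputed by iterating.
    CREATE UNIQUE CLUSTERED INDEX IX_vSampleStats ON dbo.vSampleStats (GroupId);

    On editions below Enterprise, queries generally need the NOEXPAND hint (SELECT ... FROM dbo.vSampleStats WITH (NOEXPAND)) to read the materialized index directly, and the usual caveat applies that the sum-of-squares form of variance can lose precision for large, tightly clustered values.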

    Read the article

< Previous Page | 786 787 788 789 790 791 792 793 794 795 796 797  | Next Page >