Search Results

Search found 30085 results on 1204 pages for 'sql trace'.


  • ARR troubleshooting 502.3 / WinHttp tracing on Server 2012

    - by nachojammers
    I have the following scenario: three Windows Server 2012 virtual servers, all with IIS 8:

        1 server with Application Request Routing 3
        2 servers with the web applications that the ARR server routes to

    I am getting intermittent 502.3 errors with error code 12002. Following this guide http://www.iis.net/learn/extensions/troubleshooting-application-request-routing/troubleshooting-502-errors-in-arr I have identified that I need to trace the WinHttp/WebIO providers using netsh to get to the real error that is mapped to the 12002 error code. I run the trace as the article suggests:

        netsh trace start scenario=internetclient capture=yes persistent=no level=verbose tracefile=c:\temp\net.etl

    When analysing the output of the netsh trace, I don't get the level of information that the article suggests I should. Specifically, I only get the following types of entry in the trace when viewed using Netmon:

        WINHTTP_MicrosoftWindowsWinHttp:Stopping WorkItem Thread Action...
        WINHTTP_MicrosoftWindowsWinHttp:Starting WorkItem Thread Action...
        WINHTTP_MicrosoftWindowsWinHttp:Queue Overlapped IO Thread Action...

    I certainly don't get anything detailed enough to help me understand why I am getting any timeouts. Is there any reason why Server 2012 wouldn't trace the WinHttp API to the level I need? Thanks


  • Troubles installing/starting Redis via Resque

    - by Craig Flannagan
    Trying to complete the instructions for Resque/Redis installation here: https://github.com/defunkt/resque/blob/master/README.markdown

    I'm stuck where I try to start up Redis via Resque with the following command:

        Craig:/usr/local/src/resque$ rake redis:start
        (in /usr/local/src/resque)
        Detach with Ctrl+\
        Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        (See full trace by running task with --trace)

    Rerunning with --trace (showing only part of the trace):

        Craig:/usr/local/src/resque$ rake redis:start --trace
        (in /usr/local/src/resque)
        ** Invoke redis:start (first_time)
        ** Execute redis:start
        Detach with Ctrl+\
        Re-attach with rake redis:attach
        ../../bin/dtach -A /tmp/redis.dtach ../../bin/redis-server ../../../etc/redis.conf
        rake aborted!
        Command failed with status (127): [../../bin/dtach -A /tmp/redis.dtach ../../...]
        /Users/craigflannagan/.rvm/gems/ruby-1.9.2-head@foo/gems/rake-0.8.7/lib/rake.rb:995:in `block in sh'

    Not sure what is wrong here. By the way, when I followed those instructions:

        $ git clone git://github.com/defunkt/resque.git
        $ cd resque
        $ PREFIX=<your_prefix> rake redis:install dtach:install
        $ rake redis:start

    I wasn't sure whether I was supposed to run the first step from within the Rails project, or whether the git clone was supposed to create a new folder outside the Rails project (in this case, I chose to have the folder created outside the project).


  • MSSQL 2000 installation error: Setup failed to configure the server. Refer to the server error logs.

    - by kaneuniversal
    I'm trying to install MSSQL 2000 on a virtual Windows 2003 instance. However, every time I run the install program, it fails to start the service. This is the error log:

        21:46:50 C:\Program Files\Microsoft SQL Server\80\Tools\Binn\cnfgsvr.exe -F "C:\WINDOWS\sqlstp.log" -I MSSQLSERVER -V 1 -M 0 -Q "SQL_Latin1_General_CP1_CI_AS" -H 131408 -U sa -P
        ###############################################################################
        Starting Service ...
        SQL_Latin1_General_CP1_CI_AS -m -Q -T4022 -T3659
        Connecting to Server ...
        driver={sql server};server=xxxxxxxxxx;UID=sa;PWD=;database=master
        [Microsoft][ODBC SQL Server Driver]Timeout expired
        driver={sql server};server=xxxxxxxxxx;UID=sa;PWD=;database=master
        [Microsoft][ODBC SQL Server Driver]Timeout expired
        driver={sql server};server=xxxxxxxxxx;UID=sa;PWD=;database=master
        [Microsoft][ODBC SQL Server Driver]Timeout expired
        SQL Server configuration failed.
        ###############################################################################
        21:49:34 Process Exit Code: (-1)
        22:19:04 Setup failed to configure the server. Refer to the server error logs and C:\WINDOWS\sqlstp.log for more information.
        22:19:04 Action CleanUpInstall:
        22:19:04 C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\1\SqlSetup\Bin\scm.exe -Silent 1 -Action 4 -Service SQLSERVERAGENT
        22:19:05 Process Exit Code: (1060) The specified service does not exist as an installed service.
        22:19:05 C:\DOCUME~1\ADMINI~1\LOCALS~1\Temp\1\SqlSetup\Bin\scm.exe -Silent 1 -Action 4 -Service MSSQLSERVER
        22:19:05 Process Exit Code: (0)
        22:19:05 StatsGenerate returned: 2
        22:19:05 StatsGenerate (0x0,0x1,0xf00000,0x200,1033,303,0x0,0x1,0,0,0
        22:19:05 StatsGenerate -1,Administrator)
        22:19:05 Installation Failed.

    Has anyone had this problem? Any ideas about how to fix it? Thanks very much, Michael


  • JBoss 5.1 MySQL connection pooling

    - by boyd4715
    I am using JBoss 5.1.0.GA, MySQL 5.5 and Hibernate 3.3.1 GA (included with JBoss) + Spring. My question is: do I need to add c3p0 as a data source in my Spring/Hibernate configuration for connection pooling, or are the settings in the JBoss mysql-ds.xml enough? My mysql-ds.xml is the following:

        <datasources>
          <local-tx-datasource>
            <jndi-name>MySqlDS</jndi-name>
            <connection-url>jdbc:mysql://localhost:3306/ecotrak</connection-url>
            <driver-class>com.mysql.jdbc.Driver</driver-class>
            <user-name>ecotrak</user-name>
            <password>ecotrak</password>
            <min-pool-size>5</min-pool-size>
            <max-pool-size>20</max-pool-size>
            <idle-timeout-minutes>5</idle-timeout-minutes>
            <exception-sorter-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLExceptionSorter</exception-sorter-class-name>
            <!-- should only be used on drivers after 3.22.1 with "ping" support -->
            <valid-connection-checker-class-name>org.jboss.resource.adapter.jdbc.vendor.MySQLValidConnectionChecker</valid-connection-checker-class-name>
            <!-- sql to call when connection is created
            <new-connection-sql>some arbitrary sql</new-connection-sql>
            -->
            <!-- sql to call on an existing pooled connection when it is obtained from pool - MySQLValidConnectionChecker is preferred for newer drivers
            <check-valid-connection-sql>some arbitrary sql</check-valid-connection-sql>
            -->
            <!-- corresponding type-mapping in the standardjbosscmp-jdbc.xml (optional) -->
            <metadata>
              <type-mapping>mySQL</type-mapping>
            </metadata>
          </local-tx-datasource>
        </datasources>


  • I/O Error with PHP5-FPM, ptrace(PEEKDATA) failed

    - by MultiformeIngegno
    I got a lot of these:

        [NOTICE] child 19214 stopped for tracing
        [NOTICE] about to trace 19214
        [ERROR] ptrace(PEEKDATA) failed: Input/output error (5)
        [NOTICE] finished trace of 19214
        [WARNING] [pool www] child 19208, script 'blahblah.php' executing too slow (30.041419 sec), logging
        [NOTICE] child 19208 stopped for tracing
        [NOTICE] about to trace 19208
        [ERROR] ptrace(PEEKDATA) failed: Input/output error (5)
        [NOTICE] finished trace of 19208
        [WARNING] [pool www] child 19218, script 'blahblah.php' executing too slow (30.035029 sec), logging

    And when PHP reaches max children (at least I presume that's the case) it stops "working". Now, I know I can increase max_children (currently set to 9), but is there a way to stop PHP from "dying"? I'm on a VPS with 1 core and 512 MB of RAM (PHP5-FPM 5.4.4 + APC 3.1.10).


  • Cheapest iSCSI SAN for Windows 2008/SQL Server clustering?

    - by MichaelGG
    Are there any production-quality iSCSI SANs suitable for use with Windows Server 2008/SQL Server failover clustering? So far, I've only seen Dell's MD3000i and HP's MSA 2000 (2012i), both of which are around $6K with a minimal disk configuration. Buffalo (yeah, I know) has a $1,000 device with iSCSI support, but they say it will not work for 2008 failover clustering. I'm interested in seeing something suitable for failover in a production environment, but with very low I/O requirements (clustering, say, a 30 GB database). As for using software: on Windows, StarWind seems to have a great solution, but it's actually more money than buying a hardware SAN. (As I understand it, only the enterprise edition supports having replicas, and that's $3,000 a license.) I was thinking I could use Linux; something like DRBD plus an iSCSI target would be fine. However, I haven't seen any free or low-cost iSCSI software that supports SCSI-3 persistent reservations, which Windows 2008 needs for failover clustering. I know $6K isn't much at all; I'm just curious to see if there are practical cheaper solutions out there. And finally, yes, the software is expensive, but many small businesses get MS BizSpark, so the Windows 2008 Enterprise / SQL 2008 licenses are completely free.


  • CentOS dedicated server unresponsive for the first time

    - by Ambrose Bwangatto
    The server was unresponsive for an hour, so I rebooted it and checked /var/log/messages, where I found the following. Can anybody point out what's wrong?

        Sep 28 07:39:35 www kernel: INFO: task mysqld:22749 blocked for more than 120 seconds.
        Sep 28 07:39:35 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:39:35 www kernel: mysqld D ffff810001015120 0 22749 3266 22792 22659 (NOTLB)
        Sep 28 07:39:35 www kernel: ffff810139d21e58 0000000000000086 ffff810036217000 ffffffff8000f758
        Sep 28 07:39:35 www kernel: ffff81020dfd1408 0000000000000007 ffff8101cfbaf7e0 ffff81020fca5080
        Sep 28 07:39:35 www kernel: 00017a451524782a 00000000000043b2 ffff8101cfbaf9c8 0000000280009a22
        Sep 28 07:39:35 www kernel: Call Trace:
        Sep 28 07:39:35 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:39:35 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:39:35 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:39:35 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:39:35 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:39:35 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:39:57 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:39:58 www kernel:
        Sep 28 07:39:59 www kernel: INFO: task httpd:22679 blocked for more than 120 seconds.
        Sep 28 07:40:04 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:40:08 www kernel: httpd D ffff81000100caa0 0 22679 22413 22680 22678 (NOTLB)
        Sep 28 07:40:51 www kernel: ffff81018b0dbc78 0000000000000086 ffff81018b0dbc88 0000004480063002
        Sep 28 07:41:52 www kernel: ffff81000001cc00 0000000000000007 ffff81013ac5e860 ffff81020fc96100
        Sep 28 07:43:10 www kernel: 00017a44de6376c8 000000000000a89f ffff81013ac5ea48 000000010001cc00
        Sep 28 07:43:38 www kernel: Call Trace:
        Sep 28 07:44:06 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:44:09 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:44:10 www kernel: [<ffffffff8000d0b2>] do_lookup+0x90/0x1e6
        Sep 28 07:44:13 www kernel: [<ffffffff8000a2e9>] __link_path_walk+0xa3a/0xfd1
        Sep 28 07:44:16 www kernel: [<ffffffff8000eb8e>] link_path_walk+0x45/0xb8
        Sep 28 07:44:16 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:44:29 www kernel: [<ffffffff800129ad>] getname+0x15b/0x1c2
        Sep 28 07:44:38 www kernel: [<ffffffff80023b60>] __user_walk_fd+0x37/0x4c
        Sep 28 07:44:42 www kernel: [<ffffffff80028ada>] vfs_stat_fd+0x1b/0x4a
        Sep 28 07:44:43 www kernel: [<ffffffff8003c69a>] do_unlinkat+0xe8/0x141
        Sep 28 07:45:02 www kernel: [<ffffffff80023890>] sys_newstat+0x19/0x31
        Sep 28 07:46:18 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:46:43 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:46:55 www kernel:
        Sep 28 07:46:58 www kernel: INFO: task php:28906 blocked for more than 120 seconds.
        Sep 28 07:46:59 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:47:00 www kernel: php D ffff810165127000 0 28906 28905 (NOTLB)
        Sep 28 07:47:37 www kernel: ffff810078431e58 0000000000000082 ffff810165127000 ffffffff8000f758
        Sep 28 07:48:29 www kernel: ffff81020dfd1408 0000000000000007 ffff8101247b9860 ffff810207d0e100
        Sep 28 07:48:36 www kernel: 00017a4218932fae 0000000000377111 ffff8101247b9a48 0000000280009a22
        Sep 28 07:48:37 www kernel: Call Trace:
        Sep 28 07:48:37 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:48:37 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:48:37 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:48:41 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:48:41 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:48:42 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:48:42 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:48:42 www kernel:
        Sep 28 07:48:43 www kernel: INFO: task php:29032 blocked for more than 120 seconds.
        Sep 28 07:48:45 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:48:46 www kernel: php D 0000000000000004 0 29032 1 29050 29024 (NOTLB)
        Sep 28 07:48:46 www kernel: ffff81006b465dc8 0000000000000086 ffff81020dfd1408 ffffffff80009a22
        Sep 28 07:48:46 www kernel: 0000000000000000 0000000000000007 ffff81002946e860 ffff81003c943100
        Sep 28 07:48:46 www kernel: 00017a4211450766 000000000024be3d ffff81002946ea48 000000020e42b300
        Sep 28 07:48:52 www kernel: Call Trace:
        Sep 28 07:48:54 www kernel: [<ffffffff80009a22>] __link_path_walk+0x173/0xfd1
        Sep 28 07:48:54 www kernel: [<ffffffff8002cc58>] mntput_no_expire+0x19/0x89
        Sep 28 07:48:55 www kernel: [<ffffffff8000ebf5>] link_path_walk+0xac/0xb8
        Sep 28 07:48:55 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:48:55 www kernel: [<ffffffff80023974>] __path_lookup_intent_open+0x56/0x97
        Sep 28 07:48:55 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:48:55 www kernel: [<ffffffff8001b260>] open_namei+0xea/0x718
        Sep 28 07:48:59 www kernel: [<ffffffff80067235>] do_page_fault+0x4cc/0x842
        Sep 28 07:49:01 www kernel: [<ffffffff80027726>] do_filp_open+0x1c/0x38
        Sep 28 07:49:01 www kernel: [<ffffffff8001a09c>] do_sys_open+0x44/0xbe
        Sep 28 07:49:02 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:49:03 www kernel:
        Sep 28 07:49:07 www kernel: INFO: task mysqld:22749 blocked for more than 120 seconds.
        Sep 28 07:49:09 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:49:09 www kernel: mysqld D ffff810001015120 0 22749 3266 22792 22659 (NOTLB)
        Sep 28 07:49:14 www kernel: ffff810139d21e58 0000000000000086 ffff810036217000 ffffffff8000f758
        Sep 28 07:49:14 www kernel: ffff81020dfd1408 0000000000000007 ffff8101cfbaf7e0 ffff81020fca5080
        Sep 28 07:49:15 www kernel: 00017a451524782a 00000000000043b2 ffff8101cfbaf9c8 0000000280009a22
        Sep 28 07:49:15 www kernel: Call Trace:
        Sep 28 07:49:22 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:49:23 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:49:23 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:49:23 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:49:23 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:49:23 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:49:23 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:49:23 www kernel:
        Sep 28 07:49:23 www kernel: INFO: task php:29024 blocked for more than 120 seconds.
        Sep 28 07:49:23 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:49:24 www kernel: php D ffff8101920a0000 0 29024 1 29032 29001 (NOTLB)
        Sep 28 07:49:26 www kernel: ffff8101cca8fe58 0000000000000086 ffff8101920a0000 ffffffff8000f758
        Sep 28 07:49:26 www kernel: ffff81020dfd1408 0000000000000007 ffff81000b64b040 ffff8101e05337e0
        Sep 28 07:49:26 www kernel: 00017a552aef9f35 0000000000009513 ffff81000b64b228 0000000180009a22
        Sep 28 07:49:27 www kernel: Call Trace:
        Sep 28 07:49:27 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:49:27 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:49:27 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:49:27 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:49:27 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:49:27 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:49:27 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:49:27 www kernel:
        Sep 28 07:49:27 www kernel: INFO: task php:29050 blocked for more than 120 seconds.
        Sep 28 07:49:28 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:49:28 www kernel: php D ffff810201d95000 0 29050 1 29032 (NOTLB)
        Sep 28 07:49:28 www kernel: ffff810051e45e58 0000000000000086 ffff810201d95000 ffffffff8000f758
        Sep 28 07:49:28 www kernel: ffff81020dfd1408 0000000000000007 ffff81001c23f080 ffff81020f5e2080
        Sep 28 07:49:29 www kernel: 00017a5d0bc2aa75 0000000000d0ecfe ffff81001c23f268 0000000280009a22
        Sep 28 07:49:29 www kernel: Call Trace:
        Sep 28 07:49:29 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:49:29 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:49:29 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:49:34 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:49:35 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:49:37 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:49:37 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:49:37 www kernel:
        Sep 28 07:49:37 www kernel: INFO: task php:29064 blocked for more than 120 seconds.
        Sep 28 07:49:37 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:49:37 www kernel: php D ffff81009c231000 0 29064 29057 (NOTLB)
        Sep 28 07:49:38 www kernel: ffff8100a5dc7e58 0000000000000086 ffff81009c231000 ffffffff8000f758
        Sep 28 07:49:38 www kernel: ffff81020dfd1408 0000000000000007 ffff81000a850820 ffff8102038037a0
        Sep 28 07:49:38 www kernel: 00017a5bb5c6846e 000000000000861a ffff81000a850a08 0000000080009a22
        Sep 28 07:49:38 www kernel: Call Trace:
        Sep 28 07:49:38 www kernel: [<ffffffff8000f758>] generic_permission+0x52/0xca
        Sep 28 07:49:38 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:49:38 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:49:38 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:49:38 www kernel: [<ffffffff8003c618>] do_unlinkat+0x66/0x141
        Sep 28 07:49:38 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:49:40 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:49:42 www kernel:
        Sep 28 07:49:42 www kernel: INFO: task mysqld:24612 blocked for more than 120 seconds.
        Sep 28 07:49:43 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:49:46 www kernel: mysqld D ffff81020dfd14c0 0 24612 3266 19643 3599 (NOTLB)
        Sep 28 07:49:46 www kernel: ffff81019e517c78 0000000000000086 ffff81019e517c88 ffffffff80063002
        Sep 28 07:49:47 www kernel: ffff810201966558 0000000000000009 ffff81015fa560c0 ffff8101c263b860
        Sep 28 07:49:51 www kernel: 00017a9d113e27fe 0000000000008d5a ffff81015fa562a8 000000018006ec9f
        Sep 28 07:49:52 www kernel: Call Trace:
        Sep 28 07:49:52 www kernel: [<ffffffff80063002>] thread_return+0x62/0xfe
        Sep 28 07:49:52 www kernel: [<ffffffff8005a46a>] getnstimeofday+0x10/0x29
        Sep 28 07:49:53 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:49:54 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:49:54 www kernel: [<ffffffff8000d0b2>] do_lookup+0x90/0x1e6
        Sep 28 07:49:56 www kernel: [<ffffffff8000a2e9>] __link_path_walk+0xa3a/0xfd1
        Sep 28 07:50:00 www kernel: [<ffffffff8000eb8e>] link_path_walk+0x45/0xb8
        Sep 28 07:50:03 www kernel: [<ffffffff8000cea2>] do_path_lookup+0x294/0x310
        Sep 28 07:50:04 www kernel: [<ffffffff800129ad>] getname+0x15b/0x1c2
        Sep 28 07:50:06 www kernel: [<ffffffff80023b60>] __user_walk_fd+0x37/0x4c
        Sep 28 07:50:06 www kernel: [<ffffffff8003f013>] vfs_lstat_fd+0x18/0x47
        Sep 28 07:50:08 www kernel: [<ffffffff8002ad91>] sys_newlstat+0x19/0x31
        Sep 28 07:50:10 www kernel: [<ffffffff8005d229>] tracesys+0x71/0xe0
        Sep 28 07:50:15 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:50:19 www kernel:
        Sep 28 07:50:19 www kernel: INFO: task php:29178 blocked for more than 120 seconds.
        Sep 28 07:50:23 www kernel: "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
        Sep 28 07:50:23 www kernel: php D 0000000000000003 0 29178 29123 (NOTLB)
        Sep 28 07:50:23 www kernel: ffff81004a95bdc8 0000000000000086 ffff81020dfd1408 ffffffff80009a22
        Sep 28 07:50:24 www kernel: ffffffff800a2fd0 0000000000000007 ffff8101937a4040 ffff81010bde27a0
        Sep 28 07:50:26 www kernel: 00017aa3a1d89c9b 000000000000d66e ffff8101937a4228 000000020e42b300
        Sep 28 07:50:26 www kernel: Call Trace:
        Sep 28 07:50:26 www kernel: [<ffffffff80009a22>] __link_path_walk+0x173/0xfd1
        Sep 28 07:50:27 www kernel: [<ffffffff800a2fd0>] wake_bit_function+0x0/0x23
        Sep 28 07:50:27 www kernel: [<ffffffff8002cc58>] mntput_no_expire+0x19/0x89
        Sep 28 07:50:27 www kernel: [<ffffffff8000ebf5>] link_path_walk+0xac/0xb8
        Sep 28 07:50:28 www kernel: [<ffffffff80063c63>] __mutex_lock_slowpath+0x60/0x9b
        Sep 28 07:50:32 www kernel: [<ffffffff80023974>] __path_lookup_intent_open+0x56/0x97
        Sep 28 07:50:32 www kernel: [<ffffffff80063cad>] .text.lock.mutex+0xf/0x14
        Sep 28 07:50:34 www kernel: [<ffffffff8001b260>] open_namei+0xea/0x718
        Sep 28 07:50:34 www kernel: [<ffffffff80067235>] do_page_fault+0x4cc/0x842
        Sep 28 07:50:35 www kernel: [<ffffffff80027726>] do_filp_open+0x1c/0x38
        Sep 28 07:50:35 www kernel: [<ffffffff8001a09c>] do_sys_open+0x44/0xbe
        Sep 28 07:50:35 www kernel: [<ffffffff8005d28d>] tracesys+0xd5/0xe0
        Sep 28 07:50:35 www kernel:
        Sep 28 07:56:41 www kernel: proftpd invoked oom-killer: gfp_mask=0x201d2, order=0, oomkilladj=0
        Sep 28 07:56:41 www kernel:
        Sep 28 07:56:41 www kernel: Call Trace:
        Sep 28 07:56:41 www kernel: [<ffffffff800c9f35>] out_of_memory+0x8e/0x2f3
        Sep 28 07:56:41 www kernel: [<ffffffff800a2fa2>] autoremove_wake_function+0x0/0x2e
        Sep 28 07:56:41 www kernel: [<ffffffff8000f67d>] __alloc_pages+0x27f/0x308
        Sep 28 07:56:41 www kernel: [<ffffffff80013047>] __do_page_cache_readahead+0x96/0x17b
        Sep 28 07:56:41 www kernel: [<ffffffff80013984>] filemap_nopage+0x14c/0x360
        Sep 28 07:56:41 www kernel: [<ffffffff80008972>] __handle_mm_fault+0x1fd/0x103b
        Sep 28 07:56:41 www kernel: [<ffffffff800a4fe1>] ktime_get_ts+0x1a/0x4e
        Sep 28 07:56:41 www kernel: [<ffffffff80067202>] do_page_fault+0x499/0x842
        Sep 28 07:56:41 www kernel: [<ffffffff8003ad91>] hrtimer_try_to_cancel+0x4a/0x53
        Sep 28 07:58:10 www kernel: [<ffffffff80033541>] do_setitimer+0xd0/0x689
        Sep 28 08:26:22 www syslogd 1.4.1: restart.
        Sep 28 08:26:22 www kernel: klogd 1.4.1, log source = /proc/kmsg started.
        Sep 28 08:26:22 www kernel: Linux version 2.6.18-274.17.1.el5 ([email protected]) (gcc version 4.1.2 20080704 (Red Hat 4.1.2-51)) #1 SMP Tue Jan 10 17:25:58 EST 2012
        Sep 28 08:26:22 www kernel: Command line: ro root=LABEL=/


  • Has anyone used tools like Chef, Salt, Puppet, or CFEngine to configure a Windows 2008 server with SQL?

    - by Development 4.0
    I have been looking into tools to automate the creation of servers, for two different reasons: production, and development machines. I love the idea of the immutable server. I have seen the tools demoed and used successfully on *nix boxes running Rails or LAMP etc. Has anyone found a good way to do this on the Microsoft stack? I would like to get in on the fun and create scripts that will install Windows, patch it according to specification, deploy SQL Server, create scripts to build out a database and, just for fun, deploy SharePoint, configure it, and then deploy a SharePoint solution to it. I can get part of the way: install Windows manually, install SQL Server manually, use PowerShell to do all the configuration and setup, install SharePoint and configure part of it, then use PowerShell for the rest of the configuration and for deploying a solution. I would love to have the ability to run one script, though, or at least one unified process. I can, and mostly have, used VM template images and then instantiated them, but the creation of the template is usually a manual step.


  • Why does squid reject this multipart-form-data POST from curl?

    - by keturn
    This fails:

        $ curl --trace multipart-fail.log -F "source={}" http://127.0.0.1:3003/jslint

    with a Squid status 417 error, ERR_INVALID_REQ. I have captured:

      - a trace of the failing curl request
      - a trace of a successful curl request that uses urlencoding (curl -d) instead of multipart (curl -F)
      - a formatted version of Squid's error message

    I've never had this happen in practice through a web browser, so it's probably my curl usage rather than Squid, but if I tell curl not to use the Squid proxy, the web application on the other end accepts the request just fine. (If there's a more appropriate StackExchange site for this, please let me know.)


  • Can't get PHP + SQLite working

    - by facha
    Hi everyone, I've been struggling all morning to make PHP work with an SQLite database. Here is the piece of PHP code that I try to execute:

        # less /var/www/html/test.php
        <?php
        $db = new PDO("sqlite:/var/www/test.sql");
        $sql = "insert into test (login,pass) values ('login','pass');";
        $db->exec($sql);
        ?>

    Here is how I've set up the test:

        # sqlite3 /var/www/test.sql
        sqlite> create table test (login varchar,pass varchar);

        # chown apache:apache /var/www/test.sql
        # chmod 644 /var/www/test.sql

    Here is the stuff that drives me mad: when I execute the script from the command line (php test.php), everything goes well. The SQL is executed and I can see a new row appear in the database. When I execute the same script from a browser, the SQL is not executed; I don't get a new row in the database, and there are no errors in the Apache log file. Please help.


  • Functions registered with ExternalInterface.addCallback not available in Javascript

    - by Selene
    I'm working on a Flash game that needs to call some Javascript on the page and get data back from it. Calling Javascript from Flash works. Calling the Flash functions from Javascript (often) doesn't. I'm using the Gaia framework. What happens:

      - The swf is loaded in with SWFObject.
      - There's a button in the Flash file. On click, it uses ExternalInterface.call() to call a Javascript function. This works.
      - The Javascript function calls a Flash function that was exposed with ExternalInterface.addCallback(). Sometimes, the Javascript produces the following error: TypeError: myFlash.testCallback is not a function.
      - When the error happens, it affects all functions registered with addCallback(). Gaia and some of its included libraries use addCallback(), and calling those functions from Javascript also produces the TypeError.
      - Waiting a long time before pressing the button in Flash doesn't solve the error.
      - Having Flash re-try addCallback() periodically doesn't solve the error.
      - When the error occurs, ExternalInterface.available = true and ExternalInterface.objectID contains the correct name for the Flash embed object.
      - When the error occurs, document.getElementById('myflashcontent') correctly returns the Flash embed object.

    From my Page class:

        public class MyPage extends AbstractPage
        {
            // declarations of stage instances and class variables

            // other functions

            override public function transitionIn():void
            {
                send_button.addEventListener(MouseEvent.MOUSE_UP, callJS);
                exposeCallbacks();
                super.transitionIn();
            }

            private function exposeCallbacks():void
            {
                trace("exposeCallbacks()");
                if (ExternalInterface.available)
                {
                    trace("ExternalInterface.objectID: " + ExternalInterface.objectID);
                    try
                    {
                        ExternalInterface.addCallback("testCallback", simpleTestCallback);
                        trace("called ExternalInterface.addCallback");
                    }
                    catch (error:SecurityError)
                    {
                        trace("A SecurityError occurred: " + error.message + "\n");
                    }
                    catch (error:Error)
                    {
                        trace("An Error occurred: " + error.message + "\n");
                    }
                }
                else
                {
                    trace("exposeCallbacks() - ExternalInterface not available");
                }
            }

            private function simpleTestCallback(str:String):void
            {
                trace("simpleTestCallback(str=\"" + str + "\")");
            }

            private function callJS(e:Event):void
            {
                if (ExternalInterface.available)
                {
                    ExternalInterface.call("sendTest", "name", "url");
                }
                else
                {
                    trace("callJS() - ExternalInterface not available");
                }
            }
        }

    My Javascript:

        function sendTest(text, url) {
            var myFlash = document.getElementById("myflashcontent");
            var callbackStatus = "";
            callbackStatus += '\nmyFlash[testCallback]: ' + myFlash['testCallback'];
            //console.log(callbackStatus);
            var error = false;
            try {
                myFlash.testCallback("test string");
            } catch (err) {
                alert("Error: " + err.toString());
                error = true;
            }
            if (!error) {
                alert("Success");
            }
        }

        var params = { quality: "high", scale: "noscale", wmode: "transparent", allowscriptaccess: "always", bgcolor: "#000000" };
        var flashVars = { siteXML: "xml/site.xml" };
        var attributes = { id: "myflashcontent", name: "myflashcontent" };

        // load the flash movie.
        swfobject.embedSWF("http://myurl.com/main.swf?v2", "myflashcontent", "728", "676", "10.0.0",
            serverRoot + "expressInstall.swf", flashVars, params, attributes, function(returnObj) {
                console.log('Returned ' + returnObj.success);
                if (returnObj.success) {
                    returnObj.ref.focus();
                }
            });


  • Log4j: Events appear in the wrong logfile

    - by Markus
    Hi there! To be able to log and trace some events, I've added a LoggingHandler class to my Java project. Inside this class I'm using two different log4j logger instances - one for logging an event and one for tracing an event - writing into different files. The initialization block of the class looks like this:

        public void initialize() {
            System.out.print("starting logging server ...");

            // create logger instances
            logLogger = Logger.getLogger("log");
            traceLogger = Logger.getLogger("trace");

            // create pattern layout
            String conversionPattern = "%c{2} %d{ABSOLUTE} %r %p %m%n";
            try {
                patternLayout = new PatternLayout();
                patternLayout.setConversionPattern(conversionPattern);
            } catch (Exception e) {
                System.out.println("error: could not create logger layout pattern");
                System.out.println(e);
                System.exit(1);
            }

            // add pattern to file appender
            try {
                logFileAppender = new FileAppender(patternLayout, logFilename, false);
                traceFileAppender = new FileAppender(patternLayout, traceFilename, false);
            } catch (IOException e) {
                System.out.println("error: could not add logger layout pattern to corresponding appender");
                System.out.println(e);
                System.exit(1);
            }

            // add appenders to loggers
            logLogger.addAppender(logFileAppender);
            traceLogger.addAppender(traceFileAppender);

            // set logger level
            logLogger.setLevel(Level.INFO);
            traceLogger.setLevel(Level.INFO);

            // start logging server
            loggingServer = new LoggingServer(logLogger, traceLogger, serverPort, this);
            loggingServer.start();
            System.out.println(" done");
        }

    To make sure that only one thread is using the functionality of a logger instance at the same time, each logging / tracing method calls the logging method .info() inside a synchronized block. One example looks like this:

        public void logMessage(String message) {
            synchronized (logLogger) {
                if (logLogger.isInfoEnabled() && logFileAppender != null) {
                    logLogger.info(instanceName + ": " + message);
                }
            }
        }

    If I look at the log files, I see that sometimes an event appears in the wrong file. One example (from the trace file):

        trace 10:41:30,773 11080 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1267093 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,784 11091 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1156513 to vehicle 1105792 (slaveControl 1)
        trace 10:41:30,796 11103 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1104306 to vehicle 1055293 (slaveControl 1)
        trace 10:41:30,808 11115 INFO masterControl(192.168.2.21): vehicle 1327879 was pushed to slave control 1
        10:41:30,808 11115 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1101572 to vehicle 106741 (slaveControl 1)
        trace 10:41:30,820 11127 INFO masterControl(192.168.2.21): string broadcast message was pushed from 1055293 to vehicle 1104306 (slaveControl 1)

    I think that the problem occurs every time two events happen at the same time (here: 10:41:30,808). Does anybody have an idea how to solve my problem? I already tried to add a sleep() after the method call, but that didn't help ...
    BR, Markus

    Edit: Two more examples of the interleaved output:

        logtrace 11:16:07,75511:16:07,755 1129711297 INFOINFO masterControl(192.168.2.21): string broadcast message was pushed from 1291400 to vehicle 1138272 (slaveControl 1)masterControl(192.168.2.21): vehicle 1333770 was added to slave control 1

    or

        log 11:16:08,562 12104 INFO 11:16:08,562 masterControl(192.168.2.21): string broadcast message was pushed from 117772 to vehicle 1217744 (slaveControl 1) 12104 INFO masterControl(192.168.2.21): vehicle 1169775 was pushed to slave control 1

    Edit 2: It seems like the problem only occurs if the logging methods are called from inside an RMI thread (my client / server exchange information using RMI connections). ...

    Edit 3: I solved the problem by myself: it seems like log4j is NOT completely thread-safe. After synchronizing all log / trace methods using a separate lock object, everything is working fine. Maybe the lib is writing the messages to a thread-unsafe buffer before writing them to file?


  • Sun Fire X4270 M3 SAP Enhancement Package 4 for SAP ERP 6.0 (Unicode) Two-Tier Standard Sales and Distribution (SD) Benchmark

    - by Brian
    Oracle's Sun Fire X4270 M3 server achieved 8,320 SAP SD Benchmark users running SAP enhancement package 4 for SAP ERP 6.0 with unicode software using Oracle Database 11g and Oracle Solaris 10.

      - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat both the IBM Flex System x240 and the IBM System x3650 M4 server running DB2 9.7 and Windows Server 2008 R2 Enterprise Edition.
      - The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the HP ProLiant BL460c Gen8 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 6%.
      - The Sun Fire X4270 M3 server using Oracle Database 11g and Oracle Solaris 10 beat the Cisco UCS C240 M3 server running SQL Server 2008 and Windows Server 2008 R2 Datacenter Edition by 9%.
      - The Sun Fire X4270 M3 server running Oracle Database 11g and Oracle Solaris 10 beat the Fujitsu PRIMERGY RX300 S7 server using SQL Server 2008 and Windows Server 2008 R2 Enterprise Edition by 10%.

    Performance Landscape

    SAP-SD 2-Tier Performance Table (in decreasing performance order), SAP ERP 6.0 Enhancement Pack 4 (Unicode) results (benchmark version from January 2009 to April 2012). All systems use 2 x Intel Xeon E5-2690 @ 2.90 GHz and 128 GB memory; all results are for SAP ERP/ECC release 2009 6.0 EP4 (Unicode).

        Sun Fire X4270 M3         | Oracle Solaris 10         | Oracle Database 11g | Users: 8,320 | SAPS: 45,570 | SAPS/Proc: 22,785 | 10-Apr-12
        IBM Flex System x240      | Windows Server 2008 R2 EE | DB2 9.7             | Users: 7,960 | SAPS: 43,520 | SAPS/Proc: 21,760 | 11-Apr-12
        HP ProLiant BL460c Gen8   | Windows Server 2008 R2 EE | SQL Server 2008     | Users: 7,865 | SAPS: 42,920 | SAPS/Proc: 21,460 | 29-Mar-12
        IBM System x3650 M4       | Windows Server 2008 R2 EE | DB2 9.7             | Users: 7,855 | SAPS: 42,880 | SAPS/Proc: 21,440 | 06-Mar-12
        Cisco UCS C240 M3         | Windows Server 2008 R2 DE | SQL Server 2008     | Users: 7,635 | SAPS: 41,800 | SAPS/Proc: 20,900 | 06-Mar-12
        Fujitsu PRIMERGY RX300 S7 | Windows Server 2008 R2 EE | SQL Server 2008     | Users: 7,570 | SAPS: 41,320 | SAPS/Proc: 20,660 | 06-Mar-12

    Complete benchmark results may be found at the SAP benchmark website http://www.sap.com/benchmark.

    Configuration and Results Summary

    Hardware configuration:

        Sun Fire X4270 M3
        2 x 2.90 GHz Intel Xeon E5-2690 processors
        128 GB memory
        Sun StorageTek 6540 with 4 * 16 * 300GB 15Krpm 4Gb FC-AL

    Software configuration:

        Oracle Solaris 10
        Oracle Database 11g
        SAP enhancement package 4 for SAP ERP 6.0 (Unicode)

    Certified results (published by SAP):

        Number of benchmark users: 8,320
        Average dialog response time: 0.95 seconds
        Throughput: 911,330 fully processed order lines; 2,734,000 dialog steps/hour
        SAPS: 45,570
        SAP Certification: 2012014

    Benchmark Description

    The SAP Standard Application SD (Sales and Distribution) Benchmark is a two-tier ERP business test that is indicative of full business workloads of complete order processing and invoice processing, and demonstrates the ability to run both the application and database software on a single system. The SAP Standard Application SD Benchmark represents the critical tasks performed in real-world ERP business environments. SAP is one of the premier world-wide ERP application providers, and maintains a suite of benchmark tests to demonstrate the performance of competitive systems on the various SAP products.

    See Also

        SAP Benchmark Website
        Sun Fire X4270 M3 Server (oracle.com OTN)
        Oracle Solaris (oracle.com OTN)
        Oracle Database 11g Release 2 Enterprise Edition (oracle.com OTN)

    Disclosure Statement

    Two-tier SAP Sales and Distribution (SD) standard SAP SD benchmark based on SAP enhancement package 4 for SAP ERP 6.0 (Unicode) application benchmark as of 04/11/12: Sun Fire X4270 M3 (2 processors, 16 cores, 32 threads) 8,320 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, Oracle 11g, Solaris 10, Cert# 2012014. IBM Flex System x240 (2 processors, 16 cores, 32 threads) 7,960 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012016. IBM System x3650 M4 (2 processors, 16 cores, 32 threads) 7,855 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, DB2 9.7, Windows Server 2008 R2 EE, Cert# 2012010. Cisco UCS C240 M3 (2 processors, 16 cores, 32 threads) 7,635 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 DE, Cert# 2012011. Fujitsu PRIMERGY RX300 S7 (2 processors, 16 cores, 32 threads) 7,570 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012008. HP ProLiant DL380p Gen8 (2 processors, 16 cores, 32 threads) 7,865 SAP SD Users, 2 x 2.90 GHz Intel Xeon E5-2690, 128 GB memory, SQL Server 2008, Windows Server 2008 R2 EE, Cert# 2012012. SAP and R/3 are registered trademarks of SAP AG in Germany and other countries. More info: www.sap.com/benchmark


  • RegexClean Transformation

    Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used: a match expression and a replace expression. The transformation is designed around named capture groups or match groups, and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easy-to-reuse transformation where a bespoke Script Component was previously the only alternative.

    Some simple properties are available for each column selected:

    Behaviour - The two behaviour modes offer similar functionality, but with a difference. Replace replaces tokens within the input, and Emit overwrites the whole string.

    Cascade - Cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, which are then processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line. If there is no corresponding replace expression for a match expression line, then the last replace expression will be used instead. It is common to have multiple match expressions, but only a single replace expression.

    Match Expression - The expression used to define the named capture groups. This is where you analyse the data, and tag or name elements within it as found by the match expression.

    Replace Expression - The replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output.

    If you want to use regular expressions to validate data, then try the Regular Expression Transformation.

    Quick Start Guide

        1. Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns; just select the column again in the lower grid, using the Input Columns drop-down selector.
        2. Amend the output column name and size as required. They default to the same as the input column selected.
        3. Amend the behaviour as required; the default is Replace.
        4. Amend the cascade option as required; the default is true.
        5. Finally, enter your match and replace regular expressions.

    Quick Sample #1

    Parse an email address and extract the user and domain portions, then format the result as a web address passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before the @ and after it respectively. Behaviour is Emit, with cascade set to false; we only have a single match expression.

        Match Expression:   ^(?<user>[^@]+)@(?<host>.+)$
        Replace Expression: http://www.${host}?user=${user}

    Results:

        Sample Input:  zheng0@adventure-works.com
        Sample Output: http://www.adventure-works.com?user=zheng0

    The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation from the list.

    Downloads

    The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version to match your SQL Server version, or you can install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed.

        RegexClean Transformation for SQL Server 2005
        RegexClean Transformation for SQL Server 2008

    Version History

        SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)
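    If you want to prototype the two expressions outside SSIS first, the .NET Regex class uses the same named-group and ${name} substitution syntax that the transformation is built around. Here is a minimal C# sketch (my own, not part of the component; the input value is reconstructed from the sample output above) that reproduces Quick Sample #1:

        using System;
        using System.Text.RegularExpressions;

        class RegexCleanSample
        {
            static void Main()
            {
                // Match expression: tag the text before the @ as "user" and after it as "host".
                var match = new Regex(@"^(?<user>[^@]+)@(?<host>.+)$");

                // Replace expression: reassemble the named groups into the output string.
                string output = match.Replace("zheng0@adventure-works.com",
                                              "http://www.${host}?user=${user}");

                Console.WriteLine(output); // http://www.adventure-works.com?user=zheng0
            }
        }

    This mirrors the Emit behaviour, where the whole input string is overwritten by the assembled replace expression.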


  • Data Mining Resources

    - by Dejan Sarka
    There are many different types of analyses, each one with its own pros and cons.

    Relational reports have a predefined structure, and end users cannot change it. They are simple for end users to consume. Reports can use real-time data and snapshots of data to show the state of a report at specific points in time. One of the drawbacks is that report authoring is limited to IT pros and advanced users, and any kind of dynamic restructuring is very limited. If real-time data is used for a report, the report has a negative impact on the performance of the source system. Processing of the reports might be slow because the data comes from relational database management systems, which are not optimized for reporting only. If you create a semantic model of your data, your end users can create ad-hoc report structures. However, the development is more complex because a developer is needed to create these semantic models.

    For OLAP, you typically use specialized database management systems. You get lightning speed of analyses. End users can use rich and thin clients to interactively change the structure of the report, and typically they do it graphically. However, the development of an OLAP system is often quite complex. It involves the preparation and maintenance of an enterprise data warehouse and OLAP cubes. In order to exploit the possibility of real-time restructuring of reports, the users must be both active and educated. The data is usually stale, as it is loaded into data warehouses and OLAP cubes with a scheduled process.

    With data mining, a structure is not selected in advance; it searches for the structure. As a result, data mining can give you the most valuable results, because you can discover patterns you did not expect. A data mining model structure is limited only by the attributes that you use to train the model. One of the drawbacks is that a lot of knowledge is needed for a successful data mining project. End users have to understand the results. Subject matter experts and IT professionals need to understand the business problem thoroughly. The development might sometimes be even more complex than the development of OLAP cubes.

    Each type of analysis has its own place in an enterprise system. SQL Server has tools for all kinds of analyses. However, data mining is the most advanced way of analyzing the data; this is the "I" in BI. In order to get the most out of it, you need to learn quite a lot. In this blog post, I am gathering together resources for learning, including forthcoming events.

    Books

      - Multiple authors: SQL Server MVP Deep Dives - I wrote an introductory data mining chapter there.
      - Erik Veerman, Teo Lachev and Dejan Sarka: MCTS Self-Paced Training Kit (Exam 70-448): Microsoft SQL Server 2008 - Business Intelligence Development and Maintenance - you can find a good overview of a complete BI solution, including data mining, in this book.
      - Jamie MacLennan, ZhaoHui Tang, and Bogdan Crivat: Data Mining with Microsoft SQL Server 2008 - can't miss this book if you want to mine your data with SQL Server tools.
      - Michael Berry, Gordon Linoff: Mastering Data Mining: The Art and Science of Customer Relationship Management - data mining from both a business and a technical perspective.
      - Dorian Pyle: Data Preparation for Data Mining - an in-depth book about data preparation.
      - Thomas and Ronald Wonnacott: Introductory Statistics - if you thought that you could get away without statistics, then you are not serious about data mining.
      - Jiawei Han and Micheline Kamber: Data Mining Concepts and Techniques - in-depth explanation of the most popular data mining algorithms.
      - Michael Berry and Gordon Linoff: Data Mining Techniques - another book that explains data mining algorithms, more from a business perspective.
      - Paolo Giudici: Applied Data Mining - a very mathematical book, only if you enjoy statistics and mathematics in general.

    Forthcoming presentations

    I am presenting two data mining related sessions during the PASS Summit in Charlotte, NC:

      - Wednesday, October 16th, 2013 - Fraud Detection: Notes from the Field - I am showing how to use data mining for a specific business problem. The presentation is based on real-life projects.
      - Friday, October 18th: Excel 2013 Advanced Analytics - I am focusing on Excel Data Mining Add-ins, and how to use them together with Power Pivot and other add-ins. This is the most you can get out of Excel.

    Sinergija 2013, Belgrade, Serbia:

      - Tuesday, October 22nd: Excel 2013 Analytics to the Max - another presentation focusing on the most advanced analytics you can get in Excel.

    SQL Rally Amsterdam, Netherlands:

      - Thursday, November 7th: Advanced Analytics in Excel 2013 - and again I am presenting about data mining in Excel.

    Why three different titles for the same presentation? I don't know; I guess I forgot the name I proposed every time, right after I sent the proposal.

    Courses

    Data Mining with SQL Server 2012 - I wrote a 3-day course for SolidQ. If you are interested in this course, which I could also deliver as a shorter seminar, you can contact your closest SolidQ subsidiary, or, of course, me directly at [email protected] or [email protected]. This course could also complement the existing courseware portfolio of training providers, which are welcome to contact me as well.

    OK, now you know: no more excuses. Start learning data mining and get the most out of your data!


  • C# 4.0: Covariance And Contravariance In Generics Made Easy

    - by Paulo Morgado
    In my last post, I went through what variance is in .NET 4.0 and C# 4.0 in a rather theoretical way. Now, I'm going to try to make it a bit more down to earth. Given:

        class Base { }
        class Derived : Base { }

    Such that:

        Trace.Assert(typeof(Base).IsClass && typeof(Derived).IsClass && typeof(Base).IsGreaterOrEqualTo(typeof(Derived)));

    Covariance

        interface ICovariantIn<out T> { }

        Trace.Assert(typeof(ICovariantIn<Base>).IsGreaterOrEqualTo(typeof(ICovariantIn<Derived>)));

    Contravariance

        interface IContravariantIn<in T> { }

        Trace.Assert(typeof(IContravariantIn<Derived>).IsGreaterOrEqualTo(typeof(IContravariantIn<Base>)));

    Invariance

        interface IInvariantIn<T> { }

        Trace.Assert(!typeof(IInvariantIn<Base>).IsGreaterOrEqualTo(typeof(IInvariantIn<Derived>)) && !typeof(IInvariantIn<Derived>).IsGreaterOrEqualTo(typeof(IInvariantIn<Base>)));

    Where:

        public static class TypeExtensions
        {
            public static bool IsGreaterOrEqualTo(this Type self, Type other)
            {
                return self.IsAssignableFrom(other);
            }
        }
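    To make the asserts a little more concrete, here is a small complementary sketch (the implementing classes are mine, not from the original post) showing the assignments that these variance annotations let the compiler accept:

        using System;

        class Base { }
        class Derived : Base { }

        interface ICovariantIn<out T> { }
        interface IContravariantIn<in T> { }

        class CovariantIn<T> : ICovariantIn<T> { }
        class ContravariantIn<T> : IContravariantIn<T> { }

        class VarianceSketch
        {
            static void Main()
            {
                // Covariance: ICovariantIn<Derived> is assignable to ICovariantIn<Base>,
                // which is what typeof(ICovariantIn<Base>).IsAssignableFrom(
                // typeof(ICovariantIn<Derived>)) asserts above.
                ICovariantIn<Base> covariant = new CovariantIn<Derived>();

                // Contravariance: the direction reverses, so IContravariantIn<Base>
                // is assignable to IContravariantIn<Derived>.
                IContravariantIn<Derived> contravariant = new ContravariantIn<Base>();

                Console.WriteLine("{0}, {1}", covariant, contravariant);
            }
        }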


  • Some Original Expressions

    - by Phil Factor
    Guest Editorial for the Simple-Talk newsletter

    In a guest editorial for the Simple-Talk Newsletter, Phil Factor wonders if we are still likely to find some more novel and unexpected ways of using the newer features of Transact SQL: or maybe in some features that have always been there!

    There can be a great deal of fun to be had in trying out recent features of SQL Expressions to see if they provide new functionality. It is surprisingly rare to find things that couldn't be done before, but in a different and more cumbersome way; but it is great to experiment or to read of someone else making that discovery. One such recent feature is the 'table value constructor', or 'VALUES constructor', that managed to get into SQL Server 2008 from Standard SQL. This allows you to create derived tables of up to 1000 rows neatly within select statements that consist of lists of row values. E.g.:

        SELECT Old_Welsh, number
        FROM (VALUES
            ('Un',1),('Dou',2),('Tri',3),('Petuar',4),('Pimp',5),
            ('Chwech',6),('Seith',7),('Wyth',8),('Nau',9),('Dec',10)
            ) AS WelshWordsToTen (Old_Welsh, number)

    These values can be expressions that return single values, including, surprisingly, subqueries. You can use this device to create views, or in the USING clause of a MERGE statement. Joe Celko covered this here and here. It can become extraordinarily handy once one gets into the way of thinking in these terms, and I've rewritten a lot of routines to use the constructor, but the old way of using UNION can be used in the same way; it is just a little slower and more long-winded.

    The use of scalar SQL subqueries as an expression in a VALUES constructor, and then applied to a MERGE, has got me thinking. It looks very clever, but what use could one put it to? I haven't seen anything yet that couldn't be done almost as simply in SQL Server 2000, but I'm hopeful that someone will come up with a way of solving a tricky problem, just in the same way that a freak of the XML syntax forever made the in-line production of delimited lists from an expression easy, or that a weird XML pirouette could do an elegant pivot-table rotation.

    It is in this sort of experimentation where the community of users can make a real contribution. The dissemination of techniques such as the Number, or Tally, table, or the unconventional ways that the UPDATE statement can be used, has been rapid due to articles and blogs. However, there is plenty to be done to explore some of the less obvious features of Transact SQL. Even some of the features introduced into SQL Server 2000 are hardly well-known. Certain operations on data are still awkward to perform in Transact SQL, but we mustn't, I think, be too ready to state that certain things can only be done in the application layer, or using a CLR routine. With the vast array of features in the product, and with the tools that surround it, I feel that there is generally a way of getting tricky things done. Or should we just stick to our lasts and push anything difficult out into procedural code? I'd love to know your views.


  • Some OBI EE Tricks and Tips in the Admin Tool By Gerry Langton

    - by hamsun
    How to set the log level from a Session variable Initialization block

    As we know, it is normal to set the log level non-zero for a particular user when we wish to debug problems. However, sometimes it is inconvenient to go into each user's properties in the Admin tool and update the log level. So I am showing a method which allows the log level to be set for all users via a session initialization block. This is particularly useful for anyone wanting an alternative way to set the log level. The screen shots shown are from the OBIEE 11g SampleApp demo but are applicable to any environment.

    Open the appropriate rpd in on-line mode and navigate to Manage > Variables. Select Session Initialization Blocks, right-click in the white space and create a New Initialization Block. I called the initialization block Set_Loglevel. Now click on 'Edit Data Source' to enter the SQL.

    Choose the 'Use OBI EE Server' option for the SQL. This means that the SQL provided must use tables which have been defined in the Physical layer of the RPD, and whilst there is no need to provide a connection pool, you must work in on-line mode. The SQL can access any of the RPD tables and is purely used to return a value of 2. The 'Test' button confirms that the SQL is valid.

    Next, click on the 'Edit Data Target' button to add the LOGLEVEL variable to the initialization block. Check the 'Enable any user to set the value' option so that this will work for any user. Click OK, and a warning message will display because LOGLEVEL is a system session variable: click 'Yes'. Click 'OK' to save the initialization block, then check in the on-line changes.

    To test that LOGLEVEL has been set, log in to OBIEE using an administrative login (e.g. weblogic) and reload server metadata, either from the Analysis editor or from the Administration > Reload Files and Metadata link. Run a query, then navigate to Administration > Manage Sessions and click 'View Log' for the query just issued (which should be approximately the last in the list). A log file should exist and, with LOGLEVEL set to 2, should include both logical and physical SQL. If more diagnostic information is required, then set LOGLEVEL to a higher value.

    If logging is required only for a particular analysis, then an alternative method can be used directly from the Analysis editor. Edit the analysis for which debugging is required and click on the Advanced tab. Scroll down to the Advanced SQL clauses section and enter the following in the Prefix box:

        SET VARIABLE LOGLEVEL = 2;

    Click the 'Apply SQL' button. The SET VARIABLE statement will now prefix the analysis's logical SQL, so that any time this analysis is run it will produce a log.

    You can find information about training for Oracle BI EE products here or in the OU Learning Paths. Please send me an email at [email protected] if you have any further questions.

    About the Author: Gerry Langton started at Siebel Systems in 1999, working as a technical instructor teaching both Siebel application development and Siebel Analytics (which subsequently became Oracle BI EE). Since 2006 Gerry has worked as Senior Principal Instructor within Oracle University, specialising in Oracle BI EE, Oracle BI Publisher and Oracle Data Warehouse development for BI.


  • Timeout Considerations for Solicit Response

    - by Michael Stephenson
    Background One of the clients I work with had been experiencing some issues for a while surrounding web service timeouts.  It's been a little challenging to work through the problems due to limitations in the diagnostic information available from one of the applications, but I learned some interesting things while troubleshooting the problem which don't seem to have been discussed much in the community so I thought I'd share my findings. In the scenario we have BizTalk trying to make calls to a .net web service which was exposed as a WSE 2 endpoint.  In the process BizTalk will try to make a large number of concurrent web service calls to the application, and the backend application has more than enough infrastructure and capability to handle the load. We have configured the <ConnectionManagement> section of the BizTalk configuration file to support up to 100 concurrent connections from each of our 2 BizTalk send servers to the web servers of the application. The problem we were facing was that the BizTalk side was reporting a significant number of timeouts when calling the web service.   One of the biggest issues was the challenge of being able to correlate a message from BizTalk to the IIS log in the .net application and the custom logs in the application especially when there was a fairly large number of servers hosting the web services.  However the key moment came when we were able to identify a specific call which had taken 40 seconds to execute on the server (yes a long time I know but that's a different story!).  Anyway we were able to identify that this had timed out on the BizTalk side.  Based on the normal 2 minute timeout we knew something unexpected was going on. From here I decided to do some experimentation and I wanted to start outside of BizTalk because my hunch was this was not a BizTalk behaviour but something which was being highlighted by BizTalk because of our large load.     Server-side - Sample Web Service To begin with I created a sample web service.  Nothing special just a vanilla asmx web service hosted in IIS6 on Windows 2003 Standard Edition.  The web service is just a hello world style web service as shown in the below picture.  The only key feature is that the server side web method has a 30 second sleep in it and will trace out some information before and after the thread is set to sleep.      In the configuration for this web service there again is nothing special it's pretty much the most plain simple web service you could build. Client-Side To begin looking at what was happening with our example I created a number of different ways to consume the web service. SoapHttpClientProtocol Example I created a small application which would use a normal proxy generated to call the web service.  It would iterate around a loop and make calls using the begin/end methods so I can do this asynchronously.  I would do a loop of 20 calls with the ConnectionManager configuration section supporting only 5 concurrent connections to the server.     <connectionManagement> <remove address="*"/> <add address = "*" maxconnection = "12" /> <add address = "http://<ServerName>" maxconnection = "5" />                         </connectionManagement> </system.net>     The below picture shows an example of the service calling code, key points are: I have configured the timeout of 40 seconds for the proxy I am using the asynchronous methods on the proxy to call the web service         The Test I would run the client and execute 21 calls to the web service.   
The Results

Below is the client-side trace showing what's happening on the client, and beneath it the web service side trace showing what's happening on the server. Some observations on the results:

- All of the calls were successful from the client's perspective.
- You could see the next call starting on the server as soon as the previous one had completed.
- Calls took significantly longer than 40 seconds from the start of the call to the return. In fact, call 20 took 2 minutes and 30 seconds to execute from the perspective of my code, even though I had set the timeout to 40 seconds.

WSE 2 Sample

In the second example I used exactly the same code to call the web service, with the single exception that I modified the web service proxy to derive from WebServicesClientProtocol, which is part of WSE 2 (using SP3). The picture below shows the basic code. The key points are: I have configured a timeout of 40 seconds for the proxy, and I am using the asynchronous methods on the proxy to call the web service.

The Test

This test would execute 21 calls from the client to the web service.

The Results

The first trace below is from the client side; the second is from the server side. Some observations on the trace results for this scenario:

- With call 4, the server-side trace shows it did not start executing on the server until a number of seconds after the other 4 initial calls had been accepted. I re-ran the test and this happened a couple of times but not on most runs, so at this point I'm putting it down to something unexpected on the development machine, and we will leave this observation out of scope of this article.
- The client-side trace statement executed almost immediately in all cases.
- All calls after the initial few would time out.
- On the client side, the calls that did time out took longer than the 40 seconds we had set as the timeout.
- As calls completed on the server, the next calls started to come through.
- The calls that timed out on the client did actually connect to the server, and their server-side execution completed successfully.

Elaboration on the findings

Based on the above observations I have drawn the sequence diagram below to illustrate conceptually what is happening. Everything except the final web service object is on the client side of the call. In the diagram I've put two notes on the web service proxy to show the two different places where the different base classes seem to start their timeout counters. From the earlier samples we can work out that the timeout counter for the WSE proxy starts before the one for the SoapHttpClientProtocol proxy: the WSE timeout includes the time to get a connection from the pool, whereas the Soap proxy timeout covers just the method execution.

One interesting observation: if we rerun the above sample and increase the number of calls from 21 to 100,000, the WSE sample shows a similar pattern where everything after the first few calls times out on the client as soon as it makes a connection to the server, whereas the Soap proxy happily plugs away and processes all of the calls without a single timeout. I actually left the sample running overnight and this did happen. At this point you are probably thinking the same thoughts I was at the time about the differences in behaviour: which one is right, and why are they different?
I'm not sure there is a definitive answer to this in the documentation, or at least not one that I could find! I think you just have to accept that they are different and that they can have different effects depending on your messaging solution. In many situations this is simply not an issue, because your concurrent requests never reach the point where you end up throttling the web service calls on the client side; however, it is definitely more common with an integration broker such as BizTalk, where you often have high throughput requirements.

Some of the considerations you should make

Based on this behaviour you should be aware of the following:

- In a .NET application making lots of concurrent asynchronous web service calls, your users may think they are experiencing poor performance while you think your web service is working well. The problem could be that the client defaults to 2 connections per remote server, so you should bear this in mind.
- When you are developing a BizTalk solution, or a .NET solution on the WSE 2 stack, you may experience timeouts under load, and throttling the number of connections using the maxconnection element in the configuration file will not help you.
- For an application using WSE 2 or SoapHttpClientProtocol, an expired timeout will not throw an error until after a connection to the server has been made, so you should consider this in your transaction and durability patterns.

Our Work Around

In the short term, for our specific scenario, we know we can handle this simply by increasing our timeout value. There is only a small window, when we get lots of concurrent traffic, that causes this scenario, so we should be able to increase the timeout to take the additional client-side wait into consideration, and on the odd occasion where we do get a timeout the BizTalk send port retry will handle it. What caused our original problem was that, during that short window, we were getting a lot of retries, which significantly increased the load on our send servers and highlighted the issue.

Longer Term Solution

As a longer-term solution this really gives us more ammunition to argue for a migration to WCF. The application we are calling has some factors which limit the protocols we can use, but with WCF we would have more control over the various timeout options, because in WCF you can configure specific parts of the timeout.

Summary

I've had this blog post on my to-do list for ages, but hopefully it will be useful to some people just to understand this behaviour, and possibly to help you with some performance issues of your own. I do not believe there is much documentation in this area, particularly around WSE 2 and ASMX, so that is another bit of ammunition for migrating to WCF. I'll try to do a follow-up post with the sample for WCF to show how this changes things.
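For reference, a sketch of the proxy change the WSE 2 sample describes. This is an assumption about how the generated proxy was modified (the class name is hypothetical); the WSE 2 base class itself, Microsoft.Web.Services2.WebServicesClientProtocol, ships in Microsoft.Web.Services2.dll.

```csharp
// In the generated proxy file, swap the base class from
// System.Web.Services.Protocols.SoapHttpClientProtocol to the WSE 2 class.
// Everything else - the Begin/End methods and the 40-second Timeout set by
// the calling code - stays exactly as in the previous sample.
public class HelloWorldService : Microsoft.Web.Services2.WebServicesClientProtocol
{
    // ... generated Begin/End web method wrappers unchanged ...
}
```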

    Read the article

  • Handling Indirection and keeping layers of method calls, objects, and even xml files straight

    - by Cervo
How do you keep everything straight as you trace deeply into a piece of software through multiple method calls, object constructors, object factories, and even Spring wiring? I find that 4 or 5 method calls are easy to keep in my head, but once you are 8 or 9 calls deep it gets hard to keep track of everything. Are there strategies for keeping everything straight? In particular, I might be looking for how to do task X, but then as I trace down (or up) I lose track of that goal; or I find multiple layers need changes, but then I lose track of which changes as I trace all the way down; or I have tentative plans that I discover are not valid, but then during the tracing I forget that a plan is invalid and waste time considering it all over again. Is there software that might help? grep and even Eclipse can help me do the actual tracing from a call to its definition, but I'm more worried about keeping track of everything, including the de facto plan for what has to change (which might vary as you go down or up and realize the prior plan was poor). In the past I have dealt with a few big methods that you could trace and pretty much figure out within a few calls. But now there are dozens of really tiny methods, many just a single call to another method or constructor, and it is hard to keep track of them all.

    Read the article

  • Cloud Backup: Getting the Users' Backs Up

    - by Tony Davis
On Wednesday last week, Microsoft announced that as of July 1, all data transfers into its Microsoft Azure cloud will be free (though you have to pay for transferring data out). On Thursday last week, SQL Azure in Western Europe went down. It was a relatively short outage, but since SQL Azure currently provides no easy way to take a standard backup of a database and store it locally, many people had no recourse but to wait patiently for their cloud-based app to resume. It seems that Microsoft are very keen to encourage developers to move their data onto their cloud, but are developers ready to do it, given that such basic backup capabilities are lacking?

Recently on Simple-Talk, Mike Mooney described a perfect use case for the Microsoft Cloud. They had a simple web-based application with a SQL Server backend; they could move the application to Windows Azure and the data into SQL Azure, and in the process free themselves from much of the hassle surrounding management and scaling of the hardware, network and so on. It was a great fit, and yet it nearly didn't happen; lack of support for the BACKUP command almost proved a show-stopper. Of course, backups of Azure databases are always taken automatically, for disaster recovery purposes, but these are strictly on-cloud copies and, as of now, it is not possible to use them to restore a database to a particular point in time. It seems that none of those clever Microsoft people managed to predict the need to perform basic backups of Azure databases so that copies could be stored locally, outside the Azure universe. At the very least, as Mike points out, performing a local backup before a new deployment is more or less mandatory.

Microsoft did at least note the sound of gnashing teeth and, as a stop-gap measure, offered SQL Azure Database Copy, which basically allows you to create an online clone of your database, but this doesn't allow for storing local archives of the data. To that end MS has provided SQL Azure Import/Export, to package up and export a database and its data using BACPACs. These BACPACs do not guarantee transactional consistency; for example, if a child table is modified after the parent is copied, then the copied database will be in an inconsistent state (meaning, to add to the fun, that BACPACs need to be created from a database copy). In any event, widespread problems with BACPAC's evil cousin, the DACPAC, have been well documented, and it seems likely that many will also give BACPAC the bum's rush.

Finally, in a TechEd 2011 presentation tagged "SQL Azure Advanced Administration", it was announced that "backup and restore" were coming in the next SQL Azure CTP. And yet this still doesn't mean that we'll get simple backups as DBAs know and love them. What it does mean, at least, is the ability to restore any given database to a point in time within a 2-week window. For the time being, if you want a local copy of your data and don't want to brave the BACPAC, you are left with SSIS or BCP, creative use of schema and data comparison tools, or use of SQL Azure Backup (currently in beta) in order to perform this simple but vital task. Cheers, Tony.
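As an aside, the Database Copy feature mentioned above is driven by a single T-SQL statement. The sketch below uses the documented SQL Azure syntax of the time, though the server and database names are hypothetical:

```sql
-- Creates an online, transactionally consistent clone of a SQL Azure database.
-- Run against the master database of the destination server.
CREATE DATABASE MyDb_Copy AS COPY OF MyAzureServer.MyDb;

-- Progress can be monitored while the copy runs:
SELECT * FROM sys.dm_database_copies;
```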

    Read the article

  • Ruby on Rails: How to sanitize a string for SQL when not using find?

    - by williamjones
I'm trying to sanitize a string that involves user input without resorting to manually crafting my own, possibly buggy, regex if I can avoid it; however, if that is the only way, I would also appreciate it if anyone could point me toward a regex that is unlikely to miss anything. There are a number of methods in Rails that allow you to issue native SQL commands; how do people escape user input for those? The question I'm asking is a broad one, but in my particular case I'm working with a column in my Postgres database that Rails does not natively understand as far as I know: the tsvector, which holds plain-text search information. Rails is able to write and read from it as if it were a string; however, unlike a string, it doesn't seem to be automatically escaped when I do things like vector= inside the model. For example, when I do model.name='::', where name is a string, it works fine. When I do model.vector='::' it errors out: ActiveRecord::StatementInvalid: PGError: ERROR: syntax error in tsvector: "::" "vectors" = E'::' WHERE "id" = 1. This seems to be caused by a lack of escaping of the colons, and I can manually set vector='\:\:' fine. I also had the bright idea that maybe I could just call something like: ActiveRecord::Base.connection.execute "UPDATE medias SET vectors = ? WHERE id = 1", "::". However, this syntax doesn't work, because the raw SQL commands don't have access to find's mechanism for escaping and interpolating strings via the ? mark. This strikes me as the same problem as calling connection.execute with any kind of user input, since it all boils down to sanitizing the strings, but I can't seem to find any way to manually call Rails' SQL string sanitization methods. Can anyone provide any advice?
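A sketch of one avenue worth checking, not a tested answer: Rails does expose its quoting through the connection adapter, and the protected sanitize_sql_array helper can be reached with send (both assumed here to behave as in Rails 2/3-era ActiveRecord). Note that SQL quoting alone will not fix the tsvector error above, which comes from Postgres parsing the tsvector literal itself.

```ruby
# Sketch: calling Rails' own sanitization by hand (assumes Rails 2/3-era
# ActiveRecord; verify against your version). Quoting protects against SQL
# injection but does not escape tsvector-specific syntax such as "::".
quoted = ActiveRecord::Base.connection.quote('::')   # => "'::'"
ActiveRecord::Base.connection.execute(
  "UPDATE medias SET vectors = #{quoted} WHERE id = 1"
)

# sanitize_sql_array is protected on ActiveRecord::Base, hence the send:
sql = ActiveRecord::Base.send(:sanitize_sql_array,
                              ["UPDATE medias SET vectors = ? WHERE id = 1", '::'])
ActiveRecord::Base.connection.execute(sql)
```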

    Read the article

  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
Please bear with me as I explain the problem, how I tried to solve it, and my question on how to improve it at the end. I have a 100,000-line csv file from an offline batch job and I needed to insert it into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be trivially handled by just munging the CSV file to fit the schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, where each subsequent model depends on the previous one. For example:

Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

So the models must be inserted in the order A, B, C. I came up with a producer/consumer approach:

- instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads, each with a thread-local connection to the database
- read a line from the file using the csv DictReader
- enqueue the dictionary to the process, where each thread creates the appropriate models by querying for the right values and persists the models in the appropriate order

This was faster than a non-threaded read/persist, but it is way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it as SQL statements; that took 5 minutes to run, though writing the SQL statements took me a couple of hours. So my question is, could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my follow-up question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just use a bulk load into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
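Two techniques that might have helped are sketched below. Both are assumptions about what would fit this workload rather than a tested answer; the table and column names are made up, and the literal_binds option assumes SQLAlchemy 0.9 or later.

```python
# Sketch: (1) executemany-style bulk insert through SQLAlchemy Core, and
# (2) rendering an INSERT with inlined literals so statements can be dumped
# to a .sql file for an offline bulk load. Names are hypothetical.
from sqlalchemy import create_engine, MetaData, Table, Column, Integer, String
from sqlalchemy.dialects import postgresql

engine = create_engine("postgresql://user:secret@localhost/mydb")
metadata = MetaData()
model_a = Table("model_a", metadata,
                Column("id", Integer, primary_key=True),
                Column("name", String))

rows = [{"name": "row-%d" % i} for i in range(100000)]

# (1) One executemany round trip per batch instead of per-object ORM flushes.
with engine.begin() as conn:
    conn.execute(model_a.insert(), rows)

# (2) Compile a statement with literal values and write it out for bulk load.
stmt = model_a.insert().values(name="example")
compiled = stmt.compile(dialect=postgresql.dialect(),
                        compile_kwargs={"literal_binds": True})
with open("bulk_load.sql", "w") as f:
    f.write(str(compiled) + ";\n")
```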

    Read the article

  • Script throwing unexpected operator when using mysqldump

    - by Astron
A portion of a script I use to back up MySQL databases stopped working correctly after upgrading a Debian box to 6.0 Squeeze. I have tested the backup code via the CLI and it works fine. I believe the problem is in the selection of the databases before the backup occurs, possibly something to do with the $skipdb variable. If there is a better way to perform this function, I'm willing to try something new. Any insight would be greatly appreciated.

    $ sudo ./script.sh
    [: 138: information_schema: unexpected operator
    [: 138: -1: unexpected operator
    [: 138: mysql: unexpected operator
    [: 138: -1: unexpected operator

Using bash -x script, here is one of the iterations:

    + for db in '$DBS'
    + skipdb=-1
    + '[' test '!=' '' ']'
    + for i in '$IGGY'
    + '[' mysql == test ']'
    + :
    + '[' -1 == -1 ']'
    ++ /bin/date +%F
    + FILE=/backups/hostname.2011-03-20.mysql.mysql.tar.gz
    + '[' no = yes ']'
    + /usr/bin/mysqldump --single-transaction -u root -h localhost '-ppassword' mysql
    + /bin/tar -czvf /backups/hostname.2011-03-20.mysql.mysql.tar.gz mysql.sql
    mysql.sql
    + rm -f mysql.sql

Here is the code:

    if [ $MYSQL_UP = "yes" ]; then
        echo "MySQL DUMP" >> /tmp/update.log
        echo "--------------------------------" >> /tmp/update.log
        DBS="$($MYSQL -u $MyUSER -h $MyHOST -p"$MyPASS" -Bse 'show databases')"
        for db in $DBS
        do
            skipdb=-1
            if [ "$IGGY" != "" ] ; then
                for i in $IGGY
                do
                    [ "$db" == "$i" ] && skipdb=1 || :
                done
            fi
            if [ "$skipdb" == "-1" ] ; then
                FILE="$DEST$HOST.`$DATE +"%F"`.$db.mysql.tar.gz"
                if [ $ENCRYPT = "yes" ]; then
                    $MYSQLDUMP -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf - $db.sql | $OPENSSL enc -aes-256-cbc -salt -out $FILE.enc -k $ENC_PASS && rm -f $db.sql
                else
                    $MYSQLDUMP --single-transaction -u $MyUSER -h $MyHOST -p"$MyPASS" $db > $db.sql && $TAR -czvf $FILE $db.sql && rm -f $db.sql
                fi
            fi
        done
    fi
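A plausible cause, offered as an assumption to verify rather than a confirmed diagnosis: on Debian 6.0, /bin/sh points to dash rather than bash, and dash's [ builtin does not support the bash-only == operator, which matches the "unexpected operator" messages above (the script presumably runs under a #!/bin/sh shebang, while testing with bash -x masks the problem). A minimal sketch of the portable fix:

```sh
# POSIX sh (dash-compatible): [ uses = for string equality, not ==
[ "$db" = "$i" ] && skipdb=1

if [ "$skipdb" = "-1" ]; then
    echo "backing up $db"
fi
```

Alternatively, a #!/bin/bash shebang (or invoking the script with bash explicitly) would restore the bash-only behaviour.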

    Read the article

< Previous Page | 699 700 701 702 703 704 705 706 707 708 709 710  | Next Page >