Search Results

Search found 4836 results on 194 pages for 'execution plans'.

Page 10/194 | < Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >

  • chroot for insecure program execution

    - by attwad
    Hi, I have never set up a chroot-jailed environment before and I'm afraid I need some help to do it well. To explain briefly what this is all about: I have a webserver to which users send Python scripts that process various files stored on the server (the system is for research purposes). Every day a cron job starts the execution of the uploaded scripts via a command of this kind: /usr/bin/python script_file.py. All of this is really insecure, and I would like to create a jail into which I would copy the necessary files (uploaded scripts, files to process, the Python binary and its dependencies). I have already looked at various utilities for creating jails, but none of them seemed up to date or they lacked solid documentation (i.e. the links proposed in "How can I run an untrusted python script"). Could anyone guide me to a viable solution to my problem, like a working example of a script that creates a jail, puts some files in it and executes a Python script? Thank you very much.
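
    A minimal sketch of the kind of jail-builder script being asked for, assuming a Debian-style filesystem layout and GNU coreutils' chroot --userspec; the jail path, Python version and user are placeholders, not a vetted security setup:

        #!/bin/sh
        # Build a throwaway jail, copy the interpreter plus its shared libraries,
        # then run the uploaded script as an unprivileged user.
        JAIL=/srv/pyjail
        mkdir -p "$JAIL/usr/bin" "$JAIL/usr/lib" "$JAIL/work"
        cp /usr/bin/python "$JAIL/usr/bin/"
        # copy every shared library the interpreter links against
        for lib in $(ldd /usr/bin/python | grep -o '/[^ ]*'); do
            mkdir -p "$JAIL$(dirname "$lib")"
            cp "$lib" "$JAIL$lib"
        done
        cp -r /usr/lib/python2.6 "$JAIL/usr/lib/"   # stdlib; version is an assumption
        cp script_file.py "$JAIL/work/"
        chroot --userspec=nobody:nogroup "$JAIL" /usr/bin/python /work/script_file.py

    Note that a chroot alone limits filesystem visibility but is not a hard security boundary, especially against anything running as root.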

    Read the article

  • Does changing window.location stop execution of javascript?

    - by Aleksander Kmetec
    When writing server-side code you need to explicitly stop execution after sending a "Location: ..." header to the client, or your code will continue to execute in the background. But what about when you change window.location in a client-side script? Does this immediately stop execution of the current script, or is it up to the programmer to make sure that any code located after this call is never reached?
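
    A hedged illustration of the behaviour in question: in browsers I am aware of, assigning to window.location does not interrupt the script synchronously; navigation begins only after the current script yields, so code after the assignment should be guarded explicitly (the URL below is a placeholder):

        function redirect() {
            window.location.href = "http://example.com/next";
            // This line still runs: the navigation has not happened yet.
            console.log("still executing");
            return; // bail out by hand if the code below must not run
        }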

    Read the article

  • Execution time of ALTER COLUMN

    - by Tommy Jakobsen
    Having a table with 60 columns and 200 rows, altering a BIT column from NULL to NOT NULL now has a running execution time of over 3 hours. Why is this taking so long? This is the query that I'm executing: ALTER TABLE tbl ALTER COLUMN col BIT NOT NULL. Is there a faster way to do it, besides creating a new column, updating it with values from the old column, then dropping the old column and renaming the new one? This is on MS SQL Server 2005.
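
    For reference, a sketch of the add/update/drop/rename workaround the question mentions (table, column and constraint names are placeholders):

        ALTER TABLE tbl ADD col_new BIT NOT NULL CONSTRAINT DF_tbl_col_new DEFAULT 0;
        UPDATE tbl SET col_new = ISNULL(col, 0);
        ALTER TABLE tbl DROP COLUMN col;
        EXEC sp_rename 'tbl.col_new', 'col', 'COLUMN';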

    Read the article

  • Disabling script max-execution-time in flex?

    - by Stefan Kendall
    How do I completely disable the max-execution-time for scripts in flex? The configurable max is 60 seconds, but I'm calling off to other interactive processes which will probably run much longer than that. Is there an easy way to disable the maximum script execution time across my entire application?
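
    For context, a hedged note: to my knowledge the limit is a compile-time setting (default-script-limits, taking recursion depth then execution time in seconds), and 60 seconds is a ceiling enforced by the Flash Player, so long-running work normally has to be chunked rather than the limit disabled. Illustrative compiler invocation (file name is a placeholder):

        mxmlc -default-script-limits 1000 60 Main.mxml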

    Read the article

  • continue execution after running some .bat script

    - by senzacionale
    Example build.bat script: start /B webdev.webserver.exe /port:3234 /path:C:\projects\src\XYZWeb /VPATH:/XYZWeb. When the program runs this script, execution stops. How can I continue execution after running this script? The problem is that build.bat never ends and you must close it manually. I looked here http://ss64.com/nt/start.html but found no command to continue executing while webdev.webserver is running.
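
    A hedged sketch of one way this is often handled: launch the server via start with an explicit window title (without /B it detaches into its own window), so the batch file falls through to the next line and can exit while the server keeps running:

        @echo off
        rem the first quoted argument to start is the window title
        start "webdev" webdev.webserver.exe /port:3234 /path:C:\projects\src\XYZWeb /VPATH:/XYZWeb
        echo build continuing...
        exit /b 0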

    Read the article

  • How to time Java program execution speed

    - by George Mic
    Sorry if this sounds like a dumb question, but how do you time the execution of a Java program? I'm not sure what class I should use to do this. I'm kinda looking for something like:

        // Some timer starts here
        for (int i = 0; i < length; i++) {
            // Do something
        }
        // End timer here
        System.out.println("Total execution time: " + totalExecutionTime);

    Thanks
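
    A minimal sketch of the usual approach, using System.nanoTime (which is designed for elapsed-time measurement, unlike System.currentTimeMillis, which tracks wall-clock time):

        long start = System.nanoTime();
        for (int i = 0; i < length; i++) {
            // Do something
        }
        long totalExecutionTime = System.nanoTime() - start;
        System.out.println("Total execution time: " + (totalExecutionTime / 1000000) + " ms");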

    Read the article

  • Resultset (getter/setter class) object not deleting old values on 2nd execution in Swing

    - by user2384525
    I have a summarizeData() method that is called many times to retrieve values. The first execution works fine, but on the second execution the values in the HashMap keep increasing.

        void summarizeData() {
            HashMap outerMap = new HashMap();
            ArrayList list = new ArrayList(dataClass.getData());
            for (int indx = 0; indx < list.size(); indx++) {
                System.out.println("indx : " + indx);
                Resultset rs = (Resultset) list.get(indx);
                if (rs != null) {
                    int id = rs.getTestCaseNumber();
                    if (id > 0) {
                        Object isExists = outerMap.get(id);
                        if (isExists != null) {
                            //System.out.println("found entry so updating");
                            Resultset inRs = (Resultset) isExists;
                            if (inRs != null) {
                                int totExec = inRs.getTestExecution();
                                int totPass = inRs.getTestCasePass();
                                int totFail = inRs.getTestCaseFail();
                                // System.out.println("totE :" + totExec + " totP:" + totPass + " totF:" + totFail);
                                int newRsStat = rs.getTestCasePass();
                                if (newRsStat == 1) {
                                    totPass++;
                                    inRs.setTestCasePass(totPass);
                                } else {
                                    totFail++;
                                    inRs.setTestCaseFail(totFail);
                                }
                                totExec++;
                                // System.out.println("id : " + id + " totPass: " + totPass + " totFail:" + totFail);
                                // System.out.println("key : " + id + " val : " + inRs.getTestCaseNumber() + " " + inRs.getTestCasePass() + " " + inRs.getTestCaseFail());
                                inRs.setTestExecution(totExec);
                                outerMap.put(id, inRs);
                            }
                        } else {
                            // System.out.println("not exist so new entry" + " totE:" + rs.getTestExecution() + " totP:" + rs.getTestCasePass() + " totF:" + rs.getTestCaseFail());
                            outerMap.put(id, rs);
                        }
                    }
                } else {
                    System.out.println("rs null");
                }
            }
        }

    Output at 1st execution:

        indx : 0
        indx : 1
        indx : 2
        indx : 3
        indx : 4
        indx : 5
        indx : 6
        indx : 7
        indx : 8
        indx : 9
        indx : 10 totE :1 totP:1 totF:0
        indx : 11 totE :1 totP:1 totF:0
        indx : 12 totE :1 totP:1 totF:0
        indx : 13 totE :1 totP:1 totF:0
        indx : 14 totE :1 totP:1 totF:0
        indx : 15 totE :1 totP:1 totF:0
        indx : 16 totE :1 totP:1 totF:0
        indx : 17 totE :1 totP:1 totF:0
        indx : 18 totE :1 totP:1 totF:0
        indx : 19 totE :1 totP:1 totF:0

    Output at 2nd execution:

        indx : 0
        indx : 1
        indx : 2
        indx : 3
        indx : 4
        indx : 5
        indx : 6
        indx : 7
        indx : 8
        indx : 9
        indx : 10 totE :2 totP:2 totF:0
        indx : 11 totE :2 totP:2 totF:0
        indx : 12 totE :2 totP:2 totF:0
        indx : 13 totE :2 totP:2 totF:0
        indx : 14 totE :2 totP:2 totF:0
        indx : 15 totE :2 totP:2 totF:0
        indx : 16 totE :2 totP:2 totF:0
        indx : 17 totE :2 totP:2 totF:0
        indx : 18 totE :2 totP:2 totF:0
        indx : 19 totE :2 totP:2 totF:0

    I need the same output on every execution.
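
    A hedged diagnosis with an illustrative fix: dataClass.getData() appears to hand back the same Resultset instances on every call, and the "new entry" branch stores rs itself in the map, which is then mutated via inRs, so the counters survive into the next run. Copying the fields into a fresh object before storing it keeps each execution independent (the setters below are assumed to exist on this getter/setter class):

        // instead of: outerMap.put(id, rs);
        Resultset copy = new Resultset();
        copy.setTestCaseNumber(rs.getTestCaseNumber());   // assumed setter
        copy.setTestExecution(rs.getTestExecution());
        copy.setTestCasePass(rs.getTestCasePass());
        copy.setTestCaseFail(rs.getTestCaseFail());
        outerMap.put(id, copy);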

    Read the article

  • Do table columns increase SELECT statement execution time?

    - by paokg4
    I have 2 tables with the same structure, the same rows and the same data, but the first has more columns (fields). For example, I select the same 3 fields from both of them (SELECT a,b,c FROM mytable1 and then SELECT a,b,c FROM mytable2). I've tried running those queries on 100,000 records (for each table), but in the end I got the same execution time (0.0006 sec). Do you know if the number of columns (and, in the end, the fact that one table is bigger than the other) has anything to do with query execution time?

    Read the article

  • How to implement a timer callback that executes in the same execution context

    - by Waldorf
    Some programming environments, like C++ Builder, have timer components with a callback function that executes in the same execution context as where the timer object was created. I was wondering how to do something similar in plain C++ with threading. Or are there any other ways to have a callback which is periodically called to perform some task and runs in the same execution context as the calling thread?
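
    A minimal sketch in standard C++ (assuming C++11) of one way to get this without any extra thread: the owning thread polls the timer from its own loop, so the callback necessarily runs in that thread's execution context.

        #include <chrono>
        #include <functional>
        #include <iostream>

        class PolledTimer {
            std::chrono::steady_clock::time_point next_;
            std::chrono::milliseconds interval_;
            std::function<void()> callback_;
        public:
            PolledTimer(std::chrono::milliseconds interval, std::function<void()> cb)
                : next_(std::chrono::steady_clock::now() + interval),
                  interval_(interval), callback_(std::move(cb)) {}

            // Call from the thread that owns the timer (e.g. inside its event loop);
            // the callback therefore executes in the caller's context.
            void poll() {
                if (std::chrono::steady_clock::now() >= next_) {
                    next_ += interval_;
                    callback_();
                }
            }
        };

        int main() {
            PolledTimer t(std::chrono::milliseconds(500),
                          [] { std::cout << "tick\n"; });
            for (;;) t.poll();   // stand-in for the owner thread's real loop
        }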

    Read the article

  • Fatal Execution Engine Error on the Windows2008 r2, IIS7.5

    - by user66524
    Hi guys, we are running some ASP.NET (3.5) applications on Windows 2008 R2, IIS 7.5. Recently we got some event log entries we find hard to interpret; we have no idea what is wrong and hope someone can help.

    1. EventID: 1334 (9-1-2011 8:41:57) Error message: An error occurred during a process host idle check. Exception: System.AccessViolationException Message: Attempted to read or write protected memory. This is often an indication that other memory is corrupt. StackTrace: at System.Collections.Hashtable.GetEnumerator() at System.Web.Hosting.ApplicationManager.IsIdle() at System.Web.Hosting.ProcessHost.IsIdle()

    2. EventID: 1023 (9-1-2011 19:44:02) Error message: .NET Runtime version 2.0.50727.4952 - Fatal Execution Engine Error (742B851A) (80131506)

    3. EventID: 1000 (9-1-2011 19:44:03) Error message: Faulting application name: w3wp.exe, version: 7.5.7600.16385, time stamp: 0x4a5bcd2b Faulting module name: mscorwks.dll, version: 2.0.50727.4952, time stamp: 0x4bebd49a Exception code: 0xc0000005 Fault offset: 0x0000c262 Faulting process id: 0x%9 Faulting application start time: 0x%10 Faulting application path: %11 Faulting module path: %12 Report Id: %13

    4. EventID: 5011 (9-1-2011 19:44:03) Error message: A process serving application pool 'AppPoolName' suffered a fatal communication error with the Windows Process Activation Service. The process id was '2552'. The data field contains the error number.

    5. Some info: we got memory.hdmp (234MB) and minidump.mdmp (19.2) from Control Panel > Action Center, but I do not know how to use them :(

    Read the article

  • MySQL query, 2 similar servers, 2 minute difference in execution times

    - by mr12086
    I had a similar question on Stack Overflow, but it seems to be more server/MySQL setup related than coding. The queries below all execute instantly on our development server, whereas they can take up to 2 minutes 20 seconds on production. The query execution time seems to be affected by how ambiguous the LIKE strings are: if they closely match a country that has few matches it will take less time, and if you use something like 'ge' for Germany it will take longer to execute. But this doesn't always work out like that; at times it's quite erratic. "Sending data" appears to be the culprit, but why, and what does that mean? Also, free memory on production looks quite low.

    Production: Intel Quad Xeon E3-1220 3.1GHz, 4GB DDR3, 2x 1TB SATA in RAID1, network speed 100Mb, Ubuntu

    Development: Intel Core i3-2100, 2C/4T, 3.10GHz, 500 GB SATA (no RAID), 4GB DDR3

    UPDATE 2: mysqltuner output:

    [prod]
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.61-0ubuntu0.10.04.1
    [OK] Operating on 64-bit architecture
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 103M (Tables: 180)
    [--] Data in InnoDB tables: 491M (Tables: 19)
    [!!] Total fragmented tables: 38
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 77d 4h 6m 1s (53M q [7.968 qps], 14M conn, TX: 87B, RX: 12B)
    [--] Reads / Writes: 98% / 2%
    [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
    [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
    [OK] Slow queries: 0% (12K/53M)
    [OK] Highest usage of available connections: 22% (34/151)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/10.6M
    [OK] Key buffer hit rate: 98.7% (162M cached / 2M reads)
    [OK] Query cache efficiency: 20.7% (7M cached / 36M selects)
    [!!] Query cache prunes per day: 3934
    [OK] Sorts requiring temporary tables: 1% (3K temp sorts / 230K sorts)
    [!!] Joins performed without indexes: 71068
    [OK] Temporary tables created on disk: 24% (3M on disk / 13M total)
    [OK] Thread cache hit rate: 99% (690 created / 14M connections)
    [!!] Table cache hit rate: 0% (64 open / 85M opened)
    [OK] Open file limit used: 12% (128/1K)
    [OK] Table locks acquired immediately: 99% (16M immediate / 16M locks)
    [!!] InnoDB data size / buffer pool: 491.9M/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Enable the slow query log to troubleshoot bad queries
        Adjust your join queries to always utilize indexes
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
        query_cache_size (> 16M)
        join_buffer_size (> 128.0K, or always use indexes with joins)
        table_cache (> 64)
        innodb_buffer_pool_size (>= 491M)

    [dev]
    -------- General Statistics --------------------------------------------------
    [--] Skipped version check for MySQLTuner script
    [OK] Currently running supported MySQL version 5.1.62-0ubuntu0.11.10.1
    [!!] Switch to 64-bit OS - MySQL cannot currently use all of your RAM
    -------- Storage Engine Statistics -------------------------------------------
    [--] Status: +Archive -BDB -Federated +InnoDB -ISAM -NDBCluster
    [--] Data in MyISAM tables: 185M (Tables: 632)
    [--] Data in InnoDB tables: 967M (Tables: 38)
    [!!] Total fragmented tables: 73
    -------- Security Recommendations -------------------------------------------
    [OK] All database users have passwords assigned
    -------- Performance Metrics -------------------------------------------------
    [--] Up for: 1d 2h 26m 9s (5K q [0.058 qps], 1K conn, TX: 4M, RX: 1M)
    [--] Reads / Writes: 99% / 1%
    [--] Total buffers: 58.0M global + 2.7M per thread (151 max threads)
    [OK] Maximum possible memory usage: 463.8M (11% of installed RAM)
    [OK] Slow queries: 0% (0/5K)
    [OK] Highest usage of available connections: 1% (2/151)
    [OK] Key buffer size / total MyISAM indexes: 16.0M/18.6M
    [OK] Key buffer hit rate: 99.9% (60K cached / 36 reads)
    [OK] Query cache efficiency: 44.5% (1K cached / 2K selects)
    [OK] Query cache prunes per day: 0
    [OK] Sorts requiring temporary tables: 0% (0 temp sorts / 44 sorts)
    [OK] Temporary tables created on disk: 24% (162 on disk / 666 total)
    [OK] Thread cache hit rate: 99% (2 created / 1K connections)
    [!!] Table cache hit rate: 1% (64 open / 4K opened)
    [OK] Open file limit used: 8% (88/1K)
    [OK] Table locks acquired immediately: 100% (1K immediate / 1K locks)
    [!!] InnoDB data size / buffer pool: 967.7M/8.0M
    -------- Recommendations -----------------------------------------------------
    General recommendations:
        Run OPTIMIZE TABLE to defragment tables for better performance
        Enable the slow query log to troubleshoot bad queries
        Increase table_cache gradually to avoid file descriptor limits
    Variables to adjust:
        table_cache (> 64)
        innodb_buffer_pool_size (>= 967M)

    UPDATE 1: When testing the queries listed here there is usually no more than one other query taking place, and usually none. Because production is actually handling Apache requests that development gets very few of (it's only myself and one other person who access it), could the 4GB of RAM be getting exhausted by using a single machine for both Apache and the MySQL server?

    Production:
    sudo hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 24872 MB in 2.00 seconds = 12450.72 MB/sec
    Timing buffered disk reads: 368 MB in 3.00 seconds = 122.49 MB/sec
    sudo hdparm -tT /dev/sdb
    /dev/sdb:
    Timing cached reads: 24786 MB in 2.00 seconds = 12407.22 MB/sec
    Timing buffered disk reads: 350 MB in 3.00 seconds = 116.53 MB/sec
    Server version (MySQL + Ubuntu): 5.1.61-0ubuntu0.10.04.1

    Development:
    sudo hdparm -tT /dev/sda
    /dev/sda:
    Timing cached reads: 10632 MB in 2.00 seconds = 5319.40 MB/sec
    Timing buffered disk reads: 400 MB in 3.01 seconds = 132.85 MB/sec
    Server version (MySQL + Ubuntu): 5.1.62-0ubuntu0.11.10.1

    ORIGINAL DATA: This query is NOT the query in question but is related, so I'll post it.
    SELECT f.form_question_has_answer_id
    FROM form_question_has_answer f
    INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
    INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
    INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
    INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
    INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
    WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
      AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
      AND f.form_question_has_answer_form_id = '174'

    And the explain plan for the above query, run on both dev and production, produces the same plan:

    | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
    | 1  | SIMPLE      | p2    | const  | PRIMARY | PRIMARY | 4 | const | 1 | Using index |
    | 1  | SIMPLE      | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 | const | 796 | Using where |
    | 1  | SIMPLE      | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using index |
    | 1  | SIMPLE      | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
    | 1  | SIMPLE      | f2    | ref    | form_project_id | form_project_id | 4 | const | 15 | Using where |
    | 1  | SIMPLE      | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |

    This query takes 2 minutes ~20 seconds to execute.
    The query that is ACTUALLY being run on the server is this one:

    SELECT COUNT(*) AS num_results
    FROM (SELECT f.form_question_has_answer_id
          FROM form_question_has_answer f
          INNER JOIN project_company_has_user p ON f.form_question_has_answer_user_id = p.project_company_has_user_user_id
          INNER JOIN company c ON p.project_company_has_user_company_id = c.company_id
          INNER JOIN project p2 ON p.project_company_has_user_project_id = p2.project_id
          INNER JOIN user u ON p.project_company_has_user_user_id = u.user_id
          INNER JOIN form f2 ON p.project_company_has_user_project_id = f2.form_project_id
          WHERE (f2.form_template_name = 'custom' AND p.project_company_has_user_garbage_collection = 0 AND p.project_company_has_user_project_id = '29')
            AND (LCASE(c.company_country) LIKE '%ge%' OR LCASE(c.company_country) LIKE '%abcde%')
            AND f.form_question_has_answer_form_id = '174'
          GROUP BY f.form_question_has_answer_id) dctrn_count_query;

    With explain plans (again, the same on dev and production):

    | id | select_type | table | type   | possible_keys | key | key_len | ref | rows | Extra |
    | 1  | PRIMARY     | NULL  | NULL   | NULL | NULL | NULL | NULL | NULL | Select tables optimized away |
    | 2  | DERIVED     | p2    | const  | PRIMARY | PRIMARY | 4 |  | 1 | Using index |
    | 2  | DERIVED     | f     | ref    | form_question_has_answer_form_id,form_question_has_answer_user_id | form_question_has_answer_form_id | 4 |  | 797 | Using where |
    | 2  | DERIVED     | p     | ref    | project_company_has_user_unique_key,project_company_has_user_user_id,project_company_has_user_company_id,project_company_has_user_project_id,project_company_has_user_garbage_collection | project_company_has_user_user_id | 4 | new_klarents.f.form_question_has_answer_user_id | 1 | Using where |
    | 2  | DERIVED     | f2    | ref    | form_project_id | form_project_id | 4 |  | 15 | Using where |
    | 2  | DERIVED     | c     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_company_id | 1 | Using where |
    | 2  | DERIVED     | u     | eq_ref | PRIMARY | PRIMARY | 4 | new_klarents.p.project_company_has_user_user_id | 1 | Using where; Using index |

    On the production server the information I have is as follows.
    Upon execution:

    +-------------+
    | num_results |
    +-------------+
    |           3 |
    +-------------+
    1 row in set (2 min 14.28 sec)

    Show profile:

    | Status                         | Duration   |
    | starting                       | 0.000016   |
    | checking query cache for query | 0.000057   |
    | Opening tables                 | 0.004388   |
    | System lock                    | 0.000003   |
    | Table lock                     | 0.000036   |
    | init                           | 0.000030   |
    | optimizing                     | 0.000016   |
    | statistics                     | 0.000111   |
    | preparing                      | 0.000022   |
    | executing                      | 0.000004   |
    | Sorting result                 | 0.000002   |
    | Sending data                   | 136.213836 |
    | end                            | 0.000007   |
    | query end                      | 0.000002   |
    | freeing items                  | 0.004273   |
    | storing result in query cache  | 0.000010   |
    | logging slow query             | 0.000001   |
    | logging slow query             | 0.000002   |
    | cleaning up                    | 0.000002   |

    On development the results are as follows:

    +-------------+
    | num_results |
    +-------------+
    |           3 |
    +-------------+
    1 row in set (0.08 sec)

    Again the profile for this query:

    | Status                         | Duration |
    | starting                       | 0.000022 |
    | checking query cache for query | 0.000148 |
    | Opening tables                 | 0.000025 |
    | System lock                    | 0.000008 |
    | Table lock                     | 0.000101 |
    | optimizing                     | 0.000035 |
    | statistics                     | 0.001019 |
    | preparing                      | 0.000047 |
    | executing                      | 0.000008 |
    | Sorting result                 | 0.000005 |
    | Sending data                   | 0.086565 |
    | init                           | 0.000015 |
    | optimizing                     | 0.000006 |
    | executing                      | 0.000020 |
    | end                            | 0.000004 |
    | query end                      | 0.000004 |
    | freeing items                  | 0.000028 |
    | storing result in query cache  | 0.000005 |
    | removing tmp table             | 0.000008 |
    | closing tables                 | 0.000008 |
    | logging slow query             | 0.000002 |
    | cleaning up                    | 0.000005 |

    If I remove the user and/or project inner joins, the query time is reduced to 30s.

    Last bit of information I have: the MySQL server and Apache are on the same box; there is only one box for production.

    Production output from top, before & after:

    top - 15:43:25 up 78 days, 12:11, 4 users, load average: 1.42, 0.99, 0.78
    Tasks: 162 total, 2 running, 160 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.1%us, 50.4%sy, 0.0%ni, 49.5%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3772580k used, 265288k free, 243704k buffers
    Swap: 3905528k total, 265384k used, 3640144k free, 1207944k cached

    top - 15:44:31 up 78 days, 12:13, 4 users, load average: 1.94, 1.23, 0.87
    Tasks: 160 total, 2 running, 157 sleeping, 0 stopped, 1 zombie
    Cpu(s): 0.2%us, 50.6%sy, 0.0%ni, 49.3%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3834300k used, 203568k free, 243736k buffers
    Swap: 3905528k total, 265384k used, 3640144k free, 1207804k cached

    But this isn't a good representation of production's normal status, so here is a grab of it from today, outside of executing the queries:

    top - 11:04:58 up 79 days, 7:33, 4 users, load average: 0.39, 0.58, 0.76
    Tasks: 156 total, 1 running, 155 sleeping, 0 stopped, 0 zombie
    Cpu(s): 3.3%us, 2.8%sy, 0.0%ni, 93.9%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4037868k total, 3676136k used, 361732k free, 271480k buffers
    Swap: 3905528k total, 268736k used, 3636792k free, 1063432k cached

    Development (this one doesn't change during or after):
    top - 15:47:07 up 110 days, 22:11, 7 users, load average: 0.17, 0.07, 0.06
    Tasks: 210 total, 2 running, 208 sleeping, 0 stopped, 0 zombie
    Cpu(s): 0.1%us, 0.2%sy, 0.0%ni, 99.7%id, 0.0%wa, 0.0%hi, 0.0%si, 0.0%st
    Mem: 4111972k total, 1821100k used, 2290872k free, 238860k buffers
    Swap: 4183036k total, 66472k used, 4116564k free, 921072k cached
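
    For reference, a hedged sketch of the my.cnf changes the mysqltuner output above is pointing at (values are illustrative placeholders, not tuned recommendations):

        [mysqld]
        innodb_buffer_pool_size = 512M   # prod InnoDB data is 491.9M vs an 8M pool
        table_cache             = 512    # raise gradually from 64
        query_cache_size        = 32M
        join_buffer_size        = 256K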

    Read the article

  • .NET4: In-Process Side-by-Side Execution Explained

    - by emptyset
    Overview: I'm interested in learning more about the .NET4 "In-Process Side-by-Side Execution" of assemblies, and need additional information to help me demystify it. Motivation: The application in question is built against .NET2, and uses two third-party libraries that also work against .NET2. The application is deployed (via file copy) to client machines in a virtual environment that includes .NET2. Not my architecture, please bear with me. Goal: To see if it's possible to rebuild the application assemblies (or a subset) against .NET4, and ship the application as before, without changing the third-party libraries and including the .NET4 Client Profile (as described here) in the deployment. Steps Taken: The following articles were read, but didn't quite provide me enough information: In-Process Side-by-Side Execution: Browsed this article, and Scenario Two is the closest it comes to describing something that resembles my situation, but doesn't really cover it with any depth. ASP.NET Side-by-Side Execution Overview: This article covers a web application, but I'm dealing with a client WinForms application. CLR Team Blog: In-Process Side-by-Side: This is useful to explain how plug-ins to host processes function under .NET4, but I don't know if this applies to the third-party libraries. Further Steps: I'm also unclear on how to proceed upgrading a single .NET2 assembly to .NET4, with the overall application remaining in .NET2 (i.e. how to configure the solution/project files, if any special code needs to be included, etc.).
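
    For reference, a hedged sketch of the app.config startup section the in-process side-by-side articles revolve around; whether it helps with these particular third-party .NET2 libraries is exactly the open question, so treat it as an illustration only:

        <configuration>
          <startup useLegacyV2RuntimeActivationPolicy="true">
            <!-- prefer the 4.0 runtime, but allow 2.0-era assemblies to load in-process -->
            <supportedRuntime version="v4.0" />
            <supportedRuntime version="v2.0.50727" />
          </startup>
        </configuration>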

    Read the article

  • How to avoid shell execution of a Process?

    - by adeel amin
    When I start a process like process = Runtime.getRuntime().exec("gnome-terminal");, it starts a shell in a terminal window. I want to stop that shell execution and instead redirect the I/O of the process; can anybody tell me how to do this? My code is:

        public void start_process() {
            try {
                process = Runtime.getRuntime().exec("gnome-terminal");
                pw = new PrintWriter(process.getOutputStream(), true);
                br = new BufferedReader(new InputStreamReader(process.getInputStream()));
                err = new BufferedReader(new InputStreamReader(process.getErrorStream()));
            } catch (Exception ioe) {
                System.out.println("IO Exception-> " + ioe);
            }
        }

        public void execution_command() {
            if (check == 2) {
                try {
                    boolean flag = thread.isAlive();
                    if (flag == true)
                        thread.stop();
                    Thread.sleep(30);
                    thread = new MyReader(br, tbOutput, err, check);
                    thread.start();
                } catch (Exception ex) {
                    JOptionPane.showMessageDialog(null, ex.getMessage() + "1");
                }
            } else {
                try {
                    Thread.sleep(30);
                    thread = new MyReader(br, tbOutput, err, check);
                    thread.start();
                    check = 2;
                } catch (Exception ex) {
                    JOptionPane.showMessageDialog(null, ex.getMessage() + "1");
                }
            }
        }

        private void jButton3ActionPerformed(java.awt.event.ActionEvent evt) {
            // TODO add your handling code here:
            command = tfCmd.getText().toString().trim();
            pw.println(command);
            execution_command();
        }

    When I enter a command in the text field and press the execute button, nothing is displayed in my output text area. How can I stop the shell execution and redirect input and output?
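
    A hedged aside, as an illustration of the usual pattern: a terminal emulator like gnome-terminal never connects its stdin/stdout to the parent process, so to drive a shell programmatically one normally starts the shell itself via ProcessBuilder (assuming /bin/bash exists on the box; the class name below is a placeholder):

        import java.io.BufferedReader;
        import java.io.InputStreamReader;
        import java.io.PrintWriter;

        public class ShellDemo {
            public static void main(String[] args) throws Exception {
                ProcessBuilder pb = new ProcessBuilder("/bin/bash");
                pb.redirectErrorStream(true);   // merge stderr into stdout
                Process p = pb.start();
                PrintWriter in = new PrintWriter(p.getOutputStream(), true);
                BufferedReader out = new BufferedReader(new InputStreamReader(p.getInputStream()));
                in.println("ls -l");            // send a command to the shell
                in.println("exit");             // let the shell terminate so reads can finish
                String line;
                while ((line = out.readLine()) != null) {
                    System.out.println(line);   // in a GUI, read on a background thread instead
                }
            }
        }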

    Read the article

  • maven scm plugin deleting output folder on every execution

    - by Udo Fholl
    Hi all, I need to check out from 2 different SVN locations into the same output directory, so I configured 2 different executions. But every time a checkout executes, it deletes the output directory, so it also deletes the already downloaded projects. Here is a sample of my pom.xml:

        <profiles>
          <profile>
            <id>checkout</id>
            <activation>
              <property>
                <name>checkout</name>
                <value>true</value>
              </property>
            </activation>
            <build>
              <plugins>
                <plugin>
                  <groupId>org.apache.maven.plugins</groupId>
                  <artifactId>maven-scm-plugin</artifactId>
                  <version>1.3</version>
                  <configuration>
                    <username>${svn.username}</username>
                    <password>${svn.pass}</password>
                    <checkoutDirectory>${path}</checkoutDirectory>
                    <skipCheckoutIfExists />
                  </configuration>
                  <executions>
                    <execution>
                      <id>checkout_a</id>
                      <configuration>
                        <connectionUrl>scm:svn:https://host_n/folder</connectionUrl>
                        <checkoutDirectory>${path}</checkoutDirectory>
                      </configuration>
                      <phase>process-resources</phase>
                      <goals>
                        <goal>checkout</goal>
                      </goals>
                    </execution>
                    <execution>
                      <id>checkout_b</id>
                      <configuration>
                        <connectionUrl>scm:svn:https://host_l/anotherfolder</connectionUrl>
                        <checkoutDirectory>${path}</checkoutDirectory>
                      </configuration>
                      <phase>process-resources</phase>
                      <goals>
                        <goal>checkout</goal>
                      </goals>
                    </execution>
                  </executions>
                </plugin>
              </plugins>
            </build>
          </profile>
        </profiles>

    Is there any way to prevent the executions from deleting the folder ${path}? Thank you.

    Read the article

  • How does C#'s DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Given: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees "5/1/2010" to "5/4/2010." But the web app passes DateTime.Now to the SQL proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains?

    Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would allow the user to get the latest data and always pass the same end date to the SQL proc--"5/5/2010" in this case. Please speak to this as well.

    Sample proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both procs, since the current time is 2010-05-04 15:41:27.
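
    For what it's worth, a hedged client-side sketch of the rounding idea described above (an illustrative fragment: conn is assumed to be an open SqlConnection, and the usual System.Data/System.Data.SqlClient usings are assumed):

        DateTime endDate = DateTime.Today.AddDays(1);   // stable for the whole day
        using (var cmd = new SqlCommand("GetFooData", conn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.AddWithValue("@StartDate", new DateTime(2010, 5, 1));
            cmd.Parameters.AddWithValue("@EndDate", endDate);   // instead of DateTime.Now
        }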

    Read the article

  • How does DateTime.Now affect query plan caching in SQL Server?

    - by Bill Paetzke
    Question: Does passing DateTime.Now as a parameter to a proc prevent SQL Server from caching the query plan? If so, then is the web app missing out on huge performance gains?

    Possible Solution: I thought DateTime.Today.AddDays(1) would be a possible solution. It would pass the same end date to the SQL proc (per day), and the user would still get the latest data. Please speak to this as well.

    Given Example: Let's say we have a stored procedure. It reports data back to a user on a webpage. The user can set a date range. If the user sets today's date as the "end date," which includes today's data, the web app passes DateTime.Now to the SQL proc. Let's say that one user runs a report--5/1/2010 to now--over and over several times. On the webpage, the user sees 5/1/2010 to 5/4/2010. But the web app passes DateTime.Now to the SQL proc as the end date. So, the end date in the proc will always be different, although the user is querying a similar date range. Assume the number of records in the table and the number of users are large, so any performance gains matter. Hence the importance of the question.

    Example proc and execution (if that helps to understand):

        CREATE PROCEDURE GetFooData
            @StartDate datetime,
            @EndDate datetime
        AS
        SELECT *
        FROM Foo
        WHERE LogDate >= @StartDate
          AND LogDate < @EndDate

    Here's a sample execution using DateTime.Now:

        EXEC GetFooData '2010-05-01', '2010-05-04 15:41:27' -- passed in DateTime.Now

    Here's a sample execution using DateTime.Today.AddDays(1):

        EXEC GetFooData '2010-05-01', '2010-05-05' -- passed in DateTime.Today.AddDays(1)

    The same data is returned for both procs, since the current time is 2010-05-04 15:41:27.

    Read the article

  • Change Data Capture Webinar

    I am going to be doing a webinar with our friends at Attunity on Change Data Capture. Attunity have a good story around this technology and you can use it in your SSIS loads to great effect.

    Join Attunity and Konesans/SQLIS for a Webinar on 17 September. Space is limited. Reserve your Webinar seat now at: https://www1.gotomeeting.com/register/693735512

    Want increased efficiency and real-time speed when conducting ETL loads? Need lower implementation costs while minimizing system impact? Learn how change data capture (CDC) technologies can reduce ETL load times. Allan Mitchell, Principal Consultant at Konesans and SQL Server MVP specialising in ETL, will explain CDC concepts and benefits and how CDC can dramatically reduce ETL load times. Ian Archibald, Pre-Sales Director EMEA for Attunity, will present and demonstrate Attunity's award-winning Oracle-CDC for SSIS, a fully-integrated SSIS solution for designing, deploying and managing Oracle CDC processes.

    Title: Change Data Capture - Reducing ETL Load Times
    Date: Thursday, September 17, 2009
    Time: 10:00 AM - 11:00 AM BST

    ABOUT THE SPEAKERS: Allan Mitchell is the joint owner of Konesans Ltd, a UK based consultancy specializing in SQL Server, and most importantly SQL Server Integration Services. Having been working with SQL Server from 6.5 onwards, he has extensive experience in many aspects of SQL Server, but now focuses on the BI suite of tools. He is a SQL Server MVP, a frequent poster on the MS SSIS/DTS newsgroups, and runs the sqldts.com and sqlis.com resource sites. Ian Archibald, Attunity Pre-Sales Director EMEA, has worked in Attunity’s UK office for 17 years. An expert in Attunity solutions, Ian has extensive knowledge of Attunity’s products and data integration & CDC technologies.

    After registering you will receive a confirmation email containing information about joining the Webinar.

    System Requirements
    PC-based attendees - Required: Windows® 2000, XP Home, XP Pro, 2003 Server, Vista
    Macintosh®-based attendees - Required: Mac OS® X 10.4 (Tiger®) or newer

    Read the article

  • Issuing Current Time Increments in StreamInsight (A Practical Example)

    The issuing of a Current Time Increment, Cti, in StreamInsight is very definitely one of the most important concepts to learn if you want your streams to be responsive. A full discussion of how to issue Ctis is beyond the scope of this article, but a very good explanation, in addition to Books Online, can be found in these three articles by a member of the StreamInsight team at Microsoft, Ciprian Gerea.

    Time in StreamInsight series:
    http://blogs.msdn.com/b/streaminsight/archive/2010/07/23/time-in-streaminsight-i.aspx
    http://blogs.msdn.com/b/streaminsight/archive/2010/07/30/time-in-streaminsight-ii.aspx
    http://blogs.msdn.com/b/streaminsight/archive/2010/08/03/time-in-streaminsight-iii.aspx

    A lot of the problems I see with unresponsive or stuck streams on the MSDN forums are to do with how Ctis are enqueued, or in a lot of cases not enqueued. If you enqueue events and never enqueue a Cti then StreamInsight will be perfectly happy. You, on the other hand, will never see data on the output, as you have not told StreamInsight to flush the stream.

    This article deals with a specific implementation problem I had recently whilst working on a StreamInsight project. I look at some possible options and discuss why they would not work before showing the way I solved the problem.

    The stream of data I was dealing with on this project was very bursty; that is to say, when events were flowing they came through very quickly and in large numbers (1000 events/sec), but when the stream calmed down it could be a few seconds between each event.

    When enqueuing events into the StreamInsight engine it is best practice to do so with a StartTime that is given to you by the system producing the event. StreamInsight processes events, and it doesn't matter whether those events are being pushed into the engine by a source system or are being read from something like a flat file in a directory somewhere. You can apply the same logic and temporal algebra to both situations. Reading from a file is an excellent example of where the time of the event on the source itself is very important: we could be reading that file a long time after it was written. Being able to read the StartTime from the events allows us to define windows that will hold the correct sets of events. I was able to do this with my stream, but this is where my problems started.

    Below is a very simple script to create a SQL Server table and populate it with sample data that will show exactly the problem I had.

        CREATE TABLE [dbo].[t] (
            [c1] [int] PRIMARY KEY,
            [c2] [datetime] NULL
        )
        INSERT t VALUES (1,'20100810'),(2,'20100810'),(3,'20100810')

    Column c2 defines the StartTime of the event on the source, and as you can see the value in all 3 rows of data is the same. If we read Ciprian's articles we know that we can define how Ctis get injected into the stream in 3 different places:

    - the stream definition
    - the input factory
    - the input adapter

    I personally have always been a fan of enqueuing Ctis through the factory.
    Below is code typical of what I would use to do this. On the class itself I do some inheriting:

        public class SimpleInputFactory :
            ITypedInputAdapterFactory<SimpleInputConfig>,
            ITypedDeclareAdvanceTimeProperties<SimpleInputConfig>

    And then I implement the following function:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.FromTicks(-1)),
                AdvanceTimePolicy.Adjust);
        }

    The configInfo.CtiFrequency property is a value I pass through to define after how many events I want a Cti to be injected, and this in turn will flush through the stream of data. I usually pass a value of 1 for this setting. The second parameter determines the CTI timestamp in terms of a delay relative to the events: -1 ticks in the past results in 1 tick in the future, i.e., ahead of the event.

    The problem with this method, though, is that if consecutive events have the same StartTime then only one of those events will be enqueued. In this example I use the following to define how I assign the StartTime of my events:

        currEvent.StartTime = (DateTimeOffset)dt.c2;

    If I go ahead and run my StreamInsight process with this configuration I can see on the output adapter that two events have been removed. To see this in a little more depth I can use the StreamInsight debugger and see what happens internally.

    What is happening here is that the first event arrives and a Cti is injected with a time of 1 tick after the StartTime of that event (also the EndTime of the event). The second event arrives and it has a StartTime before the Cti, and even though we specified AdvanceTimePolicy.Adjust on the factory we know that a point event can never be adjusted like this, so the event is dropped. The same happens for the third event as well (the second and third events get trumped by the Cti). For a more detailed discussion of why this happens look here: http://www.sqlis.com/sqlis/post/AdvanceTimePolicy-and-Point-Event-Streams-In-StreamInsight.aspx

    We end up with a single event being pushed into the output adapter, and our result now makes sense.

    The next way I tried to solve this problem was by changing the value of the second parameter to TimeSpan.Zero. Here is how my factory code now looks:

        public AdapterAdvanceTimeSettings DeclareAdvanceTimeProperties<TPayload>(SimpleInputConfig configInfo, EventShape eventShape)
        {
            return new AdapterAdvanceTimeSettings(
                new AdvanceTimeGenerationSettings(configInfo.CtiFrequency, TimeSpan.Zero),
                AdvanceTimePolicy.Adjust);
        }

    What I am doing here is declaring a policy that says inject a Cti together with every event and stamp it with a StartTime that is equal to the start time of the event itself (TimeSpan.Zero). This method has plus points as well as a downside. The upside is that no events will be lost by having the same StartTime as previous events. The downside is that because the Cti is declared with the StartTime of the event itself it does not actually flush that particular event, because in the StreamInsight algebra a Cti commits only those events that occurred strictly before it. To flush the events we need a Cti to be enqueued with a greater StartTime than the events themselves. Here is what happened when I ran this configuration: as you can see, all we got through was the Cti and none of the events. The debugger output shows the stamps on the Cti and the events themselves.
    Because the Cti issued has the same timestamp (StartTime) as the events, none of the events get flushed. I was nearly there but not quite. Because my stream was bursty it was possible that the next event would not come along for a few seconds, and this was far too long for an event to be enqueued and not be flushed to the output adapter. I needed another solution. Two possible solutions crossed my mind, although only one of them made sense when I explored it some more:

    - where multiple events have the same StartTime, I could add 1 tick to the first event, two to the second, three to the third, etc., thereby giving them unique StartTime values;
    - add a timer to manually inject Ctis.

    The problem with the first implementation is that I would be giving the events a new StartTime. This would cause me the following problems:

    - if I want to define windows over the stream, then some events may not be captured in the right windows, and therefore any calculations I did on those windows would be wrong;
    - what would happen if we had 10,000 events with the same StartTime? I would enqueue them with StartTime + n ticks. Along comes a genuine event with a StartTime of the very first event + 1 tick. It is now too far in the past as far as my stream is concerned, and it would be dropped. Not what I would want to do at all.

    I decided then to look at the timer-based solution. I created a timer on my input adapter that elapsed every 200ms:

        private Timer tmr;

        public SimpleInputAdapter(SimpleInputConfig configInfo)
        {
            ctx = new SimpleTimeExtractDataContext(configInfo.ConnectionString);
            this.configInfo = configInfo;
            tmr = new Timer(200);
            tmr.Elapsed += new ElapsedEventHandler(t_Elapsed);
            tmr.Enabled = true;
        }

        void t_Elapsed(object sender, ElapsedEventArgs e)
        {
            ts = DateTime.Now - dtCtiIssued;
            if (ts.TotalMilliseconds >= 200 && TimerIssuedCti == false)
            {
                EnqueueCtiEvent(System.DateTime.Now.AddTicks(-100));
                TimerIssuedCti = true;
            }
        }

    In the t_Elapsed event handler I find out the difference in time between now and when the last event was processed (dtCtiIssued). I then check whether that is greater than or equal to 200ms and whether the last issuing of a Cti was done by the timer or by a genuine event (TimerIssuedCti). If I didn't do this check then I would enqueue a Cti every time the timer elapsed, which is not something I wanted. If the difference between the two times is greater than or equal to 200ms and the last event enqueued was a real event, then I issue a Cti through the timer to flush the event queue; otherwise I do nothing.

    When I enqueue the Ctis into my stream in my ProduceEvents method I also set the values of dtCtiIssued and TimerIssuedCti:

        currEvent = CreateInsertEvent();
        currEvent.StartTime = (DateTimeOffset)dt.c2;
        TimerIssuedCti = false;
        dtCtiIssued = currEvent.StartTime;

    If I go ahead and run this configuration I see the following in my output: the first Cti gets enqueued as before, but then another is enqueued by the timer, and because this has a later timestamp it flushes the enqueued events through the engine.

    Conclusion

    Hopefully this has shown how the enqueuing of Ctis can have a dramatic effect on the responsiveness of your output in StreamInsight. Understanding the temporal nature of the product is for me one of the most important things you can learn. I have attached my solution for the demos. It is all in one project, and testing each variation is a simple matter of commenting and un-commenting the parts in the code we have been dealing with here.

    Read the article

  • StreamInsight and Reactive Framework Challenge

    In his blog post, Roman from the StreamInsight team asked if we could create a Reactive Framework version of what he had done in the post using StreamInsight. For those who don't know, the Reactive Framework, or Rx to its friends, is a library for composing asynchronous and event-based programs using observable collections in the .NET Framework. Yes, there is some overlap between StreamInsight and the Reactive Extensions, but StreamInsight has more flexibility and power in its temporal algebra (windowing, alteration of event headers). Well, here are two alternate ways of doing what Roman did.

    The first example is a mix of StreamInsight and Rx:

        var rnd = new Random();
        var RandomValue = 0;
        var interval = Observable.Interval(TimeSpan.FromMilliseconds((Int32)rnd.Next(500, 3000)))
            .Select(i =>
            {
                RandomValue = rnd.Next(300);
                return RandomValue;
            });

        Server s = Server.Create("Default");
        Microsoft.ComplexEventProcessing.Application a = s.CreateApplication("Rx SI Mischung");

        var inputStream = interval.ToPointStream(a,
            evt => PointEvent.CreateInsert(
                System.DateTime.Now.ToLocalTime(),
                new { RandomValue = evt }),
            AdvanceTimeSettings.IncreasingStartTime,
            "Rx Sample");

        var r = from evt in inputStream
                select new { runningVal = evt.RandomValue };

        foreach (var x in r.ToPointEnumerable().Where(e => e.EventKind != EventKind.Cti))
        {
            Console.WriteLine(x.Payload.ToString());
        }

    This next version, though, uses the Reactive Extensions only:

        var rnd = new Random();
        var RandomValue = 0;
        Observable.Interval(TimeSpan.FromMilliseconds((Int32)rnd.Next(500, 3000)))
            .Select(i =>
            {
                RandomValue = rnd.Next(300);
                return RandomValue;
            })
            .Subscribe(Console.WriteLine, () => Console.WriteLine("Completed"));

        Console.ReadKey();

    These are very simple examples, but both technologies allow us to do a lot more. The ICEPObservable() design pattern was reintroduced in StreamInsight 1.1, and the more I use it the more I like it. It is a very useful pattern when wanting to show StreamInsight samples, as is the IEnumerable() pattern.

    Read the article

  • ... I just avoid GUIDs

    - by Tomaz.tsql
    Our partner was explaining to me that they are using GUIDs as primary keys on all their tables. My immediate reaction was: why? And a couple of basic doubts came to mind:

    - when I read a uniqueidentifier, it tells me absolutely nothing
    - if I use my relational table, I will surely use other columns to get the information out
    - SQL is terrible when setting up a clustered index on GUID columns (and hence performance problems)
    - why not use INT? It will save you space on disk, the optimizer will be able...(read more)

    Read the article

  • Webcast: 12.2.4 Advanced Planning Command Center Enhancements

    - by ChristineS-Oracle
    Webcast: 12.2.4 Advanced Planning Command Center Enhancements
    Date: June 12, 2014 at 11:00 am ET, 10:00 am CT, 9:00 am MT, 8:00 am PT, 8:30 pm India Time (Mumbai, GMT+05:30)

    This advisor webcast helps Functional Users and IT Analysts understand the new features introduced in Advanced Planning Command Center (APCC) as part of the 12.2.4 release. These include custom hierarchies, custom measures, and additional measures such as projected on hand. Other new features include new reports, such as Build Plan and Order Details. The release also includes new integration capabilities between APCC and DRP, and support for Trade Planning in APCC.

    Topics will include:
    - New Feature Introduction
    - Feature Overview and Setup Steps
    - Implementation Tips & Best Practices

    Details & Registration: Doc ID 1670447.1

    Read the article

< Previous Page | 6 7 8 9 10 11 12 13 14 15 16 17  | Next Page >