Search Results

Search found 55281 results on 2212 pages for 'get set'.


  • Importing Excel data into SSIS 2008 using Data Conversion Transformation

    Despite its benefits, SQL Server Integration Services Import Export Wizard has a number of limitations, resulting in part from a new set of rules that eliminate implicit data type conversion mechanisms present in Data Transformation Services. This article discusses a method that addresses such limitations, focusing in particular on importing the content of Excel spreadsheets into SQL Server.

    Read the article

  • Commandline Purge in AS11

    - by Dheeraj Kumar
    AS11 - The B2B offering consists of numerous features that are available via a command-line approach, most of them supplements to the already available user-interface-based approach. One such feature is purging of runtime data. The command-line purge option enables users to purge runtime data based on various criteria. It is an Ant-based command, and it provides the flexibility to selectively set the criteria for purging the runtime data. The command-line option also enables an administrator to purge in bulk, without visiting the B2B UI, and can be used for automation purposes. By default, archival is turned on for the purge activity. As a prerequisite, the respective folder needs to be configured in the database with the proper permissions. When no filename is provided for the archived data, the sysdate is used for the filename. Below are the various options to purge the runtime data:

    Option            | ANT option            | Notes
    Message state     | -Dmsgstate            |
    Date range        | -Dfromdate, -Dtodate  | Format: dd/mm/yyyy hh:mm AM/PM
    Trading partner   | -Dtp                  |
    Direction         | -Ddirection           |
    Message type      | -Dmsgtype             |
    Agreement name    | -Dagreement           |
    IdType/value      | -Didtype, -Didvalue   |
    Archive           | -Darchive             | true/false; true by default
    Archive file name | -Darchivename         | File name (optional); used when archive is set to true

    Note: When using -Darchivename, the value must be a unique file name. An existing file name used with -Darchivename throws an exception.

    Below are a few ant commands showing the various options.

    Purge based on date range and message state:
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dfromdate="19/12/2009 1:04 AM" -Dtodate="19/12/2009 1:05 AM" -Dmsgstate=MSG_COMPLETE -Darchivename="filename.dmp"

    Purge based on direction:
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND"

    Purge based on agreement name:
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dagreement="agreement_name"

    Purge based on trading partner name:
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dtp=GlobalChips

    Purge based on message state:
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Dmsgstate="MSG_COMPLETE"
    ant -f ant-b2b-util.xml b2bpurge -Dmode=RT -Ddirection="OUTBOUND" -Dmsgstate="MSG_COMPLETE"

    Read the article

  • Ubuntu lag when minimizing window

    - by hudadiaz
    Ubuntu (or maybe just Unity) lags whenever I minimize a window when there is no other open window (the others are minimized). I started facing this problem after I installed Ubuntu Tweak 0.7, Conky, OpenJDK 7, and IcedTea 7 (if that matters). There is no problem with: showing and hiding windows, spreading windows, opening and closing windows. I have set the window minimize animation to Fade 220ms and the Animation Time Step to 1 in CCSM, but I honestly don't think this is the problem, because when there is another open window, everything is smooth. Ubuntu 12.04, Dell Inspiron 1525, Intel Core 2 Duo @ 2.00GHz, 4GB RAM.

    Read the article

  • How to deal with cargo-cult programming attitude?

    - by Aivar
    I have some students (in an introductory programming course) who see a programming language as a set of magic spells which must be cast in order to achieve some effect (instead of seeing it as a flexible medium for expressing their ideas for a solution). They tend to copy-paste code from previous similar-sounding assignments without considering the essence of the problem. Can anyone recommend some exercises or analogies to make those students more confident that they can and should understand the structure and meaning of each piece of code they write?

    Read the article

  • TypeScript first impressions

    - by Bertrand Le Roy
    Anders published a video of his new project today, which aims at creating a superset of JavaScript that compiles down to regular current JavaScript. Anders is a tremendously clever guy, and it always shows in his work. There is much to like in the enterprise (good code completion, refactoring, and adoption of the module pattern instead of namespaces, to name three), but a few things made me raise an eyebrow. First, there is no mention of CoffeeScript or Dart, but he does talk briefly about Script# and GWT. This is probably because the target audience seems to be the same as the audience for the latter two, i.e. developers who are more comfortable with statically-typed languages such as C# and Java than with dynamic languages such as JavaScript. I don't think he's aiming at JavaScript developers. Classes and interfaces, although well executed, are not especially appealing. Second, as with any code generation tool (and this is true of CoffeeScript as well), you'd better like the generated code. I didn't, unfortunately. The code that I saw is not the code I would have written. What's more, I didn't always find the TypeScript code especially more expressive than what it gets compiled to. I also have a few questions. Is it possible to duck-type interfaces? For example, if I have an IPoint2D interface with x and y coordinates, can I pass any object that has x and y into a function that expects IPoint2D, or do I necessarily need to create a class that implements that interface and new up an instance that explicitly declares its contract? The appeal of dynamic languages is the ability to make objects as you go. This needs to be kept intact. More technical: why are generated variables and functions prefixed with _ rather than the $ that the ECMAScript spec recommends for machine-generated variables? In conclusion, while this is a good contribution to the set of ideas around JavaScript evolution, I don't expect a lot of adoption outside of devoted Microsoft developers, but maybe some influence on the language itself. But I'm often wrong. I would certainly not use it, because I disagree with the central motivation for doing this: Anders explicitly says he built this because "writing application-scale JavaScript is hard". I would restate that as "writing application-scale JavaScript is hard for people who are used to statically-typed languages". The community has built a set of good practices over the last few years that do scale quite well, and many people are successfully developing and maintaining impressive applications directly in JavaScript. You can play with TypeScript here: http://www.typescriptlang.org
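    To make the duck-typing question concrete, here is a minimal sketch of what I mean, using my hypothetical IPoint2D; whether this snippet type-checks is exactly what I am asking (the names here are mine, not from Anders' samples):

        interface IPoint2D {
            x: number;
            y: number;
        }

        function magnitude(p: IPoint2D): number {
            return Math.sqrt(p.x * p.x + p.y * p.y);
        }

        // An object literal made "as you go": no class, no explicit
        // "implements" declaration. If interfaces are matched structurally,
        // this call compiles; if they are matched nominally, it does not.
        var point = { x: 3, y: 4 };
        console.log(magnitude(point)); // 5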

    Read the article

  • Oracle at ASMC PDI 2012

    - by jeffrey.waterman
    Recently, I had the pleasure of representing Oracle at the American Society of Military Comptrollers National Professional Development Institute (PDI). The PDI is the premier training event for resource managers in the Department of Defense and US Coast Guard. Each year they assemble top presenters and keynote speakers to convey their experiences and share the upcoming goals and vision for the Defense Department's financial and resource management community. This year, the common themes centered around 'auditability' and 'efficiency'. What is auditability? There were many definitions/themes tossed around, but to summarize my notes, it boiled down to:
    - the proper tracking of funds
    - audit readiness
    - proper controls
    - proper documentation
    There were sessions regarding entire programs focused on the need for auditability. For example, FIAR: Financial Improvement and Audit Readiness (http://comptroller.defense.gov/fiar/index.html). The FIAR stresses the "...improve(ment of) the Department's financial processes, controls and information." Throughout the entire conference, one set of solutions kept popping into my head: how can Oracle's solutions assist the Department of Defense, or any other Federal Agency, in improving their financial processes and controls? One answer came to mind: Oracle Governance, Risk, and Compliance Management, commonly referred to as "GRC". Let me summarize the main components of Oracle's GRC solution:
    GRC Manager: This solution is the central repository for documenting business processes, policies, and established controls. All identified risks and issues are documented within the repository, as well as the action plans necessary for mitigation.
    GRC Controls: This solution consists of a set of tools which are embedded within your ERP (financial, human resource, supply chain, etc.) applications to detect, prevent, and/or enforce the policies and procedures established by your Agency. Components of the solution include:
    - Application Access Control Governor: a robust tool for managing application roles and responsibilities; simplifies segregation-of-duty maintenance
    - Configuration Controls Governor: complete audit trail for changes made to configurations
    - Transactions Control Governor: track violations of internal controls; alert management to suspicious activities; be warned when high-dollar transactions are occurring on an irregular basis
    - Preventative Controls Governor: prevent sensitive information from being viewed by unauthorized parties; enforce field, block, and form change control
    If you are in the financial or resource management community and are concerned about auditability within your organization, I suggest you follow up this post by reading about Oracle's GRC solutions: www.oracle.com/grc
    Please feel free to follow up with thoughts and questions in the comments section below. Also, if you have a topic you would like addressed in this blog, just drop me a note at [email protected] or leave the suggestion in the comment section as well. Thank you for reading.

    Read the article

  • Oracle's ZFS Storage Appliance Simulator

    - by Steen Schmidt
    To those of you who have not played with Oracle's ZFS Storage Appliance but would like to: you should go and take a look at the Oracle ZFS Storage Appliance Simulator. You can download the Oracle ZFS Storage Appliance Simulator here. It will give you a pretty good idea of what this unique product is capable of providing for your business. You can also go and see a demo of how to set the appliance up in Oracle VirtualBox here. You can find Oracle VirtualBox here.

    Read the article

  • Severe mysqldump performance degradation using Centos Linux, 8GB PAE and MySQL 5.0.77

    - by Duncan Harris
    We use MySQL 5.0.77 on CentOS 5.5 on VMware:
    Linux dev.ic.soschildrensvillages.org.uk 2.6.18-194.11.4.el5PAE #1 SMP Tue Sep 21 05:48:23 EDT 2010 i686 i686 i386 GNU/Linux
    We have recently upgraded from 4GB RAM to 8GB. When we did this, the time of our overnight mysqldump backup jumped from under 10 minutes to over 2 hours. It also caused unresponsiveness on our Plone-based web site due to database load. The dump uses the optimized mysqldump format and is spooled directly through a socket to another server. Any ideas on what we could do to fix this would be gratefully appreciated. Would a MySQL upgrade help? Anything we can do to the MySQL config? Anything we can do to the Linux config? Or do we have to add another server or go to 64-bit? We ran a previous (non-virtual) server on 6GB PAE and didn't notice a similar issue. That was on the same MySQL version, but CentOS 4.4.

    Server config file:

    [mysqld]
    port=3307
    socket=/tmp/mysql_live.sock
    wait_timeout=31536000
    interactive_timeout=31536000
    datadir=/var/mysql/live/data
    user=mysql
    max_connections = 200
    max_allowed_packet = 64M
    table_cache = 2048
    binlog_cache_size = 128K
    max_heap_table_size = 32M
    sort_buffer_size = 2M
    join_buffer_size = 2M
    lower_case_table_names = 1
    innodb_data_file_path = ibdata1:10M:autoextend
    innodb_buffer_pool_size=1G
    innodb_log_file_size=300M
    innodb_log_buffer_size=8M
    innodb_flush_log_at_trx_commit=1
    innodb_file_per_table

    [mysqldump]
    # Do not buffer the whole result set in memory before writing it to
    # file. Required for dumping very large tables
    quick
    max_allowed_packet = 64M

    [mysqld_safe]
    # Increase the amount of open files allowed per process. Warning: Make
    # sure you have set the global system limit high enough! The high value
    # is required for a large number of opened tables
    open-files-limit = 8192

    Server variables:

    mysql> show variables;
    | Variable_name | Value |
    | auto_increment_increment | 1 |
    | auto_increment_offset | 1 |
    | automatic_sp_privileges | ON |
    | back_log | 50 |
    | basedir | /usr/local/mysql-5.0.77-linux-i686-glibc23/ |
    | binlog_cache_size | 131072 |
    | bulk_insert_buffer_size | 8388608 |
    | character_set_client | latin1 |
    | character_set_connection | latin1 |
    | character_set_database | latin1 |
    | character_set_filesystem | binary |
    | character_set_results | latin1 |
    | character_set_server | latin1 |
    | character_set_system | utf8 |
    | character_sets_dir | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/charsets/ |
    | collation_connection | latin1_swedish_ci |
    | collation_database | latin1_swedish_ci |
    | collation_server | latin1_swedish_ci |
    | completion_type | 0 |
    | concurrent_insert | 1 |
    | connect_timeout | 10 |
    | datadir | /var/mysql/live/data/ |
    | date_format | %Y-%m-%d |
    | datetime_format | %Y-%m-%d %H:%i:%s |
    | default_week_format | 0 |
    | delay_key_write | ON |
    | delayed_insert_limit | 100 |
    | delayed_insert_timeout | 300 |
    | delayed_queue_size | 1000 |
    | div_precision_increment | 4 |
    | keep_files_on_create | OFF |
    | engine_condition_pushdown | OFF |
    | expire_logs_days | 0 |
    | flush | OFF |
    | flush_time | 0 |
    | ft_boolean_syntax | + -><()~*:""&| |
    | ft_max_word_len | 84 |
    | ft_min_word_len | 4 |
    | ft_query_expansion_limit | 20 |
    | ft_stopword_file | (built-in) |
    | group_concat_max_len | 1024 |
    | have_archive | YES |
    | have_bdb | NO |
    | have_blackhole_engine | YES |
    | have_compress | YES |
    | have_crypt | YES |
    | have_csv | YES |
    | have_dynamic_loading | YES |
    | have_example_engine | NO |
    | have_federated_engine | YES |
    | have_geometry | YES |
    | have_innodb | YES |
    | have_isam | NO |
    | have_merge_engine | YES |
    | have_ndbcluster | DISABLED |
    | have_openssl | DISABLED |
    | have_ssl | DISABLED |
    | have_query_cache | YES |
    | have_raid | NO |
    | have_rtree_keys | YES |
    | have_symlink | YES |
    | hostname | app.ic.soschildrensvillages.org.uk |
    | init_connect | |
    | init_file | |
    | init_slave | |
    | innodb_additional_mem_pool_size | 1048576 |
    | innodb_autoextend_increment | 8 |
    | innodb_buffer_pool_awe_mem_mb | 0 |
    | innodb_buffer_pool_size | 1073741824 |
    | innodb_checksums | ON |
    | innodb_commit_concurrency | 0 |
    | innodb_concurrency_tickets | 500 |
    | innodb_data_file_path | ibdata1:10M:autoextend |
    | innodb_data_home_dir | |
    | innodb_adaptive_hash_index | ON |
    | innodb_doublewrite | ON |
    | innodb_fast_shutdown | 1 |
    | innodb_file_io_threads | 4 |
    | innodb_file_per_table | ON |
    | innodb_flush_log_at_trx_commit | 1 |
    | innodb_flush_method | |
    | innodb_force_recovery | 0 |
    | innodb_lock_wait_timeout | 50 |
    | innodb_locks_unsafe_for_binlog | OFF |
    | innodb_log_arch_dir | |
    | innodb_log_archive | OFF |
    | innodb_log_buffer_size | 8388608 |
    | innodb_log_file_size | 314572800 |
    | innodb_log_files_in_group | 2 |
    | innodb_log_group_home_dir | ./ |
    | innodb_max_dirty_pages_pct | 90 |
    | innodb_max_purge_lag | 0 |
    | innodb_mirrored_log_groups | 1 |
    | innodb_open_files | 300 |
    | innodb_rollback_on_timeout | OFF |
    | innodb_support_xa | ON |
    | innodb_sync_spin_loops | 20 |
    | innodb_table_locks | ON |
    | innodb_thread_concurrency | 8 |
    | innodb_thread_sleep_delay | 10000 |
    | interactive_timeout | 31536000 |
    | join_buffer_size | 2097152 |
    | key_buffer_size | 8384512 |
    | key_cache_age_threshold | 300 |
    | key_cache_block_size | 1024 |
    | key_cache_division_limit | 100 |
    | language | /usr/local/mysql-5.0.77-linux-i686-glibc23/share/mysql/english/ |
    | large_files_support | ON |
    | large_page_size | 0 |
    | large_pages | OFF |
    | lc_time_names | en_US |
    | license | GPL |
    | local_infile | ON |
    | locked_in_memory | OFF |
    | log | OFF |
    | log_bin | OFF |
    | log_bin_trust_function_creators | OFF |
    | log_error | |
    | log_queries_not_using_indexes | OFF |
    | log_slave_updates | OFF |
    | log_slow_queries | OFF |
    | log_warnings | 1 |
    | long_query_time | 10 |
    | low_priority_updates | OFF |
    | lower_case_file_system | OFF |
    | lower_case_table_names | 1 |
    | max_allowed_packet | 67108864 |
    | max_binlog_cache_size | 4294963200 |
    | max_binlog_size | 1073741824 |
    | max_connect_errors | 10 |
    | max_connections | 200 |
    | max_delayed_threads | 20 |
    | max_error_count | 64 |
    | max_heap_table_size | 33554432 |
    | max_insert_delayed_threads | 20 |
    | max_join_size | 18446744073709551615 |
    | max_length_for_sort_data | 1024 |
    | max_prepared_stmt_count | 16382 |
    | max_relay_log_size | 0 |
    | max_seeks_for_key | 4294967295 |
    | max_sort_length | 1024 |
    | max_sp_recursion_depth | 0 |
    | max_tmp_tables | 32 |
    | max_user_connections | 0 |
    | max_write_lock_count | 4294967295 |
    | multi_range_count | 256 |
    | myisam_data_pointer_size | 6 |
    | myisam_max_sort_file_size | 2146435072 |
    | myisam_recover_options | OFF |
    | myisam_repair_threads | 1 |
    | myisam_sort_buffer_size | 8388608 |
    | myisam_stats_method | nulls_unequal |
    | ndb_autoincrement_prefetch_sz | 1 |
    | ndb_force_send | ON |
    | ndb_use_exact_count | ON |
    | ndb_use_transactions | ON |
    | ndb_cache_check_time | 0 |
    | ndb_connectstring | |
    | net_buffer_length | 16384 |
    | net_read_timeout | 30 |
    | net_retry_count | 10 |
    | net_write_timeout | 60 |
    | new | OFF |
    | old_passwords | OFF |
    | open_files_limit | 8192 |
    | optimizer_prune_level | 1 |
    | optimizer_search_depth | 62 |
    | pid_file | /var/mysql/live/mysqld.pid |
    | plugin_dir | |
    | port | 3307 |
    | preload_buffer_size | 32768 |
    | profiling | OFF |
    | profiling_history_size | 15 |
    | protocol_version | 10 |
    | query_alloc_block_size | 8192 |
    | query_cache_limit | 1048576 |
    | query_cache_min_res_unit | 4096 |
    | query_cache_size | 0 |
    | query_cache_type | ON |
    | query_cache_wlock_invalidate | OFF |
    | query_prealloc_size | 8192 |
    | range_alloc_block_size | 4096 |
    | read_buffer_size | 131072 |
    | read_only | OFF |
    | read_rnd_buffer_size | 262144 |
    | relay_log | |
    | relay_log_index | |
    | relay_log_info_file | relay-log.info |
    | relay_log_purge | ON |
    | relay_log_space_limit | 0 |
    | rpl_recovery_rank | 0 |
    | secure_auth | OFF |
    | secure_file_priv | |
    | server_id | 0 |
    | skip_external_locking | ON |
    | skip_networking | OFF |
    | skip_show_database | OFF |
    | slave_compressed_protocol | OFF |
    | slave_load_tmpdir | /tmp/ |
    | slave_net_timeout | 3600 |
    | slave_skip_errors | OFF |
    | slave_transaction_retries | 10 |
    | slow_launch_time | 2 |
    | socket | /tmp/mysql_live.sock |
    | sort_buffer_size | 2097152 |
    | sql_big_selects | ON |
    | sql_mode | |
    | sql_notes | ON |
    | sql_warnings | OFF |
    | ssl_ca | |
    | ssl_capath | |
    | ssl_cert | |
    | ssl_cipher | |
    | ssl_key | |
    | storage_engine | MyISAM |
    | sync_binlog | 0 |
    | sync_frm | ON |
    | system_time_zone | GMT |
    | table_cache | 2048 |
    | table_lock_wait_timeout | 50 |
    | table_type | MyISAM |
    | thread_cache_size | 0 |
    | thread_stack | 196608 |
    | time_format | %H:%i:%s |
    | time_zone | SYSTEM |
    | timed_mutexes | OFF |
    | tmp_table_size | 33554432 |
    | tmpdir | /tmp/ |
    | transaction_alloc_block_size | 8192 |
    | transaction_prealloc_size | 4096 |
    | tx_isolation | REPEATABLE-READ |
    | updatable_views_with_limit | YES |
    | version | 5.0.77 |
    | version_comment | MySQL Community Server (GPL) |
    | version_compile_machine | i686 |
    | version_compile_os | pc-linux-gnu |
    | wait_timeout | 31536000 |
    237 rows in set (0.00 sec)
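    For reference, a minimal sketch of the kind of spooled dump described above; the host name, destination path, and option list are hypothetical placeholders, not our actual backup job:

        import { spawn } from "child_process";

        // Sketch: --quick streams rows instead of buffering whole tables in
        // memory (matching the [mysqldump] section above); the output is
        // spooled over ssh to another server rather than written locally.
        const dump = spawn("mysqldump", ["--quick", "--all-databases", "--socket=/tmp/mysql_live.sock"]);
        const remote = spawn("ssh", ["backup-host", "cat > /backup/live.sql"]);
        dump.stdout.pipe(remote.stdin);
        dump.stderr.pipe(process.stderr);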

    Read the article

  • games and rendered surfaces get slow after having computer on for some time

    - by eyecreate
    I have an i5 2500K and an Nvidia 9600 GT with the proprietary driver. For some reason, this setup on 12.04 (and it was the same on 11.10) seems to work fine on boot-up, but after about 30 minutes of use, games and other accelerated surfaces start running at about 5 fps. This is fixed by a reboot, which starts it all over again. Is there some way I can get rid of this? I don't have this problem with other systems running Ubuntu.

    Read the article

  • Benchmarking MySQL Replication with Multi-Threaded Slaves

    - by Mat Keep
    The objective of this benchmark is to measure the performance improvement achieved when enabling the Multi-Threaded Slave enhancement delivered as part of MySQL 5.6. As the results demonstrate, Multi-Threaded Slaves deliver 5x higher replication performance based on a configuration with 10 databases/schemas. For real-world deployments, higher replication performance directly translates to:
    · Improved consistency of reads from slaves (i.e. reduced risk of reading "stale" data)
    · Reduced risk of data loss should the master fail before replicating all events in its binary log (binlog)
    The multi-threaded slave splits processing between worker threads based on schema, allowing updates to be applied in parallel rather than sequentially. This delivers benefits to those workloads that isolate application data using databases - e.g. multi-tenant systems deployed in cloud environments. Multi-Threaded Slaves are just one of many enhancements to replication previewed as part of the MySQL 5.6 Development Release, which include:
    · Global Transaction Identifiers coupled with MySQL utilities for automatic failover/switchover and slave promotion
    · Crash Safe Slaves and Binlog
    · Optimized Row Based Replication
    · Replication Event Checksums
    · Time Delayed Replication
    These and many more are discussed in the "MySQL 5.6 Replication: Enabling the Next Generation of Web & Cloud Services" Developer Zone article. Back to the benchmark - details are as follows.

    Environment
    The test environment consisted of two Linux servers: one running the replication master and one running the replication slave. Only the slave was involved in the actual measurements, and it was based on the following configuration:
    - Hardware: Oracle Sun Fire X4170 M2 Server
    - CPU: 2 sockets, 6 cores with hyper-threading, 2930 MHz
    - OS: 64-bit Oracle Enterprise Linux 6.1
    - Memory: 48 GB

    Test Procedure
    Initial Setup: Two MySQL servers were started on two different hosts, configured as replication master and slave. 10 sysbench schemas were created, each with a single table:
    CREATE TABLE `sbtest` (
       `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
       `k` int(10) unsigned NOT NULL DEFAULT '0',
       `c` char(120) NOT NULL DEFAULT '',
       `pad` char(60) NOT NULL DEFAULT '',
       PRIMARY KEY (`id`),
       KEY `k` (`k`)
    ) ENGINE=InnoDB DEFAULT CHARSET=latin1
    10,000 rows were inserted in each of the 10 tables, for a total of 100,000 rows. When the inserts had replicated to the slave, the slave threads were stopped. The slave data directory was copied to a backup location and the slave threads' position in the master binlog was noted. 10 sysbench clients, each configured with 10 threads, were spawned at the same time to generate a random schema load against each of the 10 schemas on the master. Each sysbench client executed 10,000 "update key" statements:
    UPDATE sbtest SET k=k+1 WHERE id = <random row>
    In total, this generated 100,000 update statements to later replicate during the test itself.
    Test Methodology: The number of slave workers to test with was configured using:
    SET GLOBAL slave_parallel_workers=<workers>
    Then the slave IO thread was started and the test waited for all the update queries to be copied over to the relay log on the slave. The benchmark clock was started and then the slave SQL thread was started. The test waited for the slave SQL thread to finish executing the 100k update queries, doing "select master_pos_wait()". When master_pos_wait() returned, the benchmark clock was stopped and the duration calculated. The calculated duration from the benchmark clock should be close to the time it took the SQL thread to execute the 100,000 update queries. The 100k queries divided by this duration gave the benchmark metric, reported as Queries Per Second (QPS).
    Test Reset: The test-reset cycle was implemented as follows:
    · the slave was stopped
    · the slave data directory was replaced with the previous backup
    · the slave was restarted with the slave threads' replication pointer repositioned to the point before the update queries in the binlog
    The test could then be repeated with an identical set of queries but a different number of slave worker threads, enabling a fair comparison. The test-reset cycle was repeated 3 times for worker counts from 0 to 24, and the QPS metric was calculated and averaged for each worker count.

    MySQL Configuration
    The relevant configuration settings used for MySQL are as follows:
    binlog-format=STATEMENT
    relay-log-info-repository=TABLE
    master-info-repository=TABLE
    As described in the test procedure, the slave_parallel_workers setting was modified as part of the test logic. The consequence of changing this setting is:
    0 worker threads:
       - current (i.e. single-threaded) sequential mode
       - 1 x IO thread and 1 x SQL thread
       - SQL thread both reads and executes the events
    1 worker thread:
       - sequential mode
       - 1 x IO thread, 1 x Coordinator SQL thread and 1 x Worker thread
       - coordinator reads the event and hands it to the worker, who executes it
    2+ worker threads:
       - parallel execution
       - 1 x IO thread, 1 x Coordinator SQL thread and 2+ Worker threads
       - coordinator reads events and hands them to the workers, who execute them

    Results
    Figure 1 below shows that Multi-Threaded Slaves deliver ~5x higher replication performance when configured with 10 worker threads, with the load evenly distributed across our 10 schemas. This result is compared to the current replication implementation, which is based on a single SQL thread only (i.e. zero worker threads).
    Figure 1: 5x Higher Performance with Multi-Threaded Slaves
    The following figure shows more detailed results, with QPS sampled and reported as the worker threads are incremented. The raw numbers behind this graph are reported in the Appendix section of this post.
    Figure 2: Detailed Results
    As the results above show, the configuration does not scale noticeably from 5 to 9 worker threads. When configured with 10 worker threads, however, scalability increases significantly. The conclusion therefore is that it is desirable to configure the same number of worker threads as schemas. Other conclusions from the results:
    · Running with 1 worker compared to zero workers just introduces overhead without the benefit of parallel execution.
    · As expected, having more workers than schemas adds no visible benefit.
    Aside from what is shown in the results above, testing also demonstrated that the following settings had a very positive effect on slave performance:
    relay-log-info-repository=TABLE
    master-info-repository=TABLE
    For 5+ workers, it was up to 2.3 times as fast to run with TABLE compared to FILE.

    Conclusion
    As the results demonstrate, Multi-Threaded Slaves deliver significant performance increases to MySQL replication when handling multiple schemas. This, and the other replication enhancements introduced in MySQL 5.6, are fully available for you to download and evaluate now from the MySQL Developer site (select the Development Release tab). You can learn more about MySQL 5.6 from the documentation. Please don't hesitate to comment on this or other replication blogs with feedback and questions.

    Appendix - Detailed Results
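    For readers who want to reproduce the timing logic, below is a hedged sketch of the benchmark clock described above; the connection details and binlog coordinates are placeholders, and the actual harness may have differed:

        import * as mysql from "mysql2/promise";

        // One timing pass: the relay log already holds the 100k updates, so
        // starting the SQL thread starts the measured work.
        async function timeApply(workers: number, binlogFile: string, binlogPos: number) {
            const slave = await mysql.createConnection({ host: "slave-host", user: "root", password: "secret" });
            await slave.query("SET GLOBAL slave_parallel_workers = " + workers);
            const start = Date.now();                    // benchmark clock starts...
            await slave.query("START SLAVE SQL_THREAD"); // ...as the SQL thread starts
            // Blocks until the SQL thread reaches the master's recorded position.
            await slave.query("SELECT MASTER_POS_WAIT(?, ?)", [binlogFile, binlogPos]);
            const seconds = (Date.now() - start) / 1000;
            console.log(workers + " workers: " + Math.round(100000 / seconds) + " QPS");
            await slave.end();
        }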

    Read the article

  • Can I change a linux swap partition to a file system without losing data?

    - by Diogo
    So I am using Ubuntu 11.10, having just changed from Windows to Ubuntu, and I accidentally set up one of my file systems as a Linux swap partition. Now I want to access my files as I did in Windows, but I do not know any secure way to change that partition back to a file system without formatting it and losing the data, and I do not want to lose the data I have there. Does deleting the partition and mounting it again solve the problem?

    Read the article

  • Ubuntu won't display netbook's native resolution

    - by Daniel
    FYI: My netbook model is HP Mini 210-1004sa, which comes with the Intel Graphics Media Accelerator 3150 and a 10.1" Active Matrix Colour TFT display at 1024 x 600. I recently removed Windows 7 Starter from my netbook and replaced it with Ubuntu 12.10. The problem is the OS doesn't seem to recognise the native display resolution of 1024x600, i.e. the bottom bits of Ubuntu are hidden beneath the screen, and the only 2 available resolutions are the default 1024x768 and 800x600. I've also thought about replacing Ubuntu with Lubuntu or Puppy Linux, as the system does run a bit slow, but I can't, as then I won't be able to access the taskbar and application menu, which will be hidden beneath the screen. Only Ubuntu with Unity is currently usable, as the Unity Launcher is visible enough. I was able to define a custom resolution of 1024x600 using the Q&A: How set my monitor resolution? but when I set that resolution, a black band appears at the top of the screen and the desktop area is lowered, with bits of it hidden beneath the screen. I tried leaving it at this new resolution and restarting the system to see if the black band would disappear and the display would fit correctly, but it gets reset to 1024x768 at startup and displays the following error:
    Could not apply the stored configuration for monitors
    none of the selected modes were compatible with the possible modes:
    Trying modes for CRTC 63
    CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 63: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
    CRTC 63: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)
    Trying modes for CRTC 64
    CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 0)
    CRTC 64: trying mode 1024x768@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 800x600@60Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 800x600@56Hz with output at 1024x600@60Hz (pass 1)
    CRTC 64: trying mode 640x480@60Hz with output at 1024x600@60Hz (pass 1)

    Read the article

  • How can I tell if I am out of inotify watches?

    - by Jorge Castro
    I use an application that consumes inotify watches. I've already set fs.inotify.max_user_watches=32768 in /etc/sysctl.conf, but last night the application stopped indexing unless I ran it manually, which leads me to suspect I am out of watches. Since I don't know what the trade-off is when I increase this number (does it consume more RAM?), I don't want to just raise it blindly, so I'd like to know if there's a way I can tell whether all these watches are in use, and what the trade-offs of increasing the limit might be.
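    One approach (a sketch, not a supported interface): every process that uses inotify holds file descriptors whose /proc link targets read anon_inode:inotify, so scanning /proc gives a rough census. Note this counts inotify instances rather than individual watches; on newer kernels, per-instance watch counts can also be read from /proc/<pid>/fdinfo/<fd>. As for the trade-off, each watch pins a small amount of kernel memory (on the order of a kilobyte), which is why the limit exists.

        import * as fs from "fs";

        // Rough census of inotify instances via /proc (sketch; needs enough
        // privileges to read other users' fd links).
        let instances = 0;
        for (const pid of fs.readdirSync("/proc").filter((d) => /^\d+$/.test(d))) {
            let fds: string[] = [];
            try { fds = fs.readdirSync("/proc/" + pid + "/fd"); } catch (e) { continue; }
            for (const fd of fds) {
                try {
                    if (fs.readlinkSync("/proc/" + pid + "/fd/" + fd) === "anon_inode:inotify") instances++;
                } catch (e) { /* process or fd went away; ignore */ }
            }
        }
        console.log("inotify instances in use: " + instances);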

    Read the article

  • If I use my own normal values, should I turn off winding order culling?

    - by Phil
    I've discovered that I managed to program a series of boxes with indexed vertices in such a way that every other triangle (Half of each face) has a backwards winding order. As a result, XNA is culling half of them. However, my Vertex objects contain normal data that I have explicitly set, and I am going to implement my own backface culling shortly to reduce the size of the VertexBuffer. Should I turn off winding order culling and manage it myself, or should I make sure the winding order is consistent and let XNA handle it?
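    For reference, the consistency check itself is cheap. Here is an illustrative sketch (plain vector math, not XNA API) of testing whether a triangle's winding agrees with its stored normal, so offending triangles can be found and fixed:

        type Vec3 = { x: number; y: number; z: number };

        const sub = (a: Vec3, b: Vec3): Vec3 => ({ x: a.x - b.x, y: a.y - b.y, z: a.z - b.z });
        const cross = (a: Vec3, b: Vec3): Vec3 => ({
            x: a.y * b.z - a.z * b.y,
            y: a.z * b.x - a.x * b.z,
            z: a.x * b.y - a.y * b.x,
        });
        const dot = (a: Vec3, b: Vec3): number => a.x * b.x + a.y * b.y + a.z * b.z;

        function windingMatchesNormal(v0: Vec3, v1: Vec3, v2: Vec3, normal: Vec3): boolean {
            // The cross product of the two edges gives the normal implied by
            // the index order; it should point the same way as the authored normal.
            const implied = cross(sub(v1, v0), sub(v2, v0));
            return dot(implied, normal) > 0;
        }

    Swapping a failing triangle's second and third indices reverses its winding without touching the vertex or normal data, which is one way to make the data consistent and let XNA keep culling.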

    Read the article

  • SQL Server v.Next (Denali) : Another SSMS bug that should be fixed

    - by AaronBertrand
    Sorry to call this out in a separate post (I talked about a bunch of SSMS Connect items the other day), but Aaron Nelson ( blog | twitter ) jogged my memory today about an issue that has gone unfixed for years: the custom coloring for Registered Servers is neither consistent nor global. For one of my servers, I've chosen a red color to show in the status bar. Let's pretend this is a production server, and I want the red to remind me to use caution. I can set this up by right-clicking a Registered...(read more)

    Read the article

  • How to: Show wait cursor in managed and native code

    - by TechTwaddle
    Someone on the MSDN forum asked about how to show a wait cursor, like when your application is loading or performing some (background) task. It's pretty simple to show the wait cursor in both managed and native code, and in this post we will see just how.

    Managed Code (C#)
    Set Cursor.Current to Cursors.WaitCursor, and call Cursor.Show(). And to come back to the normal cursor, set Cursor.Current to Cursors.Default and call Show() again. Below is a button handler for a sample app that I made (watch the video below):

    private void button1_Click(object sender, EventArgs e)
    {
        lblProgress.Text = "Downloading ether...";
        lblProgress.Update();
        Cursor.Current = Cursors.WaitCursor;
        Cursor.Show();
        //do some processing
        for (int i = 0; i < 50; i++)
        {
            progressBar1.Value = 2 * (i + 1);
            Thread.Sleep(100);
        }
        Cursor.Current = Cursors.Default;
        Cursor.Show();
        lblProgress.Text = "Download complete.";
        lblProgress.Update();
    }

    Native Code
    In native code, call SetCursor(LoadCursor(NULL, IDC_WAIT)); to show the wait cursor, and SetCursor(LoadCursor(NULL, IDC_ARROW)); to come back to normal. The same button handler for the native version of the app is below:

    case IDC_BUTTON_DOWNLOAD:
        {
            HWND temp;
            temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
            SetWindowText(temp, L"Downloading ether...");
            UpdateWindow(temp);
            SetCursor(LoadCursor(NULL, IDC_WAIT));
            temp = GetDlgItem(hDlg, IDC_PROGRESSBAR);
            for (int i=0; i<50; i++)
            {
                SendMessage(temp, PBM_SETPOS, (i+1)*2, 0);
                Sleep(100);
            }
            SetCursor(LoadCursor(NULL, IDC_ARROW));
            temp = GetDlgItem(hDlg, IDC_STATIC_PROGRESS);
            SetWindowText(temp, L"Download Complete.");
            UpdateWindow(temp);
        }
        break;

    Here is a video of the sample app running: first the managed version is deployed, and then the native version.

    Read the article

  • error during installation

    - by ratnesh
    At the time of installing Ubuntu 10.04.4 from an ISO inside Windows 7 on an HP laptop, I got this error, so please tell me how this error occurred:
    error executing command
    command=c:\windows\system32\bcdedit.exe /set {77cbb5c5-2fbf-11e0-820a-f8b4fbbabc97} device partition=f:
    retval=1
    stderr=An error has occurred setting the element data. The request is not supported.
    stdout=

    Read the article

  • Dependency Checker/ Installer With Java/Ant

    - by jsn
    I need some kind of software to easily roll out code on new servers. I use Apache Ant for builds. However, say I want to set up a new server fast and my Java program depends on GhostScript: is there any software that can automatically check the computer for it (and then maybe the PATH) and add it if it is not there? I have already looked at Maven and Apache Ivy; however, I think these are only for .jar files (from what I saw). Thanks for any help.
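    Absent an off-the-shelf tool, a small pre-flight script may be enough. Here is a hedged sketch; the executable names and version flags are illustrative and platform-dependent (e.g. GhostScript is typically gs on Linux but gswin64c on Windows):

        import { execFileSync } from "child_process";

        // Hypothetical pre-flight check: try to run each required tool and
        // report anything missing before the rollout proceeds.
        const required: [string, string][] = [["gs", "--version"], ["java", "-version"]];
        const missing = required.filter(([exe, flag]) => {
            try {
                execFileSync(exe, [flag], { stdio: "ignore" }); // resolves via PATH
                return false;
            } catch (e) {
                return true; // not installed, or not on PATH
            }
        });
        if (missing.length > 0) {
            console.error("Missing dependencies: " + missing.map(([exe]) => exe).join(", "));
            process.exit(1);
        }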

    Read the article

  • Is it possible to efficiently store all possible phone numbers in memory?

    - by Spencer K
    Given the standard North American phone number format, (Area Code) Exchange - Subscriber, the set of possible numbers is about 6 billion. However, efficiently breaking the numbers down into the sections listed above would yield fewer than 12,000 distinct nodes that can be arranged in groupings to get all the possible numbers. This seems like a problem already solved. Would it be done via a graph or a tree?
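    A sketch of the shared-node idea: since (by the premise above) any valid area code can combine with any valid exchange and subscriber, the tree collapses into three shared levels, i.e. a DAG of roughly 800 + 800 + 10,000 nodes rather than ~6 billion leaves. The seed values below are illustrative:

        // Three shared levels instead of a materialized tree of numbers.
        const areaCodes = new Set<string>(["212", "415", "905"]); // ~800 in practice
        const exchanges = new Set<string>(["555", "867"]);        // ~800 in practice
        const isSubscriber = (s: string) => /^\d{4}$/.test(s);    // all 10,000 blocks

        function isPossibleNumber(num: string): boolean {
            const m = /^(\d{3})(\d{3})(\d{4})$/.exec(num);        // AAA XXX SSSS
            if (!m) return false;
            return areaCodes.has(m[1]) && exchanges.has(m[2]) && isSubscriber(m[3]);
        }

        console.log(isPossibleNumber("4155550123")); // true with the seeds above

    In reality the valid exchanges differ per area code, which would push the structure toward a trie; under the question's independence assumption, though, the flat three-level DAG is enough.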

    Read the article

  • Dell Synaptics touch pad's middle mouse button gets mapped as normal click

    - by Henrik
    How do I make the touch pad's middle button work?
    xinput --test 11
    yields
    button press 1
    button press 1
    for pressing both the left and the middle button. I have tried to do
    xinput set-button-map 11 1 4 2
    and so on, but as the --test shows that button 1 is being pressed, the issue is probably at a lower level than X11's perception of which mouse buttons I'm pressing (likewise, assigning button-map 11 1 2 3 and clicking the right button in Firefox wouldn't trigger the middle-click on a link).

    Read the article

  • Real World Nuget

    - by JoshReuben
    Why Nuget
    A higher level of granularity for managing references. When you have solutions of many projects that depend on solutions of many projects, etc., you can escape from Solution Hell.

    Links
    · Using a GUI (Package Explorer) to build packages - http://docs.nuget.org/docs/creating-packages/using-a-gui-to-build-packages
    · Creating a Nuspec file - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic2.aspx
    · Consuming a Nuget package - http://msdn.microsoft.com/en-us/vs2010trainingcourse_aspnetmvcnuget_topic3
    · Nuspec reference - http://docs.nuget.org/docs/reference/nuspec-reference
    · Updating packages - http://nuget.codeplex.com/wikipage?title=Updating%20All%20Packages
    · Versioning - http://docs.nuget.org/docs/reference/versioning

    POC Folder Structure

    POC Setup Steps
    · Install Package Explorer
    · Source
    o Create a source solution - configure the output directory for projects (Project > Properties > Build > Output Path)
    · Package
    o Add assemblies to the package from the output directory (drag and drop) - add the net folder
    o File > Export - save the .nuspec files and lib contents
    <?xml version="1.0" encoding="utf-16"?>
    <package >
      <metadata>
        <id>MyPackage</id>
        <version>1.0.0.3</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <summary />
      </metadata>
    </package>
    o File > Save - saves the .nupkg file
    · Create Target Solution
    o In Tools > Options: configure the package source and add the package. Select projects: output from the package manager (PowerShell console):
    ------- Installing...MyPackage 1.0.0 -------
    Added file 'NugetSource.AssemblyA.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyA.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.dll' to folder 'MyPackage.1.0.0\lib'.
    Added file 'NugetSource.AssemblyB.pdb' to folder 'MyPackage.1.0.0\lib'.
    Added file 'MyPackage.1.0.0.nupkg' to folder 'MyPackage.1.0.0'.
    Successfully installed 'MyPackage 1.0.0'.
    Added reference 'NugetSource.AssemblyA' to project 'AssemblyX'
    Added reference 'NugetSource.AssemblyB' to project 'AssemblyX'
    Added file 'packages.config'.
    Added file 'packages.config' to project 'AssemblyX'
    Added file 'repositories.config'.
    Successfully added 'MyPackage 1.0.0' to AssemblyX.
    ==============================
    o A Packages folder is created at the solution level
    o A packages.config file is generated in each project:
    <?xml version="1.0" encoding="utf-8"?>
    <packages>
      <package id="MyPackage" version="1.0.0" targetFramework="net40" />
    </packages>
    A local Packages folder is created for the package versions installed. Each folder contains the downloaded .nupkg file and its unpacked contents, e.g. the DLLs that the project references. Note: this folder is not checked in.

    UpdatePackages
    o Configure Package Manager to automatically check for updates
    o Browse packages - it automatically picked up the updates

    Update Procedure
    · Modify the source
    · Change the source version in the assembly info
    · Build the source
    · Open the last package in Package Explorer
    · Increment the package version number and re-add the assemblies
    · Save the package with the new version number and export its definition
    · In the target solution: Tools > Manage Nuget Packages - click on All to trigger a refresh, then click on recent packages to see the updates
    · If problematic, delete the packages folder

    Versioning
    uninstall-package mypackage
    install-package mypackage -version 1.0.0.3
    uninstall-package mypackage
    install-package mypackage -version 1.0.0.4

    Dependencies
    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
      <metadata>
        <id>MyDependentPackage</id>
        <version>1.0.0</version>
        <title />
        <authors>josh-r</authors>
        <owners />
        <requireLicenseAcceptance>false</requireLicenseAcceptance>
        <description>My package description.</description>
        <dependencies>
          <group targetFramework=".NETFramework4.0">
            <dependency id="MyPackage" version="1.0.0.4" />
          </group>
        </dependencies>
      </metadata>
    </package>

    Using NuGet without committing packages to source control
    http://docs.nuget.org/docs/workflows/using-nuget-without-committing-packages
    Right-click on the Solution node in Solution Explorer and select Enable NuGet Package Restore (recall that the packages folder is not part of the solution). If you get "downloading package 'Nuget.build' failed", configure the proxy to support the certificate for https://nuget.org/api/v2/ and allow unrestricted access to packages.nuget.org.
    To test connectivity: get-package -listavailable
    To test NuGet Package Restore: delete the packages folder and open VS as admin. In the nuget msbuild:
    <Import Project="$(SolutionDir)\.nuget\nuget.targets" />

    TFSBuild Integration
    Modify the Nuget.targets file:
    <RestorePackages Condition="  '$(RestorePackages)' == '' ">
    True
    </RestorePackages>
    ...
    <PackageSource Include="\\IL-CV-004-W7D\Packages" />
    Add the system environment variable EnableNuGetPackageRestore=true and restart the "Visual Studio Team Foundation Build Service Host" service.
    Important: ensure Network Service has access to the Packages folder.

    Nugetter TFS Build Integration
    Add the Nugetter build process templates to TFS source control. For the Build Controller, specify the location of custom assemblies. Generate a .nuspec file from Package Explorer: File > Export. Edit the file elements - remove path info from the src and target attributes:
    <?xml version="1.0" encoding="utf-16"?>
    <package xmlns="http://schemas.microsoft.com/packaging/2012/06/nuspec.xsd">
        <metadata>
            <id>Common</id>
            <version>1.0.0</version>
            <title />
            <authors>josh-r</authors>
            <owners />
            <requireLicenseAcceptance>false</requireLicenseAcceptance>
            <description>My package description.</description>
            <dependencies>
                <group targetFramework=".NETFramework3.5" />
            </dependencies>
        </metadata>
        <files>
            <file src="CommonTypes.dll" target="CommonTypes.dll" />
            <file src="CommonTypes.pdb" target="CommonTypes.pdb" />
    ...
    Add the .nuspec file to the solution so that it is available for the build: Dev\NovaNuget\Common\NuSpec\common.1.0.0.nuspec
    Add a Build Process Definition based on the Nugetter build process template. Configure the build process - specify:
    · the .sln to build
    · the base path (output directory)
    · the Nuget.exe file path
    · the .nuspec file path

    Copy DLLs to a binary folder
    1) Set copy local for an assembly reference to false
    2) MSBuild Copy task - modify the .csproj file: http://msdn.microsoft.com/en-us/library/3e54c37h.aspx
    <ItemGroup>
        <MySourceFiles Include="$(MSBuildProjectDirectory)\..\SourceAssemblies\**\*.*" />
    </ItemGroup>
    <Target Name="BeforeBuild">
        <Copy SourceFiles="@(MySourceFiles)" DestinationFolder="bin\debug\SourceAssemblies" />
    </Target>
    3) Set the probing assembly search path from app.config - http://msdn.microsoft.com/en-us/library/823z9h8w(v=vs.80).aspx
    <?xml version="1.0" encoding="utf-8" ?>
    <configuration>
      <runtime>
        <assemblyBinding xmlns="urn:schemas-microsoft-com:asm.v1">
          <probing privatePath="SourceAssemblies"/>
        </assemblyBinding>
      </runtime>
    </configuration>

    Forcing 'copy local = false'
    The following generic PowerShell script was added to the package's install.ps1:
    param($installPath, $toolsPath, $package, $project)
    if( $project.Object.Project.Name -ne "CopyPackages")
    {
        $asms = $package.AssemblyReferences | %{$_.Name}
        foreach ($reference in $project.Object.References)
        {
            if ($asms -contains $reference.Name + ".dll")
            {
                $reference.CopyLocal = $false;
            }
        }
    }
    An empty project named "CopyPackages" was added to the solution - it references all the packages and is the only one set to CopyLocal="true". No MSBuild knowledge required.

    Read the article

  • Motion - can't get streaming working from a webcam

    - by Emmanuel Brunet
    I'm trying to record a video stream from my Tenvis IP camera with motion 3.2.12 on Debian 7.5. I used the standard Debian package with
    sudo apt-get install motion
    Assume my IP cam's DNS name is webcam, user: admin, password: password.

    /etc/motion/motion.conf
    Below are my configuration file settings:
    netcam_url http://webcam/videostream.cgi
    netcam_userpass admin:password
    target_dir /media/videos/log/motion
    # The mini-http server listens to this port for requests (default: 0 = disabled)
    webcam_port 1234
    ffmpeg_cap_new on
    ffmpeg_video_codec mpeg4
    output_motion off
    snapshot_interval 0
    # Quality of the jpeg (in percent) images produced (default: 50)
    webcam_quality 50
    # Output frames at 1 fps when no motion is detected and increase to the
    # rate given by webcam_maxrate when motion is detected (default: off)
    webcam_motion on
    # Maximum framerate for webcam streams (default: 1)
    webcam_maxrate 15
    # Restrict webcam connections to localhost only (default: on)
    webcam_localhost on
    # Limits the number of images per connection (default: 0 = unlimited)
    # Number can be defined by multiplying actual webcam rate by desired number of seconds
    # Actual webcam rate is the smallest of the numbers framerate and webcam_maxrate
    webcam_limit 0
    control_port 8080
    control_authentication admin:password

    Issue #1
    When I try to display http://localhost:1234, the browser looks frozen; no HTTP 404 is received, but it seems to still be waiting for data.

    Issue #2
    In the output directory, motion writes a lot of jpeg snapshots, although I just want to have several sequenced video files.

    Log
    I ran motion in interactive mode in a terminal; here is the output:
    root@mercure:/etc/motion# motion -c motion-1.0.conf
    [0] Processing thread 0 - config file motion-1.0.conf
    [0] Motion 3.2.12 Started
    [0] ffmpeg LIBAVCODEC_BUILD 3482368 LIBAVFORMAT_BUILD 3478785
    [0] Thread 1 is from motion-1.0.conf
    [0] motion-httpd/3.2.12 running, accepting connections
    [0] motion-httpd: waiting for data on port TCP 8080
    [1] Thread 1 started
    [1] Resizing pre_capture buffer to 1 items
    [1] Started stream webcam server in port 1234
    [1] avcodec_open - could not open codec: Operation now in progress
    [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165303.avi]: Operation now in progress
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603165303-01.jpg
    [1] Thread exiting
    [1] Calling vid_close() from motion_cleanup
    [1] vid_close: calling netcam_cleanup
    [1] netcam camera handler: finish set, exiting
    [0] Motion thread 1 restart
    [1] Thread 1 started
    [1] Resizing pre_capture buffer to 1 items
    [1] Started stream webcam server in port 1234
    [1] avcodec_open - could not open codec: Resource temporarily unavailable
    [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165329.avi]: Resource temporarily unavailable
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603165329-00.jpg
    [1] Thread exiting
    [1] Calling vid_close() from motion_cleanup
    [1] vid_close: calling netcam_cleanup
    [1] netcam camera handler: finish set, exiting
    [0] Motion thread 1 restart
    [1] Thread 1 started
    [1] Resizing pre_capture buffer to 1 items
    [1] Started stream webcam server in port 1234
    [1] avcodec_open - could not open codec: Connection reset by peer
    [1] ffopen_open error creating (new) file [~/tmp/motion/01-20140603165355.avi]: Connection reset by peer
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603165355-06.jpg
    [1] Thread exiting
    [1] Calling vid_close() from motion_cleanup
    [1] vid_close: calling netcam_cleanup
    [0] httpd - Finishing
    [0] httpd Closing
    [0] httpd thread exit
    [1] netcam camera handler: finish set, exiting
    [0] Motion thread 1 restart
    [1] Thread 1 started
    [1] Resizing pre_capture buffer to 1 items
    [1] Started stream webcam server in port 1234
    It doesn't find the codec:
    avcodec_open - could not open codec: Operation now in progress
    I've changed ffmpeg_video_codec from mpeg4 to swf and the result is the same. When using the flv format, motion writes a lot of .jpg images but I can't see anything at http://localhost:1234:
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-00.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-01.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-02.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-03.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-04.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-05.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171035-06.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-00.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-01.jpg
    [1] File of type 1 saved to: ~/tmp/motion/01-20140603171036-02.jpg
    Any idea how to just get the video stream recorded on my local disk?

    Read the article

  • Week in Geek: Study finds Men more Likely to Fall for Facebook Scams

    - by Asian Angel
    This week we learned how to “read Blue Screen codes, clean your computer, & get started with scripting”, upgrade or install Mac OS X Lion on a Hackintosh using UniBeast, use Amazon’s barcode scanner to easily buy anything from your phone, had fun with a great set of geeky do-it-yourself projects for pets, got introduced to How-To Geek’s new Google+ account, and more. Photo by mac_filko. Use Amazon’s Barcode Scanner to Easily Buy Anything from Your Phone How To Migrate Windows 7 to a Solid State Drive Follow How-To Geek on Google+

    Read the article
