Search Results

Search found 1047 results on 42 pages for 'locking'.

Page 3 of 42

  • Optimistic non-locking copy of InnoDB .frm files

    - by jothir
    MySQL Enterprise Backup (MEB) does hot backup of InnoDB data and log files. Up to MEB 3.6.1, the user backs up only the InnoDB tables in a 3-step process:

    STEP 1. Take the backup using the --only-innodb option.
    STEP 2. Temporarily make the tables read-only by executing “FLUSH TABLES WITH READ LOCK”.
    STEP 3. Manually copy the .frm files of the InnoDB tables to the destination directory where the backup is stored.

    MEB 3.7.0 has an enhancement to InnoDB file copying: the .frm files get copied along with the hot backup done for the InnoDB files. I would like to make the blog a little interactive by explaining the feature in question-and-answer form:

    1. What are these .frm files? The files containing the metadata, such as the table definition, of a MySQL table. For backups, the full set of .frm files is always required along with the backup data, to be able to restore tables that are altered or dropped after the backup.

    2. Can the .frm files not be copied by MEB itself? --only-innodb-with-frm is the new option introduced in MEB 3.7.1 to copy the .frm files, without locking the tables, during the backup operation itself. This removes the pain of manually copying the .frm files. The option is intended for backups where you can ensure that no ALTER TABLE, CREATE TABLE, DROP TABLE, or other DDL statements modify the .frm files of InnoDB tables during the backup operation.

    3. How is data consistency ensured? After copying, MEB validates the .frm files against the server directory, checking whether the timestamp of any .frm file is later than the system time saved before the copy (the .frm check time). A changed timestamp shows that a table was altered during the backup. The total number of .frm files in the server directory is also verified against the copied contents: if the server directory contains fewer .frm files than were copied, one or more tables were dropped during the backup; if it contains more, new tables were created during the backup.

    4. How does MEB handle data inconsistency? MEB copies the .frm files over several iterations, performs the validation, and raises a WARNING at the end of the backup operation if any inconsistency is found in the .frm files. The user is thereby warned that some DDL operations occurred during the backup, and has to copy the .frm files manually or take the backup again.

    5. What is the option, and how is it used? The new option is --only-innodb-with-frm, which does an optimistic copy of the .frm files without locking. It can be used when the user wants to back up only InnoDB tables along with their .frm files. The option takes one of two values: all | related. --only-innodb-with-frm=all copies the .frm files of all InnoDB tables. --only-innodb-with-frm=related works in conjunction with the --include option, to allow a partial backup of the .frm files corresponding to the tables specified in --include.

    Let me show the usage with example output:

      ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup

      MySQL Enterprise Backup version 3.7.1 [2012/06/05]
      Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

      INFO: Starting with following command line ...
      ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrmAll --only-innodb-with-frm=all backup

      INFO: Got some server configuration information from running server.

      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'backup' run mysqlbackup
                 prints "mysqlbackup completed OK!".
      --------------------------------------------------------------------
                            Server Repository Options:
      --------------------------------------------------------------------
        datadir                          =  /mysql/trydb/
        innodb_data_home_dir             =
        innodb_data_file_path            =  ibdata1:10M:autoextend
        innodb_log_group_home_dir        =  /mysql/trydb/
        innodb_log_files_in_group        =  2
        innodb_log_file_size             =  5242880
      --------------------------------------------------------------------
                            Backup Config Options:
      --------------------------------------------------------------------
        datadir                          =  /logs/backupWithFrmAll/datadir
        innodb_data_home_dir             =  /logs/backupWithFrmAll/datadir
        innodb_data_file_path            =  ibdata1:10M:autoextend
        innodb_log_group_home_dir        =  /logs/backupWithFrmAll/datadir
        innodb_log_files_in_group        =  2
        innodb_log_file_size             =  5242880
      mysqlbackup: INFO: Unique generated backup id for this is 13451979804504860
      mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
      mysqlbackup: INFO: System tablespace file format is Antelope.
      mysqlbackup: INFO: Found checkpoint at lsn 1656792.
      mysqlbackup: INFO: Starting log scan from lsn 1656320.
      120817 15:36:22 mysqlbackup: INFO: Copying log...
      120817 15:36:22 mysqlbackup: INFO: Log copied, lsn 1656792.
               We wait 1 second before starting copying the data files...
      120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/ibdata1 (Antelope file format).
      120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table2.ibd (Antelope file format).
      120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table3.ibd (Antelope file format).
      120817 15:36:23 mysqlbackup: INFO: Copying /mysql/trydb/innodb1/table1.ibd (Antelope file format).
      mysqlbackup: INFO: Opening backup source directory '/mysql/trydb/'
      120817 15:36:23 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb/
      mysqlbackup: INFO: Copying innodb data and logs during final stage ...
      mysqlbackup: INFO: A copied database page was modified at 1656792.
               (This is the highest lsn found on page)
               Scanned log up to lsn 1656792.
               Was able to parse the log up to lsn 1656792.
               Maximum page number for a log record 0
      mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
      120817 15:36:25 mysqlbackup: INFO: Full backup completed!
      mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrmAll'
      -------------------------------------------------------------
         Parameters Summary
      -------------------------------------------------------------
         Start LSN                  : 1656320
         End LSN                    : 1656792
      -------------------------------------------------------------
      mysqlbackup completed OK!

      bash$ ls /logs/backupWithFrmAll/datadir/innodb1/
      table1.frm  table1.ibd  table2.frm  table2.ibd  table3.frm  table3.ibd

    Here the backup directory contains all the .frm files of all the innodb tables.

      ./mysqlbackup -uroot --backup-dir=/logs/backupWithFrm --include="innodb1.table3.*" --only-innodb-with-frm=related backup

      MySQL Enterprise Backup version 3.7.1 [2012/06/05]
      Copyright (c) 2003, 2012, Oracle and/or its affiliates. All Rights Reserved.

      INFO: Starting with following command line ...
      ./mysqlbackup -uroot --backup-dir=/logs/backup371frm --include=innodb1.table3.* --only-innodb-with-frm=related backup

      INFO: Got some server configuration information from running server.

      IMPORTANT: Please check that mysqlbackup run completes successfully.
                 At the end of a successful 'backup' run mysqlbackup
                 prints "mysqlbackup completed OK!".

      --------------------------------------------------------------------
                            Server Repository Options:
      --------------------------------------------------------------------
        datadir                          =  /mysql/trydb/
        innodb_data_home_dir             =
        innodb_data_file_path            =  ibdata1:10M:autoextend
        innodb_log_group_home_dir        =  /mysql/trydb
        innodb_log_files_in_group        =  2
        innodb_log_file_size             =  5242880
      --------------------------------------------------------------------
                            Backup Config Options:
      --------------------------------------------------------------------
        datadir                          =  /logs/backupWithFrm/datadir
        innodb_data_home_dir             =  /logs/backupWithFrm/datadir
        innodb_data_file_path            =  ibdata1:10M:autoextend
        innodb_log_group_home_dir        =  /logs/backupWithFrm/datadir
        innodb_log_files_in_group        =  2
        innodb_log_file_size             =  5242880
      mysqlbackup: INFO: Unique generated backup id for this is 13451973458118162
      mysqlbackup: INFO: Uses posix_fadvise() for performance optimization.
      mysqlbackup: INFO: The --include option specified: innodb1.table3.*
      mysqlbackup: INFO: System tablespace file format is Antelope.
      mysqlbackup: INFO: Found checkpoint at lsn 1656792.
      mysqlbackup: INFO: Starting log scan from lsn 1656320.
      120817 15:25:47 mysqlbackup: INFO: Copying log...
      120817 15:25:47 mysqlbackup: INFO: Log copied, lsn 1656792.
               We wait 1 second before starting copying the data files...
      120817 15:25:48 mysqlbackup: INFO: Copying /mysql/trydbibdata1 (Antelope file format).
      120817 15:25:49 mysqlbackup: INFO: Copying /mysql/trydbinnodb1/table3.ibd (Antelope file format).
      mysqlbackup: INFO: Opening backup source directory '/mysql/trydb'
      120817 15:25:49 mysqlbackup: INFO: Starting to backup .frm files in the subdirectories of /mysql/trydb
      mysqlbackup: INFO: Copying innodb data and logs during final stage ...
      mysqlbackup: INFO: A copied database page was modified at 1656792.
               (This is the highest lsn found on page)
               Scanned log up to lsn 1656792.
               Was able to parse the log up to lsn 1656792.
               Maximum page number for a log record 0
      mysqlbackup: INFO: Copying non-innodb files took 2.000 seconds
      120817 15:25:51 mysqlbackup: INFO: Full backup completed!
      mysqlbackup: INFO: Backup created in directory '/logs/backupWithFrm'
      -------------------------------------------------------------
         Parameters Summary
      -------------------------------------------------------------
         Start LSN                  : 1656320
         End LSN                    : 1656792
      -------------------------------------------------------------
      mysqlbackup completed OK!

      bash$ ls /logs/backupWithFrm/datadir/innodb1/
      table3.frm  table3.ibd

    Thus the backup directory contains only the .frm file matching the InnoDB table name specified in the --include option. In a nutshell, we present our great new option --only-innodb-with-frm, which gives a true hot InnoDB-only backup including the .frm files, with an additional check for whether any DDL happened during the backup.
    If any DDL has happened, the DBA can decide whether to repeat the backup or to live with the potential inconsistency. This is the ideal solution for users who have all their "real" data in InnoDB and seldom change their schemas. You may also like: http://dev.mysql.com/doc/mysql-enterprise-backup/3.7/en/backup-partial-options.html

    Read the article

  • PPPD Is Locking the Modem and Not Releasing It

    - by Skid
    We have an issue with PPPD on one of our systems. The PC is used to talk to remote sites via a dial-up connection; the modem can both dial out to the sites and take calls when the sites dial back in. Currently, sometimes when a site dials in or we dial out, the call connects but then blocks the modem and throws an error to kern.log.

      Aug 26 14:23:57 TM-SCADA kernel: [191233.503745] INFO: task pppd:8142 blocked for more than 120 seconds.
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503750] "echo 0 > /proc/sys/kernel/hung_task_timeout_secs" disables this message.
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503753] pppd D ffffffff8180cb40 0 8142 1 0x00000000
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503759] ffff8800ac1f5dc8 0000000000000086 ffff8800ac1f5fd8 00000000000137c0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503765] ffff8800ac1f4010 00000000000137c0 00000000000137c0 00000000000137c0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503770] ffff8800ac1f5fd8 00000000000137c0 ffffffff81c13020 ffff880135df5b80
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503775] Call Trace:
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503784] [<ffffffff8166ba29>] schedule+0x29/0x70
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503790] [<ffffffff813db005>] tty_ldisc_ref_wait+0x65/0xb0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503796] [<ffffffff813f3061>] ? uart_ioctl+0xd1/0x1c0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503801] [<ffffffff81076960>] ? wake_up_bit+0x40/0x40
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503806] [<ffffffff813d3fa0>] tty_ioctl+0x2c0/0x9a0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503810] [<ffffffff811d0549>] ? fcntl_setlk+0x69/0x200
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503815] [<ffffffff81195f79>] do_vfs_ioctl+0x99/0x330
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503820] [<ffffffff81195212>] ? do_fcntl+0x232/0x410
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503823] [<ffffffff811962b1>] sys_ioctl+0xa1/0xb0
      Aug 26 14:23:57 TM-SCADA kernel: [191233.503829] [<ffffffff81674e69>] system_call_fastpath+0x16/0x1b

    The syslog trace stops at "Serial connection established".

      Aug 28 06:00:03 TM-SCADA pppd[10358]: pppd 2.4.5 started by root, uid 0
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO CARRIER)
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO DIALTONE)
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (ERROR)
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (NO ANSWER)
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (BUSY)
      Aug 28 06:00:04 TM-SCADA chat[10360]: abort on (Username/Password Incorrect)
      Aug 28 06:00:04 TM-SCADA chat[10360]: send (atz^M)
      Aug 28 06:00:04 TM-SCADA chat[10360]: expect (OK)
      Aug 28 06:00:04 TM-SCADA chat[10360]: atz^M^M
      Aug 28 06:00:04 TM-SCADA chat[10360]: OK
      Aug 28 06:00:04 TM-SCADA chat[10360]: -- got it
      Aug 28 06:00:04 TM-SCADA chat[10360]: send (atx0^M)
      Aug 28 06:00:04 TM-SCADA chat[10360]: expect (OK)
      Aug 28 06:00:04 TM-SCADA chat[10360]: ^M
      Aug 28 06:00:04 TM-SCADA chat[10360]: atx0^M^M
      Aug 28 06:00:04 TM-SCADA chat[10360]: OK
      Aug 28 06:00:04 TM-SCADA chat[10360]: -- got it
      Aug 28 06:00:04 TM-SCADA chat[10360]: send (atdt0123456789^M)
      Aug 28 06:00:04 TM-SCADA chat[10360]: expect (CONNECT/ARQ)
      Aug 28 06:00:04 TM-SCADA chat[10360]: ^M
      Aug 28 06:00:30 TM-SCADA chat[10360]: atdt0123456789^M^M
      Aug 28 06:00:30 TM-SCADA chat[10360]: CONNECT/ARQ
      Aug 28 06:00:30 TM-SCADA chat[10360]: -- got it
      Aug 28 06:00:30 TM-SCADA pppd[10358]: Serial connection established.
    I've only found two ways to release the modem in this condition: the first is to turn the modem off and on again, and the second is to delete the serial lock file and then SIGKILL pppd. I could build the latter into our software for when the modem is locked, but I would rather stop it from locking in the first place if at all possible. The reason I posted this issue on Ask Ubuntu is that we used to use openSUSE and never had an issue with it; admittedly that was version 11.2 or earlier, so it's still an old kernel, but I figured I would ask here first anyway. Any suggestions of places to look would be appreciated.

    Read the article

  • Why avoid pessimistic locking in a version control system?

    - by raven
    Based on a few posts I've read concerning version control, it seems people think pessimistic locking in a version control system is a bad thing. Why? I understand that it prevents one developer from submitting a change while another has the file checked out, but so what? If your code files are so big that you constantly have more than one person working on them at the same time, I submit that you should reorganize your code. Break it up into smaller functional units. Integration of concurrent code changes is a tedious and error-prone process even with the tools a good version control system provides to make it easier. I think it should be avoided if at all possible. So, why is pessimistic locking discouraged?

    Read the article

  • How do I find out which process is locking a file using .NET?

    - by AJ
    I've seen several answers about using Handle or Process Monitor, but I would like to be able to find out in my own code (C#) which process is locking a file. I have a nasty feeling that I'm going to have to spelunk around in the Win32 API, but if anyone has already done this and can put me on the right track, I'd really appreciate the help. Update: links to similar questions: How does one figure out what process locked a file using c#?; Command line tool; Across a Network; Locking a USB device; Unit test fails with locked file; deleting locked file.
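
    One known approach, not taken from the question itself and only available on Windows Vista and later, is the Restart Manager API in rstrtmgr.dll, which can report the processes that hold a file open. The sketch below is illustrative rather than production code: the P/Invoke declarations follow the documented RmStartSession, RmRegisterResources, RmGetList and RmEndSession signatures, but verify them against rstrtmgr.h before relying on them.

      using System;
      using System.Collections.Generic;
      using System.Runtime.InteropServices;

      static class FileLockFinder
      {
          private const int RmRebootReasonNone = 0;
          private const int ERROR_MORE_DATA = 234;

          [StructLayout(LayoutKind.Sequential)]
          private struct RM_UNIQUE_PROCESS
          {
              public int dwProcessId;
              public System.Runtime.InteropServices.ComTypes.FILETIME ProcessStartTime;
          }

          [StructLayout(LayoutKind.Sequential, CharSet = CharSet.Unicode)]
          private struct RM_PROCESS_INFO
          {
              public RM_UNIQUE_PROCESS Process;
              [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 256)]
              public string strAppName;
              [MarshalAs(UnmanagedType.ByValTStr, SizeConst = 64)]
              public string strServiceShortName;
              public int ApplicationType;                  // RM_APP_TYPE
              public uint AppStatus;
              public uint TSSessionId;
              [MarshalAs(UnmanagedType.Bool)]
              public bool bRestartable;
          }

          [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
          private static extern int RmStartSession(out uint pSessionHandle, int dwSessionFlags, string strSessionKey);

          [DllImport("rstrtmgr.dll")]
          private static extern int RmEndSession(uint pSessionHandle);

          [DllImport("rstrtmgr.dll", CharSet = CharSet.Unicode)]
          private static extern int RmRegisterResources(uint pSessionHandle,
              uint nFiles, string[] rgsFileNames,
              uint nApplications, RM_UNIQUE_PROCESS[] rgApplications,
              uint nServices, string[] rgsServiceNames);

          [DllImport("rstrtmgr.dll")]
          private static extern int RmGetList(uint dwSessionHandle,
              out uint pnProcInfoNeeded, ref uint pnProcInfo,
              [In, Out] RM_PROCESS_INFO[] rgAffectedApps, ref uint lpdwRebootReasons);

          // Returns the IDs of the processes that currently hold the given file open.
          public static List<int> WhoIsLocking(string path)
          {
              var lockers = new List<int>();
              uint session;
              if (RmStartSession(out session, 0, Guid.NewGuid().ToString("N")) != 0)
                  throw new Exception("Could not begin Restart Manager session.");
              try
              {
                  if (RmRegisterResources(session, 1, new[] { path }, 0, null, 0, null) != 0)
                      throw new Exception("Could not register the file with the Restart Manager.");

                  uint needed, count = 0, rebootReasons = RmRebootReasonNone;
                  int rc = RmGetList(session, out needed, ref count, null, ref rebootReasons);
                  if (rc == ERROR_MORE_DATA)
                  {
                      // Allocate the buffer the first call asked for, then fetch the list.
                      var info = new RM_PROCESS_INFO[needed];
                      count = needed;
                      rc = RmGetList(session, out needed, ref count, info, ref rebootReasons);
                      if (rc != 0)
                          throw new Exception("Could not list the processes locking the file.");
                      for (int i = 0; i < count; i++)
                          lockers.Add(info[i].Process.dwProcessId);
                  }
                  else if (rc != 0)
                  {
                      throw new Exception("Could not list the processes locking the file.");
                  }
              }
              finally
              {
                  RmEndSession(session);
              }
              return lockers;
          }
      }

    A caller would then do something like: foreach (int pid in FileLockFinder.WhoIsLocking(@"C:\temp\report.docx")) Console.WriteLine(pid); where the path is just a placeholder.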

    Read the article

  • System locking up with suspicious messages about hard disk

    - by Chris Conway
    My system has started behaving strangely, intermittently locking up. I see messages like the following in syslog: Nov 18 22:22:00 claypool kernel: [ 3428.078156] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Nov 18 22:22:00 claypool kernel: [ 3428.078163] ata3.00: irq_stat 0x40000000 Nov 18 22:22:00 claypool kernel: [ 3428.078167] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Nov 18 22:22:00 claypool kernel: [ 3428.078182] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 22:22:00 claypool kernel: [ 3428.078184] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error) Nov 18 22:22:00 claypool kernel: [ 3428.078188] ata3.00: status: { DRDY } Nov 18 22:22:00 claypool kernel: [ 3428.080887] ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0 Nov 18 22:22:00 claypool kernel: [ 3428.080890] ata3.00: irq_stat 0x40000000 Nov 18 22:22:00 claypool kernel: [ 3428.080893] sr 2:0:0:0: CDB: Test Unit Ready: 00 00 00 00 00 00 Nov 18 22:22:00 claypool kernel: [ 3428.080905] ata3.00: cmd a0/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 22:22:00 claypool kernel: [ 3428.080906] res 50/00:03:00:00:00/00:00:00:00:00/a0 Emask 0x1 (device error) Nov 18 22:22:00 claypool kernel: [ 3428.080910] ata3.00: status: { DRDY } And then this: Nov 18 23:13:56 claypool kernel: [ 6544.000798] ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen Nov 18 23:13:56 claypool kernel: [ 6544.000804] ata1.00: failed command: FLUSH CACHE EXT Nov 18 23:13:56 claypool kernel: [ 6544.000814] ata1.00: cmd ea/00:00:00:00:00/00:00:00:00:00/a0 tag 0 Nov 18 23:13:56 claypool kernel: [ 6544.000815] res 40/00:00:00:4f:c2/00:00:00:00:00/40 Emask 0x4 (timeout) Nov 18 23:13:56 claypool kernel: [ 6544.000819] ata1.00: status: { DRDY } Nov 18 23:13:56 claypool kernel: [ 6544.000825] ata1: hard resetting link Nov 18 23:14:01 claypool kernel: [ 6549.360324] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:06 claypool kernel: [ 6554.008091] ata1: COMRESET failed (errno=-16) Nov 18 23:14:06 claypool kernel: [ 6554.008103] ata1: hard resetting link Nov 18 23:14:11 claypool kernel: [ 6559.372246] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:16 claypool kernel: [ 6564.020228] ata1: COMRESET failed (errno=-16) Nov 18 23:14:16 claypool kernel: [ 6564.020235] ata1: hard resetting link Nov 18 23:14:21 claypool kernel: [ 6569.380109] ata1: link is slow to respond, please be patient (ready=0) Nov 18 23:14:31 claypool kernel: [ 6579.460243] ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300) Nov 18 23:14:31 claypool kernel: [ 6579.486595] ata1.00: configured for UDMA/133 Nov 18 23:14:31 claypool kernel: [ 6579.486601] ata1.00: retrying FLUSH 0xea Emask 0x4 Nov 18 23:14:31 claypool kernel: [ 6579.486939] ata1.00: device reported invalid CHS sector 0 Nov 18 23:14:31 claypool kernel: [ 6579.486952] ata1: EH complete Nov 18 23:17:01 claypool CRON[3910]: (root) CMD ( cd / && run-parts --report /etc/cron.hourly) Nov 18 23:17:01 claypool CRON[3908]: (CRON) error (grandchild #3910 failed with exit status 1) Nov 18 23:17:01 claypool postfix/sendmail[3925]: fatal: open /etc/postfix/main.cf: No such file or directory Nov 18 23:17:01 claypool CRON[3908]: (root) MAIL (mailed 1 byte of output; but got status 0x004b, #012) Nov 18 23:39:01 claypool CRON[4200]: (root) CMD ( [ -x /usr/lib/php5/maxlifetime ] && [ -d /var/lib/php5 ] && find /var/lib/php5/ -type f -cmin +$(/usr/lib/php5/maxlifetime) -print0 | xargs -n 200 -r -0 rm) There are no messages marked after 
23:39. When I next tried to use the machine, it would not return from the screensaver (blank screen), nor switch to another terminal, and I had to hard reboot it. [UPDATE] The output of smartctl is here. I had trouble getting this, because / is being mounted read-only (?!), which prevents most applications from running. Also, it may not be related, but I have the following worrying messages in dmesg: [ 10.084596] k8temp 0000:00:18.3: Temperature readouts might be wrong - check erratum #141 [ 10.098477] i2c i2c-0: nForce2 SMBus adapter at 0x600 [ 10.098483] ACPI: resource nForce2_smbus [io 0x0700-0x073f] conflicts with ACPI region SM00 [??? 0x00000700-0x0000073f flags 0x30] [ 10.098486] ACPI: This conflict may cause random problems and system instability [ 10.098487] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [ 10.098509] i2c i2c-1: nForce2 SMBus adapter at 0x700 [ 10.112570] Linux agpgart interface v0.103 [ 10.155329] atk: Resources not safely usable due to acpi_enforce_resources kernel parameter [ 10.161506] it87: Found IT8712F chip at 0x290, revision 8 [ 10.161517] it87: VID is disabled (pins used for GPIO) [ 10.161527] it87: in3 is VCC (+5V) [ 10.161528] it87: in7 is VCCH (+5V Stand-By) [ 10.161560] ACPI: resource it87 [io 0x0295-0x0296] conflicts with ACPI region ECRE [??? 0x00000290-0x000002af flags 0x45] [ 10.161562] ACPI: This conflict may cause random problems and system instability [ 10.161564] ACPI: If an ACPI driver is available for this device, you should use it instead of the native driver [UPDATE 2] I swapped in a new SATA cable, per Phil's suggestion. The current output of smartctl is here, if it helps. [UPDATE 3] I don't think the cable fixed it. The system hasn't locked up yet, but my media player crashed a few minutes ago and I have the following in the syslog: Nov 20 16:07:17 claypool kernel: [ 2294.400033] ata1: link is slow to respond, please be patient (ready=0) Nov 20 16:07:47 claypool kernel: [ 2324.084581] ata1: COMRESET failed (errno=-16) Nov 20 16:07:47 claypool kernel: [ 2324.084588] ata1: limiting SATA link speed to 1.5 Gbps Nov 20 16:07:47 claypool kernel: [ 2324.084592] ata1: hard resetting link I get the following response from smartctl: $ sudo smartctl -a /dev/sda [sudo] password for chris: sudo: Can't open /var/lib/sudo/chris/0: Read-only file system smartctl 5.40 2010-03-16 r3077 [i686-pc-linux-gnu] (local build) Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net Device: /0:0:0:0 Version: scsiModePageOffset: response length too short, resp_len=47 offset=50 bd_len=46 >> Terminate command early due to bad response to IEC mode page A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.

    Read the article

  • locking on dictionary of structs not working between 2 threads?

    - by Rancur3p1c
    C#, .NET 2.0, XP, Zen. I have 2 threads accessing a shared dictionary of structures, each thread via an event. At the beginning of the event I lock the dictionary, remove some structures, and exit the lock and the event. Yet somehow the second thread/event is finding some of the removed structures. Conceptually I must be doing something wrong for this to be happening? I thought locking was supposed to make it thread-safe?
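
    A minimal sketch, not the poster's code, of the pattern that normally makes this work: both threads have to take the very same lock object for every access, reads included, and because the dictionary values are structs, any copy taken before the Remove call still holds the old data by value. The type and member names below are invented for illustration.

      using System;
      using System.Collections.Generic;

      public struct Item                      // stand-in for the poster's structure
      {
          public int Value;
      }

      public class SharedItems
      {
          private readonly Dictionary<int, Item> _items = new Dictionary<int, Item>();
          private readonly object _sync = new object();   // the one lock object both threads share

          // Event handler on thread A: remove some structures under the lock.
          public void RemoveProcessed(IEnumerable<int> keys)
          {
              lock (_sync)
              {
                  foreach (int key in keys)
                      _items.Remove(key);
              }
          }

          // Event handler on thread B: reads must take the same lock,
          // otherwise they can observe the dictionary mid-update.
          public bool TryGet(int key, out Item item)
          {
              lock (_sync)
              {
                  return _items.TryGetValue(key, out item);
              }
          }
      }

    If the second event still sees "removed" structures, the usual culprits are locking on two different objects (for example two separate new object() instances, or lock (this) in two different classes), reading the dictionary outside any lock, or working with a struct copy that was taken before the removal.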

    Read the article

  • T-SQL Tuesday #006: LOB, row-overflow and locking behavior

    - by Michael Zilberstein
    This post is my contribution to T-SQL Tuesday #006, hosted this time by Michael Coles. Actually, this post was born last Thursday when I attended Kalen Delaney's "Deep dive into SQL Server Internals" seminar in Tel-Aviv. I asked a question, Kalen didn't have an answer at hand, so during a break I created a demo in order to check a certain behavior. The demo comes later in this post, but first a small teaser. I have a MyTable table with 10 rows. I take 2 rows that reside on different pages. In the first session...(read more)

    Read the article

  • Advantages of Locking Laptop Cases

    A laptop is meant to be carried around from one place to another and should never be let out of sight when away from home, but this is unfortunately not always possible. "Many of us store valu... [Author: Jeremy Mezzi - Computers and Internet - May 29, 2010]

    Read the article

  • Google locking on Ubuntu

    - by user170534
    The problem I'm facing is that Google doesn't respond promptly to connection requests sent from any browser on Linux. As far as I can tell, the same thing happened in Mint, which is Ubuntu-based. I have no debugging output or guess about the cause, but I'm sure there are people with the same problem. ping from a terminal is unaffected, but in any browser pages stay unloaded. For example: Google loads fine and I search for something; then I decide to search for something else, and ta-da, I have to wait 30 seconds for Google's server to respond. I tried using Google's public DNS without success. Suggestions and ideas are welcome!

    Read the article

  • Ubuntu 12.04 not Locking Encrypted Hard Drive on Log Out

    - by J.L.
    I'm running Ubuntu 12.04 with Gnome 3.4. I have two external hard drives. I encrypted both using Ubuntu's Disk Utility. When I use Nautilus to mount them, I'm asked for my decryption password. Regardless of whether I then click "Forget password immediately" or "Remember password until you logout", though, I find that Ubuntu does not lock the drives when I log out. Rather, when I log back in, they're still mounted. (To be clear, restarting the computer does unmount them so that they require the password on the next log in.) I'm concerned that these drives are remaining unprotected when I log out without restarting my computer. I would be grateful for help understanding whether this is a bug. Thank you!

    Read the article

  • Automatically locking screen without shutting it off

    - by milkandtang
    Hey everyone— I have a home theater PC running Ubuntu 11.10, outputting over HDMI (for audio and video). I'm having an issue: I'd like the screen to lock automatically (when video is not playing, of course) but do not want the screen to turn off automatically, because that kills audio. I can manually lock the screen, of course, but it appears that if you set the "Turn off screen" setting to "never", the screen will never lock, no matter what the "lock screen" timeout is set to. Is there a way to do what I'm asking, or will I have to install xscreensaver?

    Read the article

  • CPU heating up too much and locking the notebook [12.10 32bits]

    - by Lucas Coutinho
    I have an AMD A6-3400M. When installing Ubuntu 12.10, the CPU gets too hot. When I look at the CPU usage I don't see anything unusual, yet I think it is running at or above the CPU's specifications. I cannot finish the installation because the notebook heats up so much that the system shuts itself down, with the cooling fan at maximum. I don't know if it's just me, but I have never had a good experience using Linux on AMD; when I used Intel these problems did not happen. What could be happening? PS: the laptop works perfectly in Windows 7 Home Premium 64-bit. No crashes, no overheating, it simply works.

    Read the article

  • How do I prevent the Sleep button from locking the screen

    - by elmicha
    My keyboard has a Sleep button. I defined a custom shortcut in System Settings under Keyboard Shortcuts (or similar), so that the Sleep button runs a script. That works. But since my upgrade to Oneiric, something also locks the screen (in the same way the screen is locked when I press Ctrl+Alt+L). Can I disable that behaviour? What's the name of that lock screen? I tried hiding gnome-screensaver and /etc/acpi/, and I looked in gconf-editor under /apps/gnome-power-manager/buttons. I didn't find anything related in dconf-editor.

    Read the article

  • Skype locking up, and microphone "lagging"

    - by Svendbenno
    Hi. I've always had this problem with Skype and PulseAudio on Ubuntu. Whenever I start up Skype, I have to call someone and then hang up 4-5 times before the other person can hear my voice. When I or the other person hang up, Skype tends to lock up. I can't kill it with "killall skype" or a logout, so I have to restart my computer. Has anyone else encountered this problem and, if so, solved it? I'm using 10.10, by the way.

    Read the article

  • Method for SharePoint list/item locking across processes/machines?

    - by Steve
    In general, is there a decent way in SharePoint to control race conditions due to two processes or even two machines in the farm operating on the same list or list item at the same time? That is, is there any mechanism either built in or that can be fabricated via the Object Model for doing cross-process or cross-machine locking of individual list items? I want to write a timer job that performs a bunch of manipulations on a list. This list is written to by the SharePoint UI and then read by the UI. I want to be able to make sure that the UI doesn't either write to or read from the list when it is in an inconsistent state due to the timer job being in the middle of a manipulation. Is there any way to do this? Also, I want to allow for multiple instances of the timer job to run simultaneously. This, again, will require a lock to be sure that the two jobs don't attempt to operate on the same list/item at the same time. TIA for any help!

    Read the article

  • .NET 3.5 C# does not offer what I need for locking: Count async saves until 0 again.

    - by Frank Michael Kraft
    I have some records that I want to save to the database asynchronously. I organize them into batches, then send them. As time passes, the batches are processed; in the meantime the user can keep working. There are some critical operations that I want to lock the user out of while any save batch is still running asynchronously. The save is done using a TableServiceContext and the method .BeginSave() - but I think this should be irrelevant. What I want to do is: whenever an async save is started, increase a lock count, and when it completes, decrease the lock count, so that it will be zero as soon as all saves have finished. I want to lock out the critical operation as long as the count is not zero. Furthermore, I want to qualify the lock - by business object, for example. I did not find a .NET 3.5 C# locking primitive that fulfils this requirement. A semaphore does not offer a way to check whether the count is 0; otherwise a semaphore with an unlimited maximum count would do.
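
    A hedged sketch of one way to build this on .NET 3.5, since nothing equivalent ships in the box before CountdownEvent arrives in .NET 4: a per-key counter guarded by Monitor, where critical operations wait until the count for their key drops back to zero. The names (PendingSaves, Enter, Exit, WaitUntilIdle) and the string business key are invented for illustration.

      using System;
      using System.Collections.Generic;
      using System.Threading;

      public class PendingSaves
      {
          private readonly Dictionary<string, int> _counts = new Dictionary<string, int>();
          private readonly object _sync = new object();

          // Call just before each BeginSave() for the given business object.
          public void Enter(string businessKey)
          {
              lock (_sync)
              {
                  int n;
                  _counts.TryGetValue(businessKey, out n);
                  _counts[businessKey] = n + 1;
              }
          }

          // Call from the async completion callback (throws if Enter was never called).
          public void Exit(string businessKey)
          {
              lock (_sync)
              {
                  int n = _counts[businessKey] - 1;
                  if (n == 0) _counts.Remove(businessKey); else _counts[businessKey] = n;
                  Monitor.PulseAll(_sync);   // wake any critical operation waiting for zero
              }
          }

          // Critical operations call this; it blocks until no saves are pending for the key.
          public void WaitUntilIdle(string businessKey)
          {
              lock (_sync)
              {
                  while (_counts.ContainsKey(businessKey))
                      Monitor.Wait(_sync);
              }
          }
      }

    Enter would be called just before each BeginSave, Exit from the async completion callback (in a finally block, so a failed save cannot leave the count stuck above zero), and WaitUntilIdle at the start of each critical operation.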

    Read the article

  • How to share a dictionary between multiple processes in python without locking

    - by RandomVector
    I need to share a huge dictionary (around 1 GB in size) between multiple processes; however, since all processes will only ever read from it, I don't need locking. Is there any way to share a dictionary without locking? The multiprocessing module in Python provides an Array class which allows sharing without locking by setting lock=False; however, there is no such option for the dictionary provided by Manager in the multiprocessing module.

    Read the article

  • How to prevent nginx from locking files on mounted samba partition in Centos 6

    - by Bruce Kirkpatrick
    I'm using nginx 1.3.8 inside a CentOS 6.3 VirtualBox 4.2.4 virtual machine. The system is running the latest software available via yum update. The host OS is Windows 7. The site files nginx is serving are on a mounted samba partition, which is a folder on the host Windows system. I.e., inside Linux, nginx paths refer to /home/vhosts, and this is mounted from D:\vhosts\ on Windows. The samba partition is mounted as root with 777 privileges. /etc/fstab looks like this, but with the real IP, username and password:

      //hostip/vhosts /home/vhosts cifs username=username,password=SECRETPASSWORD,uid=root,gid=root,file_mode=0777,dir_mode=0777,rw,_netdev 0 0

    I.e. Linux/nginx reads from the Windows share, and not the opposite. In /etc/samba/smb.conf, I have tried to disable all samba locking features, but it seems to have no effect even after rebooting the virtual machine:

      locking = no
      share modes = no
      oplocks = no
      level2 oplocks = no
      kernel oplocks = no

    I'm receiving "Access is denied" errors in Windows or Linux when attempting to overwrite a JavaScript file in Windows that has been accessed at least once by nginx. If I run "service nginx reload", the lock is removed and I'm able to save the file. That's why I think it is nginx causing the lock. The same problem occurs with directories; however, that may be a different issue not related to the use of samba. I'm using samba so that I can manage the source code outside of the virtual machine. Also note that after I run "service nginx reload", the file I'm editing is actually automatically deleted from the Windows host. SOLVED: I just reviewed my nginx.conf file. It appears the "open_file_cache" feature is what was causing the lock and the deleted files. When I set this option to open_file_cache off;, my problem was resolved. I will repost this as the answer when the site allows me to do so.

    Read the article

  • bash and flock (file lock) - Doesn't seem to be locking....

    - by Rory
    I am playing with flock, a shell command for file locks, to prevent 2 different instances of the code from running at the same time. I am using this test code:

      ( ( flock -x 200 ; sleep 10 ; echo "original finished" ; ) 200>./test.lock ) &
      ( sleep 2 ; ( flock -x -w 2 200 ; echo "a finished" ) 200>./test.lock ) &

    I am running 2 subshells (backgrounded). The (flock NUM; ...) NUM>FILE syntax is from flock's man page. I expect that the first subshell will get an exclusive lock on test.lock, then wait 10 seconds, then print "original finished", all the time holding the lock. The second subshell will start at more or less the same time, wait 2 seconds, then try to get a lock on test.lock, timing out after 2 seconds. If it gets the lock, it will print "a finished". If it doesn't get the lock, that subshell should stop, and nothing should be printed. Since the first subshell waits longer, it will keep the lock for 10 seconds, so the second subshell should not get the lock and shouldn't finish; i.e. one should see "original finished" printed and not both. What actually happens is that "a finished" is printed, then "original finished" is printed. This implies that the second subshell is either (a) not using the same lock as the first subshell, or (b) failing to get the lock but continuing to execute, or (c) something else. Why don't those locks work?

    Read the article

  • Xen domain migration locking problem

    - by brodie
    I am trying to live migrate a VM (domain) between two Xen servers. I have Xen locking (xend-domain-lock = yes) configured, with ocfs2 shared storage between them. The locking is working fine: if I try to start the VM on the secondary server, it refuses to start (which is correct). The problem I am having is that when trying to do a live migration, it seems to try to remove the lock twice. The first lock it removes is for "domain test", the second is for "migrating-test", which does not exist. Should there be a lock for this "migrating-test" VM? These are the relevant options in the Xen config file:

      (xend-relocation-server yes)
      (xend-relocation-port 8002)
      (xend-relocation-address '')
      (xend-relocation-hosts-allow '')
      (xend-domain-lock yes)
      (xend-domain-lock-path /var/lib/xen/lock)

    This is the relevant section of the log:

      [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain test
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) SUSPEND shinfo 000c6ceb
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) delta 21ms, dom0 95%, target 0%, sent 57Mb/s, dirtied 173Mb/s 111 pages 4: sent 111, skipped 0, delta 6ms, dom0 100%, target 0%, sent 606Mb/s, dirtied 606Mb/s 111 pages
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Total pages sent= 131295 (0.99x)
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) (of which 0 were fixups)
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) All memory is saved
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:474) Save exit rc=0
      [2010-06-10 10:45:57 14488] INFO (XendCheckpoint:123) Domain 22 suspended.
      [2010-06-10 10:45:57 14488] DEBUG (XendDomainInfo:2757) XendDomainInfo.destroy: domid=22
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2227) Destroying device model
      [2010-06-10 10:45:58 14488] INFO (image:567) migrating-test device model terminated
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2234) Releasing devices
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vif/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vif, device = vif/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vkbd/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vkbd, device = vkbd/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing console/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = console, device = console/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vbd/51712
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vbd, device = vbd/51712
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:2247) Removing vfb/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:1137) XendDomainInfo.destroyDevice: deviceClass = vfb, device = vfb/0
      [2010-06-10 10:45:58 14488] DEBUG (XendDomainInfo:4054) Releasing lock for domain migrating-test
      [2010-06-10 10:45:59 14488] ERROR (XendDomainInfo:4070) Failed to remove unmanaged directory /var/lib/xen/lock/b01515ae-9173-03cb-0cb7-06f3dfbede8b.

    Read the article

  • Any way to select without causing locking in MySQL?

    - by Shore
    Query: SELECT COUNT(online.account_id) cnt from online; But the online table is also modified by an event, so I frequently see locks when running SHOW PROCESSLIST. Is there any syntax in MySQL that can make a SELECT statement not cause locks? I forgot to mention above that this is on a MySQL slave database. After I added transaction-isolation = READ-UNCOMMITTED to my.cnf, the slave hits this error: Error 'Binary logging not possible. Message: Transaction level 'READ-UNCOMMITTED' in InnoDB is not safe for binlog mode 'STATEMENT'' on query. So, is there a compatible way to do this?

    Read the article
