Search Results

Search found 607 results on 25 pages for 'tb selleo'.

  • Do I need to be worried about these SMART drive temperatures?

    - by Steve Lorimer
    I have 5 hard drives in a machine sitting in a cupboard. /dev/sda is a 500 GB Seagate drive and is the boot disk; /dev/sd{b,c,d,e} are 2 TB drives in a RAID 6 configuration. smartctl is showing significantly higher temperatures (around 140 degrees Celsius) on the RAID drives than on the boot drive. Do I need to be worried? /dev/sdb and /dev/sde are new Western Digital Black drives (new = one week old); /dev/sdc and /dev/sdd are 5-year-old Hitachi drives.

      /dev/sda [SAT], Temperature_Celsius changed from 40 to 39
      /dev/sdc [SAT], Temperature_Celsius changed from 142 to 146
      /dev/sdc [SAT], Temperature_Celsius changed from 146 to 142
      /dev/sdd [SAT], Temperature_Celsius changed from 142 to 146
      /dev/sda [SAT], Airflow_Temperature_Cel changed from 61 to 62
      /dev/sda [SAT], Temperature_Celsius changed from 39 to 38
      /dev/sde [SAT], Temperature_Celsius changed from 107 to 108
      /dev/sdb [SAT], Temperature_Celsius changed from 108 to 109
      /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
      /dev/sdc [SAT], Temperature_Celsius changed from 146 to 150
      /dev/sda [SAT], Airflow_Temperature_Cel changed from 62 to 61
      /dev/sda [SAT], Temperature_Celsius changed from 38 to 39

    Update: adding detailed drive information as per request.

      /dev/sda ===========================
      smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
      Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Family:     Seagate Pipeline HD 5900.2
      Device Model:     ST3500312CS
      Serial Number:    5VV47HXA
      LU WWN Device Id: 5 000c50 02aad5ad6
      Firmware Version: SC13
      User Capacity:    500,107,862,016 bytes [500 GB]
      Sector Size:      512 bytes logical/physical
      Rotation Rate:    5900 rpm
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ATA8-ACS T13/1699-D revision 4
      SATA Version is:  SATA 2.6, 1.5 Gb/s (current: 1.5 Gb/s)
      Local Time is:    Tue Jun 3 10:54:11 2014 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      /dev/sdb ===========================
      smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
      Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Device Model:     WDC WD2003FZEX-00Z4SA0
      Serial Number:    WD-WMC1F1398726
      LU WWN Device Id: 5 0014ee 003b8bd25
      Firmware Version: 01.01A01
      User Capacity:    2,000,398,934,016 bytes [2.00 TB]
      Sector Sizes:     512 bytes logical, 4096 bytes physical
      Rotation Rate:    7200 rpm
      Device is:        Not in smartctl database [for details use: -P showall]
      ATA Version is:   ACS-2 (minor revision not indicated)
      SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
      Local Time is:    Tue Jun 3 10:54:11 2014 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      /dev/sdc ===========================
      smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
      Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Family:     Hitachi Deskstar 7K3000
      Device Model:     Hitachi HDS723020BLA642
      Serial Number:    MN1220F30WSTUD
      LU WWN Device Id: 5 000cca 369cc9f5d
      Firmware Version: MN6OA580
      User Capacity:    2,000,398,934,016 bytes [2.00 TB]
      Sector Size:      512 bytes logical/physical
      Rotation Rate:    7200 rpm
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ATA8-ACS T13/1699-D revision 4
      SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 3.0 Gb/s)
      Local Time is:    Tue Jun 3 10:54:11 2014 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      /dev/sdd ===========================
      smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
      Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Model Family:     Hitachi Deskstar 7K3000
      Device Model:     Hitachi HDS723020BLA642
      Serial Number:    MN1220F30WST4D
      LU WWN Device Id: 5 000cca 369cc9f48
      Firmware Version: MN6OA580
      User Capacity:    2,000,398,934,016 bytes [2.00 TB]
      Sector Size:      512 bytes logical/physical
      Rotation Rate:    7200 rpm
      Device is:        In smartctl database [for details use: -P show]
      ATA Version is:   ATA8-ACS T13/1699-D revision 4
      SATA Version is:  SATA 2.6, 6.0 Gb/s (current: 1.5 Gb/s)
      Local Time is:    Tue Jun 3 10:54:11 2014 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled

      /dev/sde ===========================
      smartctl 6.0 2012-10-10 r3643 [x86_64-linux-3.9.10-100.fc17.x86_64] (local build)
      Copyright (C) 2002-12, Bruce Allen, Christian Franke, www.smartmontools.org

      === START OF INFORMATION SECTION ===
      Device Model:     WDC WD2003FZEX-00Z4SA0
      Serial Number:    WD-WMC1F1483782
      LU WWN Device Id: 5 0014ee 3002d235c
      Firmware Version: 01.01A01
      User Capacity:    2,000,398,934,016 bytes [2.00 TB]
      Sector Sizes:     512 bytes logical, 4096 bytes physical
      Rotation Rate:    7200 rpm
      Device is:        Not in smartctl database [for details use: -P showall]
      ATA Version is:   ACS-2 (minor revision not indicated)
      SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 1.5 Gb/s)
      Local Time is:    Tue Jun 3 10:54:11 2014 EST
      SMART support is: Available - device has SMART capability.
      SMART support is: Enabled
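
    A note on reading these numbers: on many drives (Hitachi models in particular) the value smartd logs for Temperature_Celsius is the normalized SMART attribute, not a temperature in degrees; the actual temperature usually sits in the attribute's raw value. Below is a minimal sketch (not from the original post) that prints both so they can be compared, assuming smartctl is installed and the script has privileges to query the drives:

      import subprocess

      # Compare the normalized Temperature_Celsius value (what smartd logs)
      # against the raw value, which on most drives is the real temperature
      # in degrees Celsius.
      def drive_temperature(device):
          out = subprocess.check_output(["smartctl", "-A", device],
                                        universal_newlines=True)
          for line in out.splitlines():
              if "Temperature_Celsius" in line:
                  fields = line.split()
                  # Attribute table columns: ID# NAME FLAG VALUE WORST THRESH
                  # TYPE UPDATED WHEN_FAILED RAW_VALUE...
                  return int(fields[3]), " ".join(fields[9:])
          return None

      for dev in ["/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde"]:
          temp = drive_temperature(dev)
          if temp:
              print("%s normalized=%d raw=%s" % (dev, temp[0], temp[1]))

    A raw value in the 30-45 range alongside a normalized value of 140+ would mean the drives are fine and only the reporting is confusing.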

  • FAQ: GridView Calculation with JavaScript

    - by Vincent Maverick Durano
    In my previous post I wrote a simple demo on how to calculate totals in a GridView and display them in the footer. It calculates the total amount as you type into the TextBoxes and displays the grand total in the footer of the GridView, and it was a server-side implementation. Many users in the forums are asking how to do the same thing without postbacks, and how to calculate both the sub-total amounts and the grand total together. In this post I will demonstrate how to do this using JavaScript. To get started, let's set up the form. For the simplicity of this demo, I set it up like this:

      <asp:gridview ID="GridView1" runat="server" ShowFooter="true" AutoGenerateColumns="false">
        <Columns>
          <asp:BoundField DataField="RowNumber" HeaderText="Row Number" />
          <asp:BoundField DataField="Description" HeaderText="Item Description" />
          <asp:TemplateField HeaderText="Item Price">
            <ItemTemplate>
              <asp:Label ID="LBLPrice" runat="server" Text='<%# Eval("Price") %>'></asp:Label>
            </ItemTemplate>
          </asp:TemplateField>
          <asp:TemplateField HeaderText="Quantity">
            <ItemTemplate>
              <asp:TextBox ID="TXTQty" runat="server"></asp:TextBox>
            </ItemTemplate>
            <FooterTemplate>
              <b>Total Amount:</b>
            </FooterTemplate>
          </asp:TemplateField>
          <asp:TemplateField HeaderText="Sub-Total">
            <ItemTemplate>
              <asp:Label ID="LBLSubTotal" runat="server"></asp:Label>
            </ItemTemplate>
            <FooterTemplate>
              <asp:Label ID="LBLTotal" runat="server" ForeColor="Green"></asp:Label>
            </FooterTemplate>
          </asp:TemplateField>
        </Columns>
      </asp:gridview>

    As you can see, there's nothing fancy about the markup above. It's just a standard GridView with BoundFields and TemplateFields in it. For the purpose of this demo I use dummy data to populate the GridView. Here's the code:

      public partial class GridCalculation : System.Web.UI.Page
      {
          private void BindDummyDataToGrid()
          {
              DataTable dt = new DataTable();
              DataRow dr = null;
              dt.Columns.Add(new DataColumn("RowNumber", typeof(string)));
              dt.Columns.Add(new DataColumn("Description", typeof(string)));
              dt.Columns.Add(new DataColumn("Price", typeof(string)));

              dr = dt.NewRow();
              dr["RowNumber"] = 1;
              dr["Description"] = "Nike";
              dr["Price"] = "1000";
              dt.Rows.Add(dr);

              dr = dt.NewRow();
              dr["RowNumber"] = 2;
              dr["Description"] = "Converse";
              dr["Price"] = "800";
              dt.Rows.Add(dr);

              dr = dt.NewRow();
              dr["RowNumber"] = 3;
              dr["Description"] = "Adidas";
              dr["Price"] = "500";
              dt.Rows.Add(dr);

              dr = dt.NewRow();
              dr["RowNumber"] = 4;
              dr["Description"] = "Reebok";
              dr["Price"] = "750";
              dt.Rows.Add(dr);

              dr = dt.NewRow();
              dr["RowNumber"] = 5;
              dr["Description"] = "Vans";
              dr["Price"] = "1100";
              dt.Rows.Add(dr);

              dr = dt.NewRow();
              dr["RowNumber"] = 6;
              dr["Description"] = "Fila";
              dr["Price"] = "200";
              dt.Rows.Add(dr);

              //Bind the GridView
              GridView1.DataSource = dt;
              GridView1.DataBind();
          }

          protected void Page_Load(object sender, EventArgs e)
          {
              if (!IsPostBack)
              {
                  BindDummyDataToGrid();
              }
          }
      }

    Now try to run the page. The output should look something like below:

    The Client-Side Calculation

    Here's the code for the GridView calculation:

      <script type="text/javascript">
          function CalculateTotals() {
              var gv = document.getElementById("<%= GridView1.ClientID %>");
              var tb = gv.getElementsByTagName("input");
              var lb = gv.getElementsByTagName("span");
              var sub = 0;
              var total = 0;
              var indexQ = 1;
              var indexP = 0;
              for (var i = 0; i < tb.length; i++) {
                  if (tb[i].type == "text") {
                      sub = parseFloat(lb[indexP].innerHTML) * parseFloat(tb[i].value);
                      if (isNaN(sub)) {
                          lb[i + indexQ].innerHTML = "";
                          sub = 0;
                      } else {
                          lb[i + indexQ].innerHTML = sub;
                      }
                      indexQ++;
                      indexP = indexP + 2;
                      total += parseFloat(sub);
                  }
              }
              lb[lb.length - 1].innerHTML = total;
          }
      </script>

    The code above calculates the sub-total by multiplying the price and the quantity, and at the same time calculates the total amount by adding up the sub-total values. You can then simply wire up the JavaScript function like this:

      <ItemTemplate>
        <asp:TextBox ID="TXTQty" runat="server" onkeyup="CalculateTotals();"></asp:TextBox>
      </ItemTemplate>

    Running the code above will display something like below: That's it! I hope someone finds this post useful!

  • LSI 9285-8e and Supermicro SC837E26-RJBOD1 duplicate enclosure ID and slot numbers

    - by Andy Shinn
    I am working with 2 x Supermicro SC837E26-RJBOD1 chassis connected to a single LSI 9285-8e card in a Supermicro 1U host. There are 28 drives in each chassis for a total of 56 drives in 28 RAID1 mirrors. The problem I am running in to is that there are duplicate slots for the 2 chassis (the slots list twice and only go from 0 to 27). All the drives also show the same enclosure ID (ID 36). However, MegaCLI -encinfo lists the 2 enclosures correctly (ID 36 and ID 65). My question is, why would this happen? Is there an option I am missing to use 2 enclosures effectively? This is blocking me rebuilding a drive that failed in slot 11 since I can only specify enclosure and slot as parameters to replace a drive. When I do this, it picks the wrong slot 11 (device ID 46 instead of device ID 19). Adapter #1 is the LSI 9285-8e, adapter #0 (which I removed due to space limitations) is the onboard LSI. Adapter information: Adapter #1 ============================================================================== Versions ================ Product Name : LSI MegaRAID SAS 9285-8e Serial No : SV12704804 FW Package Build: 23.1.1-0004 Mfg. Data ================ Mfg. Date : 06/30/11 Rework Date : 00/00/00 Revision No : 00A Battery FRU : N/A Image Versions in Flash: ================ BIOS Version : 5.25.00_4.11.05.00_0x05040000 WebBIOS Version : 6.1-20-e_20-Rel Preboot CLI Version: 05.01-04:#%00001 FW Version : 3.140.15-1320 NVDATA Version : 2.1106.03-0051 Boot Block Version : 2.04.00.00-0001 BOOT Version : 06.253.57.219 Pending Images in Flash ================ None PCI Info ================ Vendor Id : 1000 Device Id : 005b SubVendorId : 1000 SubDeviceId : 9285 Host Interface : PCIE ChipRevision : B0 Number of Frontend Port: 0 Device Interface : PCIE Number of Backend Port: 8 Port : Address 0 5003048000ee8e7f 1 5003048000ee8a7f 2 0000000000000000 3 0000000000000000 4 0000000000000000 5 0000000000000000 6 0000000000000000 7 0000000000000000 HW Configuration ================ SAS Address : 500605b0038f9210 BBU : Present Alarm : Present NVRAM : Present Serial Debugger : Present Memory : Present Flash : Present Memory Size : 1024MB TPM : Absent On board Expander: Absent Upgrade Key : Absent Temperature sensor for ROC : Present Temperature sensor for controller : Absent ROC temperature : 70 degree Celcius Settings ================ Current Time : 18:24:36 3/13, 2012 Predictive Fail Poll Interval : 300sec Interrupt Throttle Active Count : 16 Interrupt Throttle Completion : 50us Rebuild Rate : 30% PR Rate : 30% BGI Rate : 30% Check Consistency Rate : 30% Reconstruction Rate : 30% Cache Flush Interval : 4s Max Drives to Spinup at One Time : 2 Delay Among Spinup Groups : 12s Physical Drive Coercion Mode : Disabled Cluster Mode : Disabled Alarm : Enabled Auto Rebuild : Enabled Battery Warning : Enabled Ecc Bucket Size : 15 Ecc Bucket Leak Rate : 1440 Minutes Restore HotSpare on Insertion : Disabled Expose Enclosure Devices : Enabled Maintain PD Fail History : Enabled Host Request Reordering : Enabled Auto Detect BackPlane Enabled : SGPIO/i2c SEP Load Balance Mode : Auto Use FDE Only : No Security Key Assigned : No Security Key Failed : No Security Key Not Backedup : No Default LD PowerSave Policy : Controller Defined Maximum number of direct attached drives to spin up in 1 min : 10 Any Offline VD Cache Preserved : No Allow Boot with Preserved Cache : No Disable Online Controller Reset : No PFK in NVRAM : No Use disk activity for locate : No Capabilities ================ RAID Level Supported : RAID0, RAID1, RAID5, RAID6, 
RAID00, RAID10, RAID50, RAID60, PRL 11, PRL 11 with spanning, SRL 3 supported, PRL11-RLQ0 DDF layout with no span, PRL11-RLQ0 DDF layout with span Supported Drives : SAS, SATA Allowed Mixing: Mix in Enclosure Allowed Mix of SAS/SATA of HDD type in VD Allowed Status ================ ECC Bucket Count : 0 Limitations ================ Max Arms Per VD : 32 Max Spans Per VD : 8 Max Arrays : 128 Max Number of VDs : 64 Max Parallel Commands : 1008 Max SGE Count : 60 Max Data Transfer Size : 8192 sectors Max Strips PerIO : 42 Max LD per array : 16 Min Strip Size : 8 KB Max Strip Size : 1.0 MB Max Configurable CacheCade Size: 0 GB Current Size of CacheCade : 0 GB Current Size of FW Cache : 887 MB Device Present ================ Virtual Drives : 28 Degraded : 0 Offline : 0 Physical Devices : 59 Disks : 56 Critical Disks : 0 Failed Disks : 0 Supported Adapter Operations ================ Rebuild Rate : Yes CC Rate : Yes BGI Rate : Yes Reconstruct Rate : Yes Patrol Read Rate : Yes Alarm Control : Yes Cluster Support : No BBU : No Spanning : Yes Dedicated Hot Spare : Yes Revertible Hot Spares : Yes Foreign Config Import : Yes Self Diagnostic : Yes Allow Mixed Redundancy on Array : No Global Hot Spares : Yes Deny SCSI Passthrough : No Deny SMP Passthrough : No Deny STP Passthrough : No Support Security : No Snapshot Enabled : No Support the OCE without adding drives : Yes Support PFK : Yes Support PI : No Support Boot Time PFK Change : Yes Disable Online PFK Change : No PFK TrailTime Remaining : 0 days 0 hours Support Shield State : Yes Block SSD Write Disk Cache Change: Yes Supported VD Operations ================ Read Policy : Yes Write Policy : Yes IO Policy : Yes Access Policy : Yes Disk Cache Policy : Yes Reconstruction : Yes Deny Locate : No Deny CC : No Allow Ctrl Encryption: No Enable LDBBM : No Support Breakmirror : No Power Savings : Yes Supported PD Operations ================ Force Online : Yes Force Offline : Yes Force Rebuild : Yes Deny Force Failed : No Deny Force Good/Bad : No Deny Missing Replace : No Deny Clear : No Deny Locate : No Support Temperature : Yes Disable Copyback : No Enable JBOD : No Enable Copyback on SMART : No Enable Copyback to SSD on SMART Error : Yes Enable SSD Patrol Read : No PR Correct Unconfigured Areas : Yes Enable Spin Down of UnConfigured Drives : Yes Disable Spin Down of hot spares : No Spin Down time : 30 T10 Power State : Yes Error Counters ================ Memory Correctable Errors : 0 Memory Uncorrectable Errors : 0 Cluster Information ================ Cluster Permitted : No Cluster Active : No Default Settings ================ Phy Polarity : 0 Phy PolaritySplit : 0 Background Rate : 30 Strip Size : 64kB Flush Time : 4 seconds Write Policy : WB Read Policy : Adaptive Cache When BBU Bad : Disabled Cached IO : No SMART Mode : Mode 6 Alarm Disable : Yes Coercion Mode : None ZCR Config : Unknown Dirty LED Shows Drive Activity : No BIOS Continue on Error : No Spin Down Mode : None Allowed Device Type : SAS/SATA Mix Allow Mix in Enclosure : Yes Allow HDD SAS/SATA Mix in VD : Yes Allow SSD SAS/SATA Mix in VD : No Allow HDD/SSD Mix in VD : No Allow SATA in Cluster : No Max Chained Enclosures : 16 Disable Ctrl-R : Yes Enable Web BIOS : Yes Direct PD Mapping : No BIOS Enumerate VDs : Yes Restore Hot Spare on Insertion : No Expose Enclosure Devices : Yes Maintain PD Fail History : Yes Disable Puncturing : No Zero Based Enclosure Enumeration : No PreBoot CLI Enabled : Yes LED Show Drive Activity : Yes Cluster Disable : Yes SAS Disable : No Auto Detect BackPlane Enable 
: SGPIO/i2c SEP Use FDE Only : No Enable Led Header : No Delay during POST : 0 EnableCrashDump : No Disable Online Controller Reset : No EnableLDBBM : No Un-Certified Hard Disk Drives : Allow Treat Single span R1E as R10 : No Max LD per array : 16 Power Saving option : Don't Auto spin down Configured Drives Max power savings option is not allowed for LDs. Only T10 power conditions are to be used. Default spin down time in minutes: 30 Enable JBOD : No TTY Log In Flash : No Auto Enhanced Import : No BreakMirror RAID Support : No Disable Join Mirror : No Enable Shield State : Yes Time taken to detect CME : 60s Exit Code: 0x00 Enclosure information: # /opt/MegaRAID/MegaCli/MegaCli64 -encinfo -a1 Number of enclosures on adapter 1 -- 3 Enclosure 0: Device ID : 36 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port B Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 65 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11820 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 48 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 1: Device ID : 65 Number of Slots : 28 Number of Power Supplies : 2 Number of Fans : 3 Number of Temperature Sensors : 1 Number of Alarms : 1 Number of SIM Modules : 0 Number of Physical Drives : 28 Status : Normal Position : 1 Connector Name : Port A Enclosure type : SES VendorId is LSI CORP and Product Id is SAS2X36 VendorID and Product ID didnt match FRU Part Number : N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : 36 Inquiry data : Vendor Identification : LSI CORP Product Identification : SAS2X36 Product Revision Level : 0718 Vendor Specific : x36-55.7.24.1 Number of Voltage Sensors :2 Voltage Sensor :0 Voltage Sensor Status :OK Voltage Value :5020 milli volts Voltage Sensor :1 Voltage Sensor Status :OK Voltage Value :11760 milli volts Number of Power Supplies : 2 Power Supply : 0 Power Supply Status : OK Power Supply : 1 Power Supply Status : OK Number of Fans : 3 Fan : 0 Fan Speed :Low Speed Fan Status : OK Fan : 1 Fan Speed :Low Speed Fan Status : OK Fan : 2 Fan Speed :Low Speed Fan Status : OK Number of Temperature Sensors : 1 Temp Sensor : 0 Temperature : 47 Temperature Sensor Status : OK Number of Chassis : 1 Chassis : 0 Chassis Status : OK Enclosure 2: Device ID : 252 Number of Slots : 8 Number of Power Supplies : 0 Number of Fans : 0 Number of Temperature Sensors : 0 Number of Alarms : 0 Number of SIM Modules : 1 Number of Physical Drives : 0 Status : Normal Position : 1 Connector Name : Unavailable Enclosure type : SGPIO Failed in first Inquiry commnad FRU Part Number : 
N/A Enclosure Serial Number : N/A ESM Serial Number : N/A Enclosure Zoning Mode : N/A Partner Device Id : Unavailable Inquiry data : Vendor Identification : LSI Product Identification : SGPIO Product Revision Level : N/A Vendor Specific : Exit Code: 0x00 Now, notice that each slot 11 device shows an enclosure ID of 36, I think this is where the discrepancy happens. One should be 36. But the other should be on enclosure 65. Drives in slot 11: Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 5, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 48 WWN: Sequence Number: 11 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : YES Device Firmware Level: A5C0 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8a53 Connected Port Number: 1(path0) Inquiry Data: MJ1311YNG6YYXAHitachi HDS5C3030ALA630 MEAOA5C0 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Enclosure Device ID: 36 Slot Number: 11 Drive's postion: DiskGroup: 19, Span: 0, Arm: 1 Enclosure position: 0 Device Id: 19 WWN: Sequence Number: 4 Media Error Count: 0 Other Error Count: 0 Predictive Failure Count: 0 Last Predictive Failure Event Seq Number: 0 PD Type: SATA Raw Size: 2.728 TB [0x15d50a3b0 Sectors] Non Coerced Size: 2.728 TB [0x15d40a3b0 Sectors] Coerced Size: 2.728 TB [0x15d400000 Sectors] Firmware state: Online, Spun Up Is Commissioned Spare : NO Device Firmware Level: A580 Shield Counter: 0 Successful diagnostics completion on : N/A SAS Address(0): 0x5003048000ee8e53 Connected Port Number: 0(path0) Inquiry Data: MJ1313YNG1VA5CHitachi HDS5C3030ALA630 MEAOA580 FDE Enable: Disable Secured: Unsecured Locked: Unlocked Needs EKM Attention: No Foreign State: None Device Speed: 6.0Gb/s Link Speed: 6.0Gb/s Media Type: Hard Disk Device Drive Temperature :30C (86.00 F) PI Eligibility: No Drive is formatted for PI information: No PI: No PI Drive's write cache : Disabled Drive's NCQ setting : Enabled Port-0 : Port status: Active Port's Linkspeed: 6.0Gb/s Drive has flagged a S.M.A.R.T alert : No Update 06/28/12: I finally have some new information about (what we think) the root cause of this problem so I thought I would share. After getting in contact with a very knowledgeable Supermicro tech, they provided us with a tool called Xflash (doesn't appear to be readily available on their FTP). When we gathered some information using this utility, my colleague found something very strange: root@mogile2 test]# ./xflash.dat -i get avail Initializing Interface. Expander: SAS2X36 (SAS2x36) 1) SAS2X36 (SAS2x36) (50030480:00EE917F) (0.0.0.0) 2) SAS2X36 (SAS2x36) (50030480:00E9D67F) (0.0.0.0) 3) SAS2X36 (SAS2x36) (50030480:0112D97F) (0.0.0.0) This lists the connected enclosures. You see the 3 connected (we have since added a 3rd and a 4th which is not yet showing up) with their respective SAS address / WWN (50030480:00EE917F). 
    Now we can use this address to get information on the individual enclosures:

      [root@mogile2 test]# ./xflash.dat -i 5003048000EE917F get exp
      Initializing Interface.
      Expander: SAS2X36 (SAS2x36)
      Reading the expander information..........
      Expander: SAS2X36 (SAS2x36) B3
      SAS Address: 50030480:00EE917F
      Enclosure Logical Id: 50030480:0000007F
      IP Address: 0.0.0.0
      Component Identifier: 0x0223
      Component Revision: 0x05

      [root@mogile2 test]# ./xflash.dat -i 5003048000E9D67F get exp
      Initializing Interface.
      Expander: SAS2X36 (SAS2x36)
      Reading the expander information..........
      Expander: SAS2X36 (SAS2x36) B3
      SAS Address: 50030480:00E9D67F
      Enclosure Logical Id: 50030480:0000007F
      IP Address: 0.0.0.0
      Component Identifier: 0x0223
      Component Revision: 0x05

      [root@mogile2 test]# ./xflash.dat -i 500304800112D97F get exp
      Initializing Interface.
      Expander: SAS2X36 (SAS2x36)
      Reading the expander information..........
      Expander: SAS2X36 (SAS2x36) B3
      SAS Address: 50030480:0112D97F
      Enclosure Logical Id: 50030480:0112D97F
      IP Address: 0.0.0.0
      Component Identifier: 0x0223
      Component Revision: 0x05

    Did you catch it? The first two enclosures' logical IDs are partially masked out, while the third one (which has a correct, unique enclosure logical ID) is not. We pointed this out to Supermicro and were able to confirm that this address is supposed to be set during manufacturing, and that there was a problem with a certain batch of these enclosures where the logical ID was not set. We believe that the RAID controller determines the enclosure ID based on the logical ID, and since our first two enclosures have the same logical ID, they get the same enclosure ID. We also confirmed that 0000007F is the default ID that comes from LSI. The next pointer suggesting this could be a manufacturing problem with a run of JBODs is the fact that all 6 of the enclosures that have this problem begin with 00E. I believe that somewhere between 00E8 and 00EE Supermicro forgot to program the logical IDs correctly and neglected to recall or fix the problem post-production. Fortunately for us, there is a tool from Supermicro to manage the WWN and logical ID of the devices: ftp://ftp.supermicro.com/utility/ExpanderXtools_Lite/. Our next step is to schedule a shutdown of these JBODs (after data migration), reprogram the logical IDs and see if that solves the problem.

    Update 06/28/12 #2: I just discovered this FAQ at Supermicro while Google searching for "lsi 0000007f": http://www.supermicro.com/support/faqs/faq.cfm?faq=11805. I still don't understand why, in the several times we contacted Supermicro, they never directed us to this article. :\
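
    Until the logical IDs are reprogrammed, one way to tell the two chassis apart from the host is to group drives by the expander portion of their SAS addresses, which stays unique even when both expanders report enclosure ID 36. A rough sketch of that idea (a hypothetical helper, not from the post), assuming MegaCli64 at the path used above:

      import subprocess

      MEGACLI = "/opt/MegaRAID/MegaCli/MegaCli64"

      # Group physical drives by the expander part of their SAS address.
      # Heuristic: the trailing hex digits vary per slot/PHY, so the leading
      # part identifies the chassis the drive is wired through.
      def drives_by_expander(adapter=1):
          out = subprocess.check_output([MEGACLI, "-pdlist", "-a%d" % adapter],
                                        universal_newlines=True)
          groups, slot, device_id = {}, None, None
          for line in out.splitlines():
              if line.startswith("Slot Number:"):
                  slot = int(line.split(":")[1])
              elif line.startswith("Device Id:"):
                  device_id = int(line.split(":")[1])
              elif line.startswith("SAS Address(0):"):
                  sas = line.split(":")[1].strip()
                  groups.setdefault(sas[:16], []).append((slot, device_id))
          return groups

      for expander, drives in sorted(drives_by_expander().items()):
          print(expander, sorted(drives))

    With the output grouped this way, the slot-11 drive on each chassis maps to its own device ID, which is exactly the distinction the controller's enclosure/slot addressing loses.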

  • sqlalchemy date type in 0.6 migration using mssql

    - by nosklo
    I'm connecting to an MSSQL server through pyodbc, via the FreeTDS ODBC driver, on Ubuntu 10.04. SQLAlchemy 0.5 uses DATETIME for sqlalchemy.Date() fields. SQLAlchemy 0.6 now uses DATE, but SQL Server 2000 doesn't have a DATE type. How can I make DATETIME the default for sqlalchemy.Date() on the SQLAlchemy 0.6 mssql+pyodbc dialect? I'd like to keep it as clean as possible. Here's code to reproduce the issue:

      import sqlalchemy
      from sqlalchemy import Table, Column, MetaData, Date, Integer, create_engine

      engine = create_engine(
          'mssql+pyodbc://sa:sa@myserver/mydb?driver=FreeTDS')
      m = MetaData(bind=engine)
      tb = sqlalchemy.Table('test_date', m,
          Column('id', Integer, primary_key=True),
          Column('dt', Date())
      )
      tb.create()

    And here is the traceback I'm getting:

      Traceback (most recent call last):
        File "/tmp/teste.py", line 15, in <module>
          tb.create()
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/schema.py", line 428, in create
          bind.create(self, checkfirst=checkfirst)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1647, in create
          connection=connection, **kwargs)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1682, in _run_visitor
          **kwargs).traverse_single(element)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/sql/visitors.py", line 77, in traverse_single
          return meth(obj, **kw)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/ddl.py", line 58, in visit_table
          self.connection.execute(schema.CreateTable(table))
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1157, in execute
          params)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1210, in _execute_ddl
          return self.__execute_context(context)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1268, in __execute_context
          context.parameters[0], context=context)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1367, in _cursor_execute
          context)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/base.py", line 1360, in _cursor_execute
          context)
        File "/home/nosklo/.local/lib/python2.6/site-packages/sqlalchemy/engine/default.py", line 277, in do_execute
          cursor.execute(statement, parameters)
      sqlalchemy.exc.ProgrammingError: (ProgrammingError) ('42000', '[42000] [FreeTDS][SQL Server]Column or parameter #2: Cannot find data type DATE. (2715) (SQLExecDirectW)') '\nCREATE TABLE test_date (\n\tid INTEGER NOT NULL IDENTITY(1,1), \n\tdt DATE NULL, \n\tPRIMARY KEY (id)\n)\n\n' ()
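
    One possible workaround (a sketch, not an official SQLAlchemy recipe) is to wrap the column in a TypeDecorator that acts like a date in Python but is emitted as DATETIME in the generated DDL, which SQL Server 2000 accepts:

      from sqlalchemy import types

      # Behaves like Date for application code, but CREATE TABLE renders the
      # column as DATETIME because the implementation type is DateTime.
      class LegacyDate(types.TypeDecorator):
          impl = types.DateTime

          def process_result_value(self, value, dialect):
              # Truncate the time part so callers still see a plain date.
              return value.date() if value is not None else None

      # Usage: Column('dt', LegacyDate()) instead of Column('dt', Date())

    Whether this belongs in application code or in a dialect-level override is a matter of taste; the TypeDecorator route should work unchanged across SQLAlchemy 0.5 and 0.6.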

  • In Python epoll, can I avoid errno.EWOULDBLOCK and errno.EAGAIN?

    - by davyzhang
    I wrote an epoll wrapper in Python. It works fine, but recently I found the performance is not ideal for sending large packets. I looked into the code and found there are actually a LOT of these errors:

      Traceback (most recent call last):
        File "/Users/dawn/Documents/workspace/work/dev/server/sandbox/single_point/tcp_epoll.py", line 231, in send_now
          num_bytes = self.sock.send(self.response)
      error: [Errno 35] Resource temporarily unavailable

    Previously I silenced them, as the documentation suggests, so my sending function was done this way:

      def send_now(self):
          '''send message at once'''
          st = time.time()
          times = 0
          while self.response != '':
              try:
                  num_bytes = self.sock.send(self.response)
                  l.info('msg wrote %s %d : %r size %r', self.ip, self.port,
                         self.response[:num_bytes], num_bytes)
                  self.response = self.response[num_bytes:]
              except socket.error, e:
                  if e[0] in (errno.EWOULDBLOCK, errno.EAGAIN):
                      # here I printed it, but I silence it in normal days
                      # print 'would block, again %r', tb.format_exc()
                      break
                  else:
                      l.warning('%r %r socket error %r', self.ip, self.port, tb.format_exc())
                      # must break or it causes a dead loop
                      break
              except:
                  # other exceptions
                  l.warning('%r %r msg write error %r', self.ip, self.port, tb.format_exc())
                  break
              times += 1
          et = time.time()

    I googled it, and it says this is caused by the network send buffer temporarily running out. So how can I detect this condition manually and efficiently, instead of letting it reach the exception path? It costs too much time to raise and handle the exception.
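
    For reference, the usual way to avoid paying for the exception is to stop probing the socket with send() and instead subscribe to writability: keep EPOLLOUT in the event mask only while data is pending, and write when epoll reports the socket writable. A minimal sketch of that pattern (names are illustrative, not from the question's wrapper):

      import errno
      import select
      import socket

      class Writer(object):
          def __init__(self, epoll, sock):
              self.epoll, self.sock, self.response = epoll, sock, b''
              self.epoll.register(sock.fileno(), select.EPOLLIN)

          def queue(self, data):
              # Ask for writability events only while there is pending data.
              self.response += data
              self.epoll.modify(self.sock.fileno(),
                                select.EPOLLIN | select.EPOLLOUT)

          def on_writable(self):
              # Called from the event loop when epoll reports EPOLLOUT.
              try:
                  n = self.sock.send(self.response)
                  self.response = self.response[n:]
              except socket.error as e:
                  # Spurious wakeups can still happen; EAGAIN is not fatal.
                  if e.errno not in (errno.EWOULDBLOCK, errno.EAGAIN):
                      raise
              if not self.response:
                  self.epoll.modify(self.sock.fileno(), select.EPOLLIN)

    This way EAGAIN becomes rare instead of routine: the kernel tells you when buffer space is available, rather than you discovering it through exceptions.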

  • Problem with WPF Data Binding Defined in Code Not Updating UI Elements

    - by Ben
    I need to define new UI elements as well as data bindings in code, because they will be created at run time. Here is a simplified version of what I am trying to do. Data model:

      public class AddressBook : INotifyPropertyChanged
      {
          private int _houseNumber;
          public int HouseNumber
          {
              get { return _houseNumber; }
              set
              {
                  _houseNumber = value;
                  NotifyPropertyChanged("HouseNumber");
              }
          }

          public event PropertyChangedEventHandler PropertyChanged;

          protected void NotifyPropertyChanged(string sProp)
          {
              if (PropertyChanged != null)
              {
                  PropertyChanged(this, new PropertyChangedEventArgs(sProp));
              }
          }
      }

    Binding in code:

      AddressBook book = new AddressBook();
      book.HouseNumber = 123;
      TextBlock tb = new TextBlock();
      Binding bind = new Binding("HouseNumber");
      bind.Source = book;
      bind.Mode = BindingMode.OneWay;
      tb.SetBinding(TextBlock.TextProperty, bind); // Text block displays "123"
      myGrid.Children.Add(tb);
      book.HouseNumber = 456; // Text block still displays "123" but PropertyChanged event fires

    When the data is first bound, the text block is updated with the correct house number. Then, if I change the house number in code later, the book's PropertyChanged event fires, but the text block is not updated. Can anyone tell me why? Thanks, Ben

  • Dynamically cast a control type at runtime

    - by JayT
    Hello, I have an application whereby I dynamically create controls on a form from a database. This works well, but my problem is the following:

      private Type activeControlType;

      private void addControl(ContainerControl inputControl, string ControlName,
          string Namespace, string ControlDisplayText, DataRow drow, string cntrlName)
      {
          Assembly assem;
          Type myType = Type.GetType(ControlName + ", " + Namespace);
          assem = Assembly.GetAssembly(myType);
          Type controlType = assem.GetType(ControlName);
          object obj = Activator.CreateInstance(controlType);
          Control tb = (Control)obj;
          tb.Click += new EventHandler(Cntrl_Click);
          inputControl.Controls.Add(tb);
          activeControlType = controlType;
      }

      private void Cntrl_Click(object sender, EventArgs e)
      {
          string test = ((activeControlType)sender).Text; //Problem ???
      }

    How do I dynamically cast the sender object to a class so that I can reference its property fields? I have googled, and found myself trying everything I have come across... Now I am extremely confused and in need of some help. Thnx, JT

  • Sun Storage 2500-M2 Array and Sun Fire X4470 M2 Server

    - by nospam(at)example.com (Joerg Moellenkamp)
    There is some new hardware in the Oracle portfolio. The first one is the Sun Fire X4470 M2 Server. There was a lot of talk about the system before because of benchmark results, but now it's finally announced: two or four Intel Xeon E7-4800 processors, and up to 1 TB of memory, as the system provides 64 DIMM slots with 16 GB DDR DIMMs. The memory is placed on riser cards right behind the fans of the chassis. Up to 6 internal drives, all in a 3 RU package. The other announcement was the Sun Storage 2500 M2, announced yesterday: from 5 to 48 drives (the latter number with three expansion trays) for up to 28.8 TB of storage. The array is SAS-based internally; you can put 300 GB and 600 GB drives in it. The 2540-M2 provides 4 (optionally 8) FC ports with up to 8 Gbit/s. The 2530-M2 has 4 SAS2 ports with up to 6 Gbit/s. It has 2 integrated controllers providing 2 GB of cache, protected by a power backup for 72 hours. The controllers enable the arrays to deliver the 0, 1, 10, 3, 5, 6 and (P+Q) RAID levels.

  • Oracle Announces Oracle Big Data Appliance X3-2 and Enhanced Oracle Big Data Connectors

    - by jgelhaus
    Enables Customers to Easily Harness the Business Value of Big Data at Lower Cost

    Engineered System Simplifies Big Data for the Enterprise

    Oracle Big Data Appliance X3-2 hardware features the latest 8-core Intel Xeon E5-2600 series of processors and, compared with the previous generation, the 18 compute and storage servers with 648 TB raw storage now offer:

    - 33 percent more processing power with 288 CPU cores;
    - 33 percent more memory per node with 1.1 TB of main memory;
    - up to a 30 percent reduction in power and cooling.

    Oracle Big Data Appliance X3-2 further simplifies implementation and management of big data by integrating all the hardware and software required to acquire, organize and analyze big data. It includes:

    - Support for CDH4.1, including software upgrades developed collaboratively with Cloudera to simplify NameNode High Availability in Hadoop, eliminating the single point of failure in a Hadoop cluster;
    - Oracle NoSQL Database Community Edition 2.0, the latest version, which brings better Hadoop integration, elastic scaling and new APIs, including JSON and C support;
    - The Oracle Enterprise Manager plug-in for Big Data Appliance, which complements Cloudera Manager to enable users to more easily manage a Hadoop cluster;
    - Updated distributions of Oracle Linux and the Oracle Java Development Kit;
    - An updated distribution of open source R, optimized to work with high-performance multi-threaded math libraries.

    Read more: Data sheet: Oracle Big Data Appliance X3-2 | Oracle Big Data Appliance: Datacenter Network Integration | Big Data and Natural Language: Extracting Insight From Text | Thomson Reuters Discusses Oracle's Big Data Platform

    Connectors Integrate Hadoop with Oracle Big Data Ecosystem

    Oracle Big Data Connectors is a suite of software built by Oracle to integrate Apache Hadoop with Oracle Database, Oracle Data Integrator, and Oracle R Distribution. Enhancements to Oracle Big Data Connectors extend these data integration capabilities. With updates to every connector, this release includes:

    - Oracle SQL Connector for Hadoop Distributed File System, for high-performance SQL queries on Hadoop data from Oracle Database, enhanced with increased automation and querying of Hive tables, and now supported within the Oracle Data Integrator Application Adapter for Hadoop;
    - Transparent access to the Hive Query Language from R and the introduction of new analytic techniques executing natively in Hadoop, enabling R developers to be more productive by increasing access to Hadoop in the R environment.

    Read more: Data sheet: Oracle Big Data Connectors | High Performance Connectors for Load and Access of Data from Hadoop to Oracle Database

  • How do I roll back to the shipped version of Thunderbird?

    - by kallakafar
    I was using Thunderbird v15.0 on Ubuntu 12.04 LTS until now, and have the Lightning extension installed to manage my calendar within Thunderbird. Everything was working fine until I decided to update Thunderbird to the latest version, 16.0, from the Ubuntu repository. The installation was successful, and my profile and everything else was taken care of perfectly, except that Lightning is now disabled: Lightning v1.7 is NOT yet compatible with the latest Thunderbird v16. As a result I am at a loss with all my scheduling. Now I would like to go back to Thunderbird v15 so that I can use Lightning. The Ubuntu repository only gives TB v16 now. On the Mozilla site they are still offering v15 for Linux, so I downloaded the tarball and uncompressed it using the command line. Now I have a folder called thunderbird. There are no readme/configuration files; the folder contains the following executable files: crashreporter, mozilla-remote-client, plugin-container, thunderbird and thunderbird-bin. I tried invoking thunderbird and thunderbird-bin from the command line using sudo, and still nothing opens. I have execute permissions on the folder contents. I'm quite new to Linux. Please let me know why I'm not able to launch Thunderbird. Did I install it incorrectly? Please let me know if I can get a .deb package for TB v15.

  • Is RAID 0 or JBOD better for home media server?

    - by Donald Hughes
    I have an external two-bay drive enclosure (the OWC Mercury Elite-AL Pro) connected to a Mac Mini (my home media server) over FireWire 800. I'm streaming media to other computers in the house over wired gigabit. I have two 1.5 TB drives that I'm using independently right now: the media is on one, and I'm mirroring the files to the other drive at night as a backup. But as I approach filling up the drive, I want to span those two drives together to give me a total of about 3 TB, and then buy another drive for backups. The external enclosure supports both RAID 0 and JBOD, but I'm not clear on which would be better in this situation. Would RAID 0 provide any performance improvement over JBOD for streaming video (possibly several streams at once)? How does each affect the MTBF of the drives? In general, should I choose RAID 0, JBOD, or keep them independent?

  • Path of md device wrong after reboot

    - by flammi88
    I have to set up a software RAID (level 1) on an Ubuntu Server 12.04 machine. It should serve files on the network via Samba. The server has the following disks:

    - 250 GB SATA HDD (Ubuntu is installed on that drive)
    - 2 TB SATA HDD (first disk in the RAID array, data disk)
    - 2 TB SATA HDD (second data disk)

    I created one partition on each data disk with the type 'Linux raid autodetect'. In the second step I created the RAID 1 array with the following command:

      mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

    After that, I added the array to the mdadm config:

      mdadm --examine --scan >> /etc/mdadm/mdadm.conf

    The problem is: after a reboot the array is not available at the path /dev/md0. Instead it gets reassembled as /dev/md/0, and it is not very reliable. Does anybody have a solution for this issue?
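
    For what it's worth, a common cause of this symptom is the initramfs assembling the array before /etc/mdadm/mdadm.conf is consulted, so the array comes up under a metadata-derived name such as /dev/md/0 (or /dev/md127). A hypothetical sanity check (a sketch, assuming Ubuntu's paths) that compares the running arrays against the config file:

      import subprocess

      # List arrays as the kernel currently sees them and flag any whose UUID
      # is missing from mdadm.conf; after fixing the conf, the initramfs must
      # be rebuilt (update-initramfs -u) for the name to stick across reboots.
      running = subprocess.check_output(["mdadm", "--detail", "--scan"],
                                        universal_newlines=True)
      with open("/etc/mdadm/mdadm.conf") as fh:
          conf = fh.read()

      for line in running.splitlines():
          if not line.startswith("ARRAY"):
              continue
          device = line.split()[1]
          uuids = [t for t in line.split() if t.startswith("UUID=")]
          if uuids and uuids[0] not in conf:
              print("%s (%s) is not in mdadm.conf" % (device, uuids[0]))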

  • Strategy for using snapshots to back up Ubuntu Linux server?

    - by MountainX
    I need some backup advice for my home file server. Here are the mount points, volume groups, logical volumes and used/total space of all the volumes on my Ubuntu 8.10 home file server:

      /        vgA/lvRoot                  [7.5G/50G]
      /tmp     vgB/lvTmp                   [195M/30G]
      /var     vgB/lvVar                   [780M/30G]
      swap     vgB/lvSwap                  [16.00 GB]
      /media1  vgC/lvMedia1                [400G/975G]
      /media2  vgC/lvMedia2                [75G/295G]
      /boot    partition (no volume group) [95M/200M]
      /video   partition (no volume group) [450G/950G]
      /backups vgD/lvBackupTarget          [800G/925G]
      /home    vgE/lvHome                  [85G/200G]

    I have just added a 2.0 TB external USB drive that I would like to use to back up everything. (It will be a close fit to get it all on one 2.0 TB drive. I actually have a 2nd external USB drive if needed.) I'd like to back up "/", /var, /media1, /media2 and /home. I'll deal with /boot and /video separately since they are not logical volumes. For all the logical volumes I'm anticipating taking snapshots and then copying those snapshots to the 2.0 TB external USB drive. I have never done a task like that before. If I do that, I could use the tutorial I found here: http://www.howtoforge.com/linux_lvm_snapshots

    My questions are:

    1. What is the best overall strategy? Is it LVM snapshots, as I'm assuming?
    2. How should I prepare, subdivide and mount the 2.0 TB external USB drive?
       2.a. Should I create one or more regular partitions, or should I create a physical volume with one or more logical volumes?
       2.b. Would it be advisable to exactly mirror the source pv/lv layout on the external drive, and if so, is this a good strategy?
    3. What's the best way to get the snapshots onto the external drive? dd?

    Even though this is a strategy question, feedback with actual commands is appreciated. I need step-by-step, cookbook-style help because I don't do much server admin work.

    (Background: This is a home file server that I have rarely had to touch in about 2 years. It has done its job without much intervention. The really old PC that I used to back everything up recently failed, so I'm replacing it with the external USB drive(s), and I'd like to upgrade my backup strategy at the same time. Previously, I just copied stuff from /backups over to the other computer, and that would not have made things very easy in a real restore situation. The /backups mount point contains backup copies of "most" of the important data on a file-by-file basis, but it does not contain copies of /boot, etc. BTW, the actual internal HDD that holds /backups is separate from the other storage devices.)

    EDIT: I'll propose a strategy... The idea came from a comment here: LVM mirroring VS RAID1. "LVM mirrors are for replication of a logical volume to a different physical volume. It's essentially meant to 'move the data to a different disk'. The mirror is then broken..." That would fit my requirements well. Here is an ideal situation:

    1. establish the LV mirror on the external drive
    2. break the link with the mirror
    3. create a (persistent) snapshot on the mirror
    4. after a week, resync the mirror with the original source and update the mirror
    5. break the link and create another snapshot on the mirror

    Obviously, the mirror will be like a weekly full backup, and the snapshots on the mirror will represent earlier points in time. If this would work and if it would be time-efficient, it would give a nice full & differential style backup on the external drive based on LVM. I have not heard of a strategy like this before. Will it work? Could it be scripted? Thoughts?

    EDIT 2: Creating Portable DiskSafes With LoopbackFS And LVM Snapshots. This article seems intriguing: http://www.howtoforge.com/creating-portable-disksafes-with-loopbackfs-and-lvm-snapshots. Unfortunately, I don't understand exactly how to map those ideas to the strategy I'm proposing above. I'm going to ask this last bit as a separate question. I will leave my original question in place because I still desire feedback on the overall best strategy. At this moment I'm assuming it is LVM mirroring in the style of "Creating Portable DiskSafes with LVM Snapshots", but that might be wrong.
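
    On the "could it be scripted?" question: the snapshot-then-copy cycle from the linked tutorial reduces to a handful of commands per logical volume. A rough illustration only (an untested sketch with hypothetical mount points and snapshot size, not a finished backup script):

      import subprocess

      # Snapshot a logical volume, mount the snapshot read-only, rsync it to
      # the external drive, then drop the snapshot again.
      def backup_lv(vg, lv, dest, snap_size="5G"):
          snap = "%s-snap" % lv
          subprocess.check_call(["lvcreate", "--size", snap_size,
                                 "--snapshot", "--name", snap,
                                 "/dev/%s/%s" % (vg, lv)])
          mnt = "/mnt/%s" % snap
          subprocess.check_call(["mkdir", "-p", mnt])
          try:
              subprocess.check_call(["mount", "-o", "ro",
                                     "/dev/%s/%s" % (vg, snap), mnt])
              subprocess.check_call(["rsync", "-a", "--delete",
                                     mnt + "/", dest])
          finally:
              subprocess.call(["umount", mnt])
              subprocess.check_call(["lvremove", "-f",
                                     "/dev/%s/%s" % (vg, snap)])

      # e.g. backup_lv("vgE", "lvHome", "/media/usb-backup/home")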

  • Repair GRUB bootloader for multiboot system

    - by user1715324
    I have two hard disks: a 1 TB and a 250 GB. I installed the OSes in the following order:

    1. Windows 7 on the first hard disk (1 TB)
    2. After that, Kubuntu 12.04 on a partition (/dev/sdb7) on the second hard disk (250 GB)

    The second drive also contains an NTFS partition. Kubuntu's bootloader was installed in the second hard drive's MBR (and successfully detected Windows 7). So whenever I wanted to load Windows I selected the first hard disk from the BIOS boot menu, and the second hard disk whenever I wanted to load Kubuntu. I know I could have set the second hard disk as the default drive; still, I preferred this method. The problem started when I installed Linux Mint 13 on the second hard disk (/dev/sdb3) and overwrote Kubuntu's original MBR. Now GRUB just detects Mint and Windows. The MBR on the 1 TB hard disk is untouched. Is there a way I can modify the MBR on the second hard disk now so that it will show both Kubuntu and Mint?

  • How to stop S.M.A.R.T check running for external hard drive

    - by gotqn
    I am using Windows 7 and I have enabled the S.M.A.R.T. check, too. I have bought an external 1 TB USB hard drive. When it is connected to my computer, it delays my machine's initial start by about 4-5 minutes. I was very confused at the beginning; then I disabled the S.M.A.R.T. check and saw that the computer started normally after a restart. So, here is what I want to know:

    1. Why is the check run for the external disk?
    2. Is there a way to use the S.M.A.R.T. check only for my main hard drive?
    3. Why does it take 4 to 5 minutes more, given that both of my hard drives (internal and external) are 1 TB?

  • Linking FreeNAS Boxes

    - by John
    I have two different FreeNAS boxes that are being used for storage. Is it possible to link them so that the linked connection can be presented as a single storage destination for clients? For example, I have one FreeNAS installation that has 6 TB of storage and another one that has 4 TB of storage. I would like to present a single mapped drive to clients that spans both of them; the client would only see it as one drive. Is this possible? If not, is there a different way I should be approaching this problem? Is there a different OS distribution that would work better for this?

  • How to Recover HDD Formatted by "Create a Recovery Drive" Tool of Windows 8.1?

    - by ide
    I have a 2 TB USB HDD which had these drives:

    - F: about 1 TB, with 750 GB of data
    - H: about 120 GB, with 60 GB of data
    - I: about 780 GB, with 250 GB of data (for the TV: it was raw in Windows but visible on the Smart TV)

    I took 521 MB from the last part of H to get a new G: drive. Then I ran the "Create a Recovery Drive" tool of Windows 8.1 and chose the G: drive. It said all data on the drive would be deleted. I thought that meant just the G: drive, but it deleted my whole HDD. It created a new 32 GB F: drive, writing 337 MB to it, and the rest of the HDD is unallocated. I tried these programs to recover my first 3 drives, but none of them helped with the 1st partition:

    - TestDisk
    - MiniTool Partition Wizard Home Edition
    - EaseUS Partition Master 9.2.2 (I deleted the new F: drive volume because it scans only the unallocated part)
    - Recuva
    - PC Inspector File Recovery

  • Troubleshooting major performance issue: Is the culprit Intel RST, the hard drive, or something else?

    - by Sean Killeen
    The Setup

    I have the following components that come into play in this situation:

    - ASUS P8Z68 V/PRO motherboard
    - a RAID 1 configuration (1 x 1 TB drive, 1 x 2 TB drive -- I explain below), accelerated with an SSD using Intel's RST software, and a 1 TB drive standing by as a spare
    - Core i7 2600K
    - 32 GB RAM
    - Windows 8.1

    This box was designed to be a beast, and until just recently, was very good at being just that.

    What's Happening

    - The system has slowed to a crawl whenever it touches the disk. Things appear to work at normal speed when dealing with memory. For example, typing this is fine, but saving it to disk from Notepad gave me a 5-7 second pause when clicking save.
    - The disks appear to be at 100% all the time (e.g. the disk-access light on the PC is solidly on -- not even any flashing).
    - In ProcExp it appears that the disk is barely being utilized at all.
    - Intel RST reports that everything is fine.

    Other Details

    Prior to this happening, RST had reported that my drives were failing (one went bad, one was throwing SMART events). This made sense; they were at the tail end of their warranty and the PC is on almost all the time. I RMA'd the drives via Seagate. In the meantime, I'd purchased a 2 TB drive because I didn't realize that the 1 TB drives were under warranty. I figured I'd replace the other 1 TB drive with another 2 TB when it died, but then discovered the warranty. AFAIK, I haven't done any major updates since 8.1, and it worked fine after those.

    Question(s)

    1. How can I troubleshoot this? What is the best way to try to figure out why the disks are being maxed out despite the OS reporting barely any disk usage and that everything is OK?
    2. Given the failures, etc. that I describe above, is it possible that the problem could be the I/O on the motherboard itself? If so, how would I even be able to diagnose it?
    3. I'm betting the drives that Seagate gave me are refurbished (didn't think to look; that's dumb). Is it possible that the same model drive, refurbished, could somehow cause this?
    4. In terms of how RAID 1 works, is it possible that one drive is "falling behind" somehow, and that the RAID 1 is constantly trying to fix the mirroring? If so, it seems like Intel RST would report on that, but I wanted to consider it as an option.
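
    On question 1, one low-effort first step is to sample per-disk I/O counters over time: a disk that reports high busy time while moving very little data suggests work happening below the OS (for example, an RST verify or rebuild pass). A sketch using the third-party psutil package (an assumption; it is not part of Windows):

      import time

      import psutil

      # Print per-disk deltas of bytes moved and busy time. busy_time is not
      # available on every platform, hence the getattr fallback.
      def sample(interval=5, rounds=12):
          prev = psutil.disk_io_counters(perdisk=True)
          for _ in range(rounds):
              time.sleep(interval)
              cur = psutil.disk_io_counters(perdisk=True)
              for disk, c in sorted(cur.items()):
                  p = prev[disk]
                  print("%s read=%dKB write=%dKB busy=%dms" % (
                      disk,
                      (c.read_bytes - p.read_bytes) // 1024,
                      (c.write_bytes - p.write_bytes) // 1024,
                      getattr(c, "busy_time", 0) - getattr(p, "busy_time", 0)))
              prev = cur

      sample()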

  • Performance Enhancement in Full-Text Search Query

    - by Calvin Sun
    Ever since its first release, we are continuing consolidating and developing InnoDB Full-Text Search feature. There is one recent improvement that worth blogging about. It is an effort with MySQL Optimizer team that simplifies some common queries’ Query Plans and dramatically shorted the query time. I will describe the issue, our solution and the end result by some performance numbers to demonstrate our efforts in continuing enhancement the Full-Text Search capability. The Issue: As we had discussed in previous Blogs, InnoDB implements Full-Text index as reversed auxiliary tables. The query once parsed will be reinterpreted into several queries into related auxiliary tables and then results are merged and consolidated to come up with the final result. So at the end of the query, we’ll have all matching records on hand, sorted by their ranking or by their Doc IDs. Unfortunately, MySQL’s optimizer and query processing had been initially designed for MyISAM Full-Text index, and sometimes did not fully utilize the complete result package from InnoDB. Here are a couple examples: Case 1: Query result ordered by Rank with only top N results: mysql> SELECT FTS_DOC_ID, MATCH (title, body) AGAINST ('database') AS SCORE FROM articles ORDER BY score DESC LIMIT 1; In this query, user tries to retrieve a single record with highest ranking. It should have a quick answer once we have all the matching documents on hand, especially if there are ranked. However, before this change, MySQL would almost retrieve rankings for almost every row in the table, sort them and them come with the top rank result. This whole retrieve and sort is quite unnecessary given the InnoDB already have the answer. In a real life case, user could have millions of rows, so in the old scheme, it would retrieve millions of rows' ranking and sort them, even if our FTS already found there are two 3 matched rows. Apparently, the million ranking retrieve is done in vain. In above case, it should just ask for 3 matched rows' ranking, all other rows' ranking are 0. If it want the top ranking, then it can just get the first record from our already sorted result. Case 2: Select Count(*) on matching records: mysql> SELECT COUNT(*) FROM articles WHERE MATCH (title,body) AGAINST ('database' IN NATURAL LANGUAGE MODE); In this case, InnoDB search can find matching rows quickly and will have all matching rows. However, before our change, in the old scheme, every row in the table was requested by MySQL one by one, just to check whether its ranking is larger than 0, and later comes up a count. In fact, there is no need for MySQL to fetch all rows, instead InnoDB already had all the matching records. The only thing need is to call an InnoDB API to retrieve the count The difference can be huge. Following query output shows how big the difference can be: mysql> select count(*) from searchindex_inno where match(si_title, si_text) against ('people')  +----------+ | count(*) | +----------+ | 666877 | +----------+ 1 row in set (16 min 17.37 sec) So the query took almost 16 minutes. Let’s see how long the InnoDB can come up the result. 
In InnoDB, you can obtain extra diagnostic printout by turning on “innodb_ft_enable_diag_print”, this will print out extra query info: Error log: keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 2 secs: row(s) 666877: error: 10 ft_init() ft_init_ext() keynr=2, 'people' NL search Total docs: 10954826 Total words: 0 UNION: Searching: 'people' Processing time: 3 secs: row(s) 666877: error: 10 Output shows it only took InnoDB only 3 seconds to get the result, while the whole query took 16 minutes to finish. So large amount of time has been wasted on the un-needed row fetching. The Solution: The solution is obvious. MySQL can skip some of its steps, optimize its plan and obtain useful information directly from InnoDB. Some of savings from doing this include: 1) Avoid redundant sorting. Since InnoDB already sorted the result according to ranking. MySQL Query Processing layer does not need to sort to get top matching results. 2) Avoid row by row fetching to get the matching count. InnoDB provides all the matching records. All those not in the result list should all have ranking of 0, and no need to be retrieved. And InnoDB has a count of total matching records on hand. No need to recount. 3) Covered index scan. InnoDB results always contains the matching records' Document ID and their ranking. So if only the Document ID and ranking is needed, there is no need to go to user table to fetch the record itself. 4) Narrow the search result early, reduce the user table access. If the user wants to get top N matching records, we do not need to fetch all matching records from user table. We should be able to first select TOP N matching DOC IDs, and then only fetch corresponding records with these Doc IDs. Performance Results and comparison with MyISAM The result by this change is very obvious. I includes six testing result performed by Alexander Rubin just to demonstrate how fast the InnoDB query now becomes when comparing MyISAM Full-Text Search. These tests are base on the English Wikipedia data of 5.4 Million rows and approximately 16G table. The test was performed on a machine with 1 CPU Dual Core, SSD drive, 8G of RAM and InnoDB_buffer_pool is set to 8 GB. Table 1: SELECT with LIMIT CLAUSE mysql> SELECT si_title, match(si_title, si_text) against('family') as rel FROM si WHERE match(si_title, si_text) against('family') ORDER BY rel desc LIMIT 10; InnoDB MyISAM Times Faster Time for the query 1.63 sec 3 min 26.31 sec 127 You can see for this particular query (retrieve top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM. Table 2: SELECT COUNT QUERY mysql>select count(*) from si where match(si_title, si_text) against('family‘); +----------+ | count(*) | +----------+ | 293955 | +----------+ InnoDB MyISAM Times Faster Time for the query 1.35 sec 28 min 59.59 sec 1289 In this particular case, where there are 293k matching results, InnoDB took only 1.35 second to get all of them, while take MyISAM almost half an hour, that is about 1289 times faster!. 
    Performance Results and Comparison with MyISAM: The effect of this change is very visible. I include six test results, performed by Alexander Rubin, to demonstrate how fast the InnoDB query now is compared with MyISAM Full-Text Search. The tests are based on the English Wikipedia data of 5.4 million rows, an approximately 16 GB table. They were performed on a machine with one dual-core CPU, an SSD drive, 8 GB of RAM, and innodb_buffer_pool_size set to 8 GB.

    Table 1: SELECT with LIMIT clause

    mysql> SELECT si_title, MATCH(si_title, si_text) AGAINST ('family') AS rel
        ->   FROM si
        ->  WHERE MATCH(si_title, si_text) AGAINST ('family')
        ->  ORDER BY rel DESC LIMIT 10;

                         InnoDB      MyISAM             Times Faster
    Time for the query   1.63 sec    3 min 26.31 sec    127

    You can see that for this particular query (retrieve the top 10 records), InnoDB Full-Text Search is now approximately 127 times faster than MyISAM.

    Table 2: SELECT COUNT query

    mysql> SELECT COUNT(*) FROM si WHERE MATCH(si_title, si_text) AGAINST ('family');
    +----------+
    | count(*) |
    +----------+
    |   293955 |
    +----------+

                         InnoDB      MyISAM             Times Faster
    Time for the query   1.35 sec    28 min 59.59 sec   1289

    In this particular case, with 293k matching results, InnoDB took only 1.35 seconds to get all of them, while MyISAM took almost half an hour; that is about 1289 times faster.

    Table 3: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, MATCH(si_title, si_text) AGAINST (<TERM>) AS rel
        ->   FROM si_<TB>
        ->  WHERE MATCH(si_title, si_text) AGAINST (<TERM>)
        ->  ORDER BY rel DESC LIMIT 10;

    Term                                       InnoDB     MyISAM     Times Faster
    family                                     0.5 sec    5.05 sec   10.1
    family film                                0.95 sec   25.39 sec  26.7
    Pizza restaurant orange county California  0.93 sec   32.03 sec  34.4
    President united states of America         2.5 sec    36.98 sec  14.8

    Table 4: SELECT title and text with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, si_title, si_text, ... AS rel
        ->   FROM si_<TB>
        ->  WHERE MATCH(si_title, si_text) AGAINST (<TERM>)
        ->  ORDER BY rel DESC LIMIT 10;

    Term                                       InnoDB     MyISAM     Times Faster
    family                                     0.61 sec   41.65 sec  68.3
    family film                                1.15 sec   47.17 sec  41.0
    Pizza restaurant orange county California  1.03 sec   48.2 sec   46.8
    President united states of America         2.49 sec   44.61 sec  17.9

    Table 5: SELECT ID with ORDER BY and LIMIT clause for selected terms

    mysql> SELECT <ID>, MATCH(si_title, si_text) AGAINST (<TERM>) AS rel
        ->   FROM si_<TB>
        ->  WHERE MATCH(si_title, si_text) AGAINST (<TERM>)
        ->  ORDER BY rel DESC LIMIT 10;

    Term                                       InnoDB     MyISAM     Times Faster
    family                                     0.5 sec    5.05 sec   10.1
    family film                                0.95 sec   25.39 sec  26.7
    Pizza restaurant orange county California  0.93 sec   32.03 sec  34.4
    President united states of America         2.5 sec    36.98 sec  14.8

    Table 6: SELECT COUNT(*)

    mysql> SELECT COUNT(*) FROM si_<TB>
        ->  WHERE MATCH(si_title, si_text) AGAINST (<TERM>) LIMIT 10;

    Term                                       InnoDB     MyISAM     Times Faster
    family                                     0.47 sec   82 sec     174.5
    family film                                0.83 sec   131 sec    157.8
    Pizza restaurant orange county California  0.74 sec   106 sec    143.2
    President united states of America         1.96 sec   220 sec    112.2

    Again, Tables 3 through 6 all show InnoDB consistently outperforming MyISAM on these queries by a large margin. It is obvious that InnoDB has a great advantage over MyISAM in handling searches over large data sets.

    Summary: These results demonstrate the performance we can achieve by making the MySQL optimizer and InnoDB Full-Text Search more tightly coupled. I think there are still many cases where InnoDB's result information has not been fully taken advantage of, which means we still have great room to improve. We will continue to explore this area and deliver more dramatic results for InnoDB full-text searches.

    Jimmy Yang, September 29, 2012

    Read the article

  • Best way to use my windows box as a backup?

    - by DFischer
    I put a 1.5 TB HD in my Windows 7 box, and my main computer is an MBP. I have a lot of professional files/folders on a FireWire 800 external HD connected to the MBP, and I want to use the 1.5 TB HD in the Windows 7 box as a backup for both the external HD and the MBP. Right now I am just copying files manually to the HD over the network, which is very slow and open to failure (not rsync'd). Can anyone suggest some appropriate solutions? Should I just figure out how to set up rsync on the Windows box, or is there a better alternative? Thanks!

    Read the article

  • How many hours of use before I need to clean a tape drive?

    - by codeape
    I do backups to an HP Ultrium 2 tape drive (HP StorageWorks Ultrium 448). The drive has a 'Clean' LED that supposedly lights up or blinks when the drive needs to be cleaned. The drive has been in use since October 2005, and the 'Clean' light has still never been lit. The drive statistics are: Total hours in use: 1603. Total bytes written: 19.7 TB. Total bytes read: 19.3 TB. My question is: how many hours of use can I expect before I need to clean the drive? Edit: I have not encountered any errors using the drive. I do restore tests every two months, and every backup is verified. Edit 2: The user manual says: "HP StorageWorks Ultrium tape drives do not require regular cleaning. An Ultrium universal cleaning cartridge should only be used when the orange Clean LED is flashing." Update: It is now May 2010 (4.5 years of use) and the LED is still off; I have not cleaned the drive, backups verify, and regular restore tests are done.

    Read the article

  • Limit disk I/O one program creates?

    - by Posipiet
    Hardware: one virtualization server. Dual Nehalem, 24 GB RAM, 2 TB mirrored HD. Software: Debian, KVM, virt-manager on the server, with several virtual machines that also run Linux. The 2 TB disk is one big LVM; each VM gets a logical volume and makes its own partitions in it. Problem: one of the programs running on one of the VMs creates a huge disk load. This was never an issue before, because the program never ran on such powerful hardware. Now the CPUs are fast, and lots of I/O is the result. We can't do much about that at the moment, because the tool is a black box. On the other hand, the speedy computation is welcome. The program creates about 5 GB of temp files, which get overwritten during the next iteration. Question: how can we limit the disk I/O of the process?

    Read the article

  • What decent email client would you recommend (at least better than Thunderbird)?

    - by matteo
    I've used Thunderbird for years. I keep a huge number of emails. I move them to folders to organize or archive them, but I don't delete anything, so I have hundreds of thousands of messages. I like the way TB is conceived, and the way it works as long as the volume of data is small. But it just doesn't scale. It has a lot of ridiculous design flaws; for example, any time-consuming operation blocks the whole UI completely (and you don't even know for how long), as if everything were implemented in a single monolithic all-tasks-are-blocking way. I'm tired of it. So what alternative would you recommend as an email client program, with all the usual basic features one expects from any email client? Important: I mainly use POP3, much more than IMAP, and my main account is on Gmail. This question is not intended to be a rant against TB (I admit it is, as a side effect); I have highlighted its weaknesses because the answer I'm looking for is a recommendation for a program that doesn't suffer from these issues.

    Read the article

  • "Verifying DMI Pool" hang caused by raid array..

    - by Ling
    Hi experts, I have a problem. I obtained a new server with 4 hard drives (two 500 GB, two 2 TB) and an Adaptec RAID card. I arranged them in two RAID 1 arrays (the two 500 GB drives together as the primary array, and the two 2 TB drives for data). When both arrays are configured, the server hangs while booting at the message "Verifying DMI Pool"; however, if I remove the second array from the configuration, the server boots fine. I have checked that they are both on different channels, I have disabled all other peripherals from the boot menu, and I have ensured the hard drive is #1 in the boot order. I have booted into Linux rescue mode and checked that it reads both arrays fine. What else could be causing this problem? Thanks

    Read the article

< Previous Page | 2 3 4 5 6 7 8 9 10 11 12 13  | Next Page >