Search Results

Search found 21008 results on 841 pages for 'chuzein part ii'.


  • How do I start DB2 on an Amazon EC2 Volume?

    - by Spike Williams
    I've got an instance of DB2 installed as part of an IBM WebSphere Portal development AMI on the Amazon EC2 cloud. It's installed on a separate, persistent file system from the rest of the AMI. Yesterday the AMI was terminated, and DB2 went down as part of that. It was not shut down cleanly, just terminated. Today I am trying to restart the WebSphere Portal server, which needs to connect to the DB2 instance. But the DB2 instance is down. So I need to restart my DB2 instance, but how to do that is not immediately obvious. Can someone tell me what I need to run to get it going again? The OS is SuSE Linux, and the DB2 version is 9.1.
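
    A minimal sketch of the restart from a shell, assuming a conventional instance owner named db2inst1 -- the actual instance name on the IBM AMI may differ (db2ilist, run as root, lists the instances):

        su - db2inst1                  # become the DB2 instance owner
        db2start                       # start the instance
        db2 activate database WPSDB   # optional; WPSDB is a placeholder name for the Portal database

    After a hard termination, DB2 normally runs crash recovery automatically on the first connect or activate.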

  • CISCO WS-C4948-10GE SFP+?

    - by Brian Lovett
    I have a pair of Cisco WS-C4948-10GE switches that we need to connect to a new switch that has SFP+ and QSFP ports. Is there an X2 module that supports this? If so, can someone name the part number that will work? I have found some information, but want to make sure I have the correct part number for our exact switches. Per the discussion in the comments, I believe I have a better understanding of things now. Would I be correct in saying that I need these SR modules on the Cisco side: http://www.ebay.com/itm/Cisco-original-used-X2-10GB-SR-V02-/281228948970?pt=LH_DefaultDomain_0&hash=item417a8d35ea Then, on the switch with SFP+ ports, I can pick up an SR SFP+ transceiver like this: http://www.advantageoptics.com/SFP-10G-SR_lp.html?gclid=CKP4s-G27b4CFXQiMgodLD8AQA and finally, an SR cable such as this: http://www.colfaxdirect.com/store/pc/viewPrd.asp?idproduct=1551 Am I on the right track here?

  • How to find out if the screen is WLED or RGBLED on Dell Studio XPS 16?

    - by Abhijeet
    I just ordered the Dell Studio XPS 16 with an i7 and the 16" RGBLED screen. This upgrade from the WLED LCD to the RGBLED LCD charged me more $$. But now when I view the order online, it lists this part as "320-8335 Premium FHD WLED Display, Obsidian Black, 2.0 MP Webcam". When I called Dell, the rep said this part is the RGBLED screen and the "Premium" refers to RGBLED. I want to make sure they are shipping me the RGBLED screen and not the WLED one. Is there any way to verify this after I receive it (either from Device Manager, the BIOS, or somewhere in system settings)? Also, is there any published specification or criterion we can test (with some third-party software) on this monitor that can tell whether it is WLED or RGBLED?

  • What exactly does the condition in the MIT license imply?

    - by Yannbane
    To quote the license itself: Copyright (C) [year] [copyright holders] Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. I am not exactly sure what the last part (the condition itself) implies. Let's say that I'm creating some library, and I license it under the MIT license. Someone decides to fork that library and create a closed-source, commercial version. According to the license, he should be free to do that. However, what does he additionally need to do under those terms? Credit me as the creator? I guess the "above copyright notice" refers to the "Copyright (C) [..." part, but wouldn't that list me as the author of his code (although I technically typed out the code)? And wouldn't including the "permission notice" in what is now his library practically license it under the same conditions that I licensed my own library under? Or am I interpreting this incorrectly? Does that refer to my obligations to include the copyright and the permission notice?

  • Welcome to the newly merged JCP EC!

    - by Heather VanCura
    As part of the JCP.Next effort, JSR 355, Executive Committee (EC) Merge -- the second JSR in the JCP program reforms -- will take effect on Tuesday as JCP 2.9. The first in the effort was JSR 348, which took effect as JCP 2.8 in October 2011. EC members guide the evolution of Java technologies by approving and voting on all technology proposals (Java Specification Requests, or JSRs). They are also responsible for defining the JCP's rules of governance and the legal agreement between members and the organization. They provide guidance to the Program Management Office (PMO) and they represent the interests of the JCP to the broader community. Starting on Tuesday, 13 November, JCP 2.9 is in effect, and the EC is merged from two ECs -- one representing Java SE/EE and one representing Java ME -- into one merged EC. IBM and Oracle each gave up one of their two seats (one per EC), and the terms expired for four members who did not run for re-election: AT&T, Deutsche Telekom, Siemens and Vodafone. All four remain JCP members. In addition, the seat occupied by RIM was forfeited due to lack of participation in October 2012. The JCP values these organizations and representatives for their contribution to the JCP EC, and looks forward to their continued participation in the JCP program. The complete listing of the EC, 24 members in total at the moment, is now available. We asked the two newcomers to the EC, Cinterion and CloudBees, and the re-elected London Java Community, to comment on their plans for their term in the EC. Read about their plans in the article published on JCP.org, "JCP 2.9 with a Merged EC Takes Effect 13 November". Also, plan to attend the public (open to all community members) EC meeting planned for 20 November at 15:00 PST. Details will be posted here and on the JCP.org home page next week.

  • What's the best practice to do SOA exception handling?

    - by sun1991
    Here's an interesting debate going on between me and my colleague about how to handle SOA exceptions. On one side, I support what Juval Lowy said in Programming WCF Services, 3rd Edition: As stated at the beginning of this chapter, it is a common illusion that clients care about errors or have anything meaningful to do when they occur. Any attempt to bake such capabilities into the client creates an inordinate degree of coupling between the client and the object, raising serious design questions. How could the client possibly know more about the error than the service, unless it is tightly coupled to it? What if the error originated several layers below the service—should the client be coupled to those low-level layers? Should the client try the call again? How often and how frequently? Should the client inform the user of the error? Is there a user? By having all service exceptions be indistinguishable from one another, WCF decouples the client from the service. The less the client knows about what happened on the service side, the more decoupled the interaction will be. On the other side, here's what my colleague suggests: I believe it's simply incorrect, as it does not align with best practices in building a service-oriented architecture and it ignores the general idea that there are problems that users are able to recover from, such as not keying a value correctly. If we considered only system exceptions, perhaps this idea holds, but system exceptions are only part of the exception domain. User-recoverable exceptions are the other part of the domain and are likely to happen on a regular basis. I believe the correct way to build a service-oriented architecture is to map user-recoverable situations to checked exceptions, then to marshal each checked exception back to the client as a unique exception that client application programmers are able to handle appropriately. Marshal all runtime exceptions back to the client as a system exception, along with the stack trace, so that it is easy to troubleshoot the root cause. I'd like to know what you think about this. Thank you.

  • Filter any mailing list in GMail using the "list:" meta-data

    - by Binary255
    Hi, If I ask GMail to create a filter for a mailing list, it creates a rule containing a list: mailing-list identifier; in the case of the NAnt mailing list it wrote: Has the words: list: "nant-users.lists.sourceforge.net" Is there a way to filter any mailing list? I would like to filter conversations from any mailing list that contain answers to things I've previously asked (or answered). Part of that filter is identifying "anything which is part of a mailing list", and I'm wondering if there is a better way than adding another label to all mailing list posts (which is cumbersome).

  • Database Driven Web Application, C# Front-End and F# Back-End meaning

    - by user1473053
    Hi, I am an intern working with ASP.NET. My current task is to make a website which will incorporate some jQuery viewing features. This project, it seems to me, will primarily deal with reading data from a database and making graphs out of it. This will require me to make custom queries based on whatever the client is looking at. I think it is going to be what this guy calls an ad hoc query tool. My plan is to make it a database-driven website, so I can utilize jQuery's dynamic viewing capabilities. I stumbled upon the functional programming paradigm and found F#. I read that because of its functional programming paradigm, it is a good language for asynchronous functions. I read about how you can use it with LINQ to SQL and how easy it is to make queries without actually writing the query language yourself. I understand the concept of the MVC design pattern. But I don't understand what is meant by C# being the front-end and F# being the back-end. Can someone clarify this for me? Also, what are your thoughts about doing this project this way? Any comments and thoughts are greatly appreciated. I feel as if learning F# will be a great learning experience for me. My guess is that the F# back-end is the part that controls the calls to the database, so F# is possibly the model part of the design pattern, and C# is the controller; HTML, JavaScript and jQuery would then make up the view. Is that right?

  • CodePlex Daily Summary for Saturday, September 15, 2012

    Popular Releases

    - MCEBuddy 2.x: MCEBuddy 2.2.15: Changelog for 2.2.15 (32bit and 64bit): 1. Added support for %originalfilepath% to get the source file full path (used for custom commands only). 2. Added support for better parsing of Media Portal XML files to extract ShowName and Episode Name and download additional details from TVDB (like Season No, Episode No etc). 3. Added support for TVDB seriesID in metadata. 4. Added support for eMail non blocking UI test.
    - CrashReporter.NET (exception reporting library for C# and VB.NET): CrashReporter.NET 1.2: Added HTML mail format which shows a hierarchical exception report for better understanding.
    - VCC: Latest build, v2.3.00914.0: Automatic drop of latest build.
    - Scarlet Road: Scarlet Road Test Build 007: Playable game. Includes source.
    - DotNetNuke Search Engine Sitemaps Provider: Version 02.00.00: New release of the Search Engine Sitemap Providers. New version - not backwards compatible with 1.x versions. New sandboxing to prevent exceptions in module providers interfering with the main provider. Now installable using the Host->Extensions page. New sitemaps available for Active Forums and Ventrian Property Agent. Now derived from the DotNetNuke Provider base for better framework integration. DotNetNuke minimum compatibility raised to DNN 5.2, .NET to 3.5.
    - PDF Viewer Web part: PDF Viewer Web Part.
    - Chris on SharePoint Solutions: View Grid Banding - v1.0: Initial release of the View Creation and Management Page Column Selector Banding solution.
    - $linq - A Javascript LINQ library: Version 1.0: Initial release.
    - PowerConverter: PowerConverter Beta: This is the first release of PowerConverter. Allows for converting PE code to Power code.
    - NetView Control for Microsoft Access: DevVersion 19852 - More Databinding and Resizing: Renamed event GotFocus to Clicked. Added events Clicked/DoubleClicked. Added event BackgroundDoubleClicked. Changed nomenclature for world coordinates to (Position1, Position2)|Extent1xExtent2. Renamed Locked -> Readonly. Added properties Minimum1/Maximum1 and Minimum2/Maximum2. Removed NetView.DeviceDefinitionArea and obsolete properties. Support for resizing. Added properties BackColor and BorderColor. NetView Properties form added binding field for BusinessId. Propagate erro...
    - Runtime Dynamic Data Model Builder: Main Library Version 1.0.0.0.
    - Microsoft Ajax Minifier: Microsoft Ajax Minifier 4.67: Fix issue #18629 - incorrectly handling null characters in string literals and not throwing an error when outside string literals. Update for issue #18600 - forgot to make the ///#DEBUG= directive also set a known-global for the given debug namespace. Removed the kill-switch for disregarding preprocessor define-comments (///#IF and the like) and created a separate CodeSettings.IgnorePreprocessorDefines property for those who really need to turn that off. Some people had been setting -kil...
    - Lakana - WPF Framework: Lakana V2: Lakana V2 contains: Lakana WPF Forms (with sample project) and Lakana WPF Navigation (with sample project).
    - Microsoft SQL Server Product Samples: Database: OData QueryFeed workflow activity: The OData QueryFeed sample activity shows how to create a workflow activity that consumes an OData resource, and renders entity properties in a Microsoft Excel 2010 worksheet or Microsoft Word 2010 document. Using the sample QueryFeed activity, you can consume any OData resource. The sample activity uses LINQ to project OData metadata into activity designer expression items. By setting activity expressions, a fully qualified OData query string is constructed consisting of Resource, Filter, Or...
    - Arduino for Visual Studio: Arduino 1.x for Visual Studio 2012, 2010 and 2008: Register for the visualmicro.com forum for more news and updates. Version 1209.10 includes support for VS2012 and minor fixes for the Arduino debugger beta test team. Version 1208.19 is considered stable for Visual Studio 2010 and 2008. If you are upgrading from an older release of Visual Micro and encounter a problem, uninstall "Visual Micro for Arduino" using "Control Panel > Add and Remove Programs" and then run the install again. Key features of 1209.10: Support for Visual Studio 2...
    - Social Network Importer for NodeXL: SocialNetImporter (v.1.5): This new version includes: fixed the "resource limit" bug caused by Facebook; bug fixes. To use the new graph data provider, unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e. "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns"), open the NodeXL template, and access the new importer from the "Import" menu.
    - AcDown????? - AcDown Downloader Framework: AcDown????? v4.1: (The Chinese release notes were lost to encoding in the source.) A downloader supporting Acfun, Bilibili, YouTube and other sites, written in C# against .NET Framework 2.0; runs on 32/64-bit Windows XP/Vista/7/8 and on Linux under Mono.
    - Move Mouse: Move Mouse 2.5.2: FIXED - Minor fixes and improvements.
    - MVC Controls Toolkit: Mvc Controls Toolkit 2.3: The new release is compatible with Mvc4 RTM. Added support for handling time zones in dates: specifically, helper methods to convert to UTC or local time all DateTimes contained in a model received by a controller, and helper methods to handle date-only fields. This, together with detailed documentation on how time zones are handled in all situations by the ASP.NET MVC framework, will contribute to mitigating the nightmare of dates and time zones. Multiple templates, and more options to...
    - DNN Metro7 style Skin package: Metro7 style Skin for DotNetNuke 06.02.00: Maintenance release. Changes in Metro7 06.02.00: Fixed width and height on the jQuery popup for the Editor. Navigation provider changed to DDR menu. Added menu files and scripts. Changed skins to Doctype HTML. Changed manifest to a dnn6 manifest file. Changed license to HTML view. Fixed issue on Metro7/PinkTitle.ascx with double registering of the Actions. Changed source folder structure and start folder, so the project works with the default DNN structure when developing. Added VS 20...

    New Projects

    - BizTalk Zombie Management: A powerful tool to handle zombies. As a service you can monitor all zombie instances and process them. For the moment only file is supported.
    - bxkw8: oooooooooooh long johnson
    - CellularSolver: The main idea of this project is to create a cellular automaton (CA) simulation system. We try to reduce ODE/PDE/integral-equation models to CA models.
    - EAWebService: EAWebService is a web service that executes a parallel evolutionary algorithm.
    - Finite Element Method Samples with C#: Finite Element Method samples with C#.
    - Game Jolt C# Trophy API: The Game Jolt Trophy API provides dotNET developers with access to the Game Jolt services including Trophies, High Scores, Data Storage and many more.
    - GNSystem: GNSystem is a simple (yet not so elegant) web application which contains a forum system and a CMS/blog system. GNSystem is written in ASP.Net MVC 4 using C#.
    - Hospital Management System (HMS): HMS is software that makes hospital management much easier and faster.
    - Infinity - WPF.MVC: Framework for WPF/SL/WinForms.
    - Kindle: Kindle Publisher.
    - MetroCash: A personal finance management program.
    - metroCIS: metroCIS - an open-source application for Windows 8. Manage your studies at FH Technikum Wien with this app and make your student life easier.
    - MTAC: MTAC, for My Tfs Administration Center, is a centralized administration tool for TFS.
    - MyStart: Create an open-source implementation of the Windows Start Menu (based initially on Windows 7), to be used on Windows 8.
    - NLite Data Framework: NLite Linq ORM framework.
    - PDF Viewer Web part: Presenting the PDF Viewer web part solution with code.
    - Project91405: dfgfdgfdr
    - Project91407: awqwq
    - Project91407M: 111
    - Purchasesales(??????): A simple sales management project.
    - QueryOver Specification: A simple implementation of the Specification Pattern using NHibernate QueryOver.
    - Shopping Analytics: This application shows how to take advantage of various features of the Windows Phone platform.
    - simbo: Simbo is a simple, fun app for sharing small notes with friends, where many of the concepts in your note can be represented by symbols.
    - SISLOG: The SISLOG logistics system is software that will be able to automate and optimize the processes carried out in the logistics area.
    - SQL Server Scripts - A RSSUG CodePlex Project: The SQL Server Scripts project is dedicated to supplying high-quality scripts to help with the maintenance and development of SQL Server in every environment.
    - Talqum.League: Talqum.League is a league organizer and statistics app.
    - The Pratoriate Foundation: Used for all software dev projects for the non-profit Pratoriate Foundation.

  • VMWARE V2V migration without shared storage

    - by TheCleaner
    I would like to do the following:

    1. Take a physical box running W2K8R2 and SQL Server 2008 R2 and do a P2V on it to a 4.1 Enterprise licensed cluster. --no worries here, I can do that part--
    2. Take the existing physical box that is freed up and install VMware Hypervisor (ESXi) 5.0 on it. --again, I can do this part--
    3. Do a V2V migration of the VM created in step #1 above from the Enterprise 4.1 cluster to the standalone host. They are NOT using shared storage.

    Step #3 is where I'm confused as to what my best option is. I found an article online about using Veeam FastSCP: just shut down the VM on the cluster, remove it from inventory, copy the files over to the new host, and then add it to inventory there. Is that the best way to accomplish this?
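
    Copying files with Veeam FastSCP after a clean shutdown does work; another shared-storage-free option is VMware's ovftool, which can move a powered-off VM host-to-host over the network in one pass. A hedged sketch -- the hostnames and inventory path below are placeholders:

        # export from the 4.1 cluster (via vCenter) straight onto the ESXi 5.0 host;
        # the VM must be powered off first
        ovftool "vi://administrator@vcenter.example.com/Datacenter/vm/MyVM" \
                "vi://root@esxi5-standalone.example.com"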

  • Need advice on framework design: how to make extending easy

    - by beginner_
    I'm creating a framework/library for a rather specific use case (data type). It uses diverse Spring components, including Spring Data. The library has a set of entity classes properly set up, with corresponding service and DAO layers. The main work, or main benefit, of the framework lies in the DAO and service layers. Developers using the framework should be able to extend my entity classes to add additional fields they require. Therefore I made the DAO and service layers generic, so they can be used by such extended entity classes. I now face an issue in the IO part of the framework. It must be able to import the "special data type" into the database. In this part I need to create a new entity instance, and hence I need the actual class used. My current solution is to configure a bean of the actual class in Spring. The problem with this is that an application using the framework can only use one implementation of the entity (either my original one or exactly one subclass, but not two different classes of the same hierarchy). I'm looking for suggestions / designs for solving this issue. Any ideas?

  • What kind of SATA interface is on a Thinkpad X120E?

    - by Jorge Castro
    I recently ordered a ThinkPad X120E with an AMD Fusion (Zacate) chipset. I am eyeballing an SSD for it; however, newer SSDs are coming out with 6 Gbps SATA interfaces. I doubt such a cheap laptop has 6 Gbps SATA, but I'm debating waiting a bit longer until the Intel 510 series comes out, if anything to future-proof myself by putting it in this laptop, so that later on, when I do upgrade to a laptop with 6 Gbps SATA, I'll be good to go. The hardware manual mentions that the motherboard is for an "AMD Fusion E-350", but the specifications of each hardware part aren't in the manual. Does anyone have any information on the kind of SATA controllers in Fusion laptops, so I can make a better purchasing decision?
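
    If the machine ends up running Linux, one rough way to check the negotiated and supported SATA link speeds from a shell (a generic sketch, not ThinkPad-specific):

        # what the AHCI/SATA controller identifies as
        lspci | grep -i sata
        # negotiated link speed per port: "SATA link up 1.5 Gbps" / "3.0 Gbps" / "6.0 Gbps"
        dmesg | grep -i 'sata link up'
        # what the drive itself supports (hdparm must be installed)
        sudo hdparm -I /dev/sda | grep -i 'signaling speed'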

  • Exchange IMAP4 connector - Error Event ID 2006

    - by MikeB
    Hi, A couple of users in my organisation use IMAP4 to connect to Exchange 2007 (Update Rollup 9 applied) because they prefer Thunderbird / Postbox clients. One of the users is generating errors in the Application Log as follows:

        An exception Microsoft.Exchange.Data.Storage.ConversionFailedException occurred while converting message Imap4Message 1523, user "*******", folder *********, subject: "******", date: "*******" into MIME format.
        Microsoft.Exchange.Data.Storage.ConversionFailedException: Message content has become corrupted. ---> System.ArgumentException: Value should be a valid content type in the form 'token/token'
        Parameter name: value
           at Microsoft.Exchange.Data.Mime.ContentTypeHeader.set_Value(String value)
           at Microsoft.Exchange.Data.Storage.MimeStreamWriter.WriteHeader(HeaderId type, String data)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           --- End of inner exception stack trace ---
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeStreamAttachment(StreamAttachmentBase attachment, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeAttachment(MimePartInfo part, MimeFlags flags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimeParts(List`1 parts, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ItemToMimeConverter.WriteMimePart(MimePartInfo part, MimeFlags mimeFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.<>c__DisplayClass2.<WriteMimePart>b__0()
           at Microsoft.Exchange.Data.Storage.ConvertUtils.CallCts(Trace tracer, String methodName, String exceptionString, CtsCall ctsCall)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.WriteMimePart(ItemToMimeConverter converter, MimeStreamWriter writer, OutboundConversionOptions options, MimePartInfo partInfo, MimeFlags conversionFlags)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream)
           at Microsoft.Exchange.Data.Storage.ImapItemConverter.GetBody(Stream outStream, UInt32[] indices)

    From my reading around, the suggestion seems to be to ask users to log in to Outlook / OWA and view the messages there. However, having logged in as the users myself, the messages cannot be found, either through searching or by browsing the folder detailed in the log entry. The server returns the following error to the client: "The message could not be retrieved using the IMAP4 protocol. The message has not been deleted and may be accessible using either Microsoft Outlook or Microsoft Office Outlook Web Access. You can also try contacting the original sender of the message to find out about the contents of the message. Retrieval of this message will be retried when the server is updated with a fix that addresses the problem." Messages were transferred into Exchange by copying them from the old Apple Xserve, accessed using IMAP. So my questions, finally:

    1. Is there any way to get the IMAP Exchange connector to rebuild its cache of messages, since it doesn't seem to be pulling them directly from the MAPI store?
    2. Alternatively, if there is no such cache, any ideas on why these messages don't appear in Outlook or OWA would be gratefully received.

    Many thanks, Mike

  • How can I make myself better at programming while working at a shi* job?

    - by Scrooge
    I recently graduated with an engineering degree in computer science, but my employer (a mid-sized software company) is not using my logical and programming skills. I want to move to a better opportunity, but how do I do that when my experience here is not going to count for much? How do I get a better programming job? The worst part is that I am still reading from books (and not writing code myself) even after I have started working. They are paying me a standard entry-level Indian IT rate, but I really don't care; it's not worth it. Please advise as to how I can do something worthwhile. (I have studied C++, Core Java, and Ruby on Rails, and made a couple of academic projects, but I have no relevant practical real-world experience.) Just to make things easier, let me list a few basic queries:

    1. How do I get better at programming without a good project at my company?
    2. Please suggest projects where I can learn to write practical code (platform doesn't matter).
    3. What is the best place to take part in open source development?
    4. Is it possible to earn slightly more while I learn (apart from my sh** job, I mean)?
    5. What kind of practical projects are best suited for me (i.e. for an entry-level programmer)?

  • How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?

    - by Stu Thompson
    Prelude: I'm a code-monkey that's increasingly taken on SysAdmin duties for my small company. My code is our product, and increasingly we provide the same app as SaaS. About 18 months ago I moved our servers from a premium hosting-centric vendor to a barebones rack pusher in a tier IV data center. (Literally across the street.) This meant doing much more ourselves--things like networking, storage and monitoring. As part of the big move, to replace our leased direct-attached storage from the hosting company, I built a 9TB two-node NAS based on SuperMicro chassis, 3ware RAID cards, Ubuntu 10.04, two dozen SATA disks, DRBD and NFS. It's all lovingly documented in three blog posts: Building up & testing a new 9TB SATA RAID10 NFSv4 NAS: Part I, Part II and Part III. We also set up a Cacti monitoring system. Recently we've been adding more and more data points, like SMART values. I could not have done all this without the awesome boffins at ServerFault. It's been a fun and educational experience. My boss is happy (we saved bucket loads of $$$), our customers are happy (storage costs are down), I'm happy (fun, fun, fun). Until yesterday.

    Outage & Recovery: Some time after lunch we started getting reports of sluggish performance from our application, an on-demand streaming media CMS. About the same time our Cacti monitoring system sent a blizzard of emails. One of the more telling alerts was a graph of iostat await. Performance became so degraded that Pingdom began sending "server down" notifications. The overall load was moderate; there was no traffic spike. After logging onto the application servers, NFS clients of the NAS, I confirmed that just about everything was experiencing highly intermittent and insanely long IO wait times. And once I hopped onto the primary NAS node itself, the same delays were evident when trying to navigate the problem array's file system. Time to fail over; that went well. Within 20 minutes everything was confirmed to be back up and running perfectly.

    Post-Mortem: After any and all system failures I perform a post-mortem to determine the cause of the failure. First thing I did was ssh back into the box and start reviewing logs. It was offline, completely. Time for a trip to the data center. Hardware reset, back up and running. In /var/log/syslog I found this scary looking entry:

        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_00], 6 Currently unreadable (pending) sectors
        Nov 15 06:49:44 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_07], SMART Prefailure Attribute: 1 Raw_Read_Error_Rate changed from 171 to 170
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 16 Currently unreadable (pending) sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Device: /dev/twa0 [3ware_disk_10], 4 Offline uncorrectable sectors
        Nov 15 06:49:45 umbilo smartd[2827]: Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
        Nov 15 06:49:45 umbilo smartd[2827]: # 1  Short offline     Completed: read failure  90%        6576             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 2  Short offline     Completed: read failure  90%        6087             3421766910
        Nov 15 06:49:45 umbilo smartd[2827]: # 3  Short offline     Completed: read failure  10%        5901             656821791
        Nov 15 06:49:45 umbilo smartd[2827]: # 4  Short offline     Completed: read failure  90%        5818             651637856
        Nov 15 06:49:45 umbilo smartd[2827]:

    So I went to check the Cacti graphs for the disks in the array. Here we see that, yes, disk 7 is slipping away just like syslog says it is. But we also see that disk 8's SMART Read Errors are fluctuating. There are no messages about disk 8 in syslog. More interesting is that the fluctuating values for disk 8 directly correlate to the high IO wait times! My interpretation is that disk 8 is experiencing an odd hardware fault that results in intermittent long operation times, and that somehow this fault condition on the disk is locking up the entire array. Maybe there is a more accurate or correct description, but the net result has been that the one disk is impacting the performance of the whole array.

    The Question(s):
    - How can a single disk in a hardware SATA RAID-10 array bring the entire array to a screeching halt?
    - Am I being naïve to think that the RAID card should have dealt with this?
    - How can I prevent a single misbehaving disk from impacting the entire array?
    - Am I missing something?
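
    One way to interrogate a suspect member behind a 3ware controller is smartctl's 3ware passthrough -- a hedged sketch; the controller device and port numbers below follow the syslog naming above and may need adjusting (tw_cli will list the real ports):

        # run a short SMART self-test on the disk at 3ware port 8, then read the result log
        smartctl -d 3ware,8 -t short /dev/twa0
        smartctl -d 3ware,8 -l selftest /dev/twa0
        # compare error counters across all ports to spot a misbehaving member
        for p in $(seq 0 11); do
          echo "== port $p =="
          smartctl -d 3ware,$p -A /dev/twa0 | grep -Ei 'raw_read_error|pending|uncorrect'
        done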

  • Python Profiling not installed in Ubuntu? How do I get it in a virtualenv and without apt-get?

    - by interstar
    According to the Python documentation, the "profile" module is part of the standard library, but I can't find it. On my home machine, I was able to add it using apt-get install (i.e. it's split out into a separate Ubuntu package). On my work machine (also Ubuntu) I'm running in a virtualenv, so apt-get install isn't relevant. I can install Python modules from PyPI using easy_install, but I can't see anything on PyPI that corresponds to the profile module (presumably because it's meant to be part of the standard Python install). So how can I install it in this environment?
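
    A hedged sketch of both routes -- the package name is the real (historical) Ubuntu split, while the copy-into-the-venv paths are assumptions to be matched to your interpreter version:

        # system-wide fix, outside the virtualenv (needs root):
        sudo apt-get install python-profiler   # provides profile.py and pstats.py

        # inside a virtualenv without apt: copy the two pure-Python modules
        # from a matching CPython source tarball into the venv's site-packages
        cp Python-2.6.5/Lib/profile.py Python-2.6.5/Lib/pstats.py \
           "$VIRTUAL_ENV/lib/python2.6/site-packages/"

        # cProfile, the C implementation, usually ships with the interpreter anyway:
        python -c "import cProfile; cProfile.run('sum(range(1000))')"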

  • SQL SERVER – FIX ERROR – Cannot connect to . Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452)

    - by pinaldave
    Just a day ago, I was attempting to connect to my local SQL Server using the IP 127.0.0.1. The IP is my local machine's, and SQL Server is installed on the local box as well. However, whenever I tried to connect to the server, it gave me the following strange error: Cannot connect to 127.0.0.1. Login failed. The login is from an untrusted domain and cannot be used with Windows authentication. (Microsoft SQL Server, Error: 18452) The reason was strange indeed, as I was trying to connect from the local box to the local box, and it said my login was from an untrusted domain. As my system is not part of any domain, this was really confusing to me. Another thing was that I had always been able to connect to SQL Server using 127.0.0.1, so this was a bit strange to me. I started to think about what I had changed since the last time I connected to SQL Server. Suddenly I remembered that I had modified my computer's hosts file for some other purpose. Solution: I opened my hosts file and immediately added an entry like 127.0.0.1 localhost. Once I added it, I was able to reconnect to SQL Server as usual. The location of the hosts file is C:\Windows\System32\drivers\etc. You will find the file named hosts in it; make sure to open it with Notepad. If you are part of a domain and your organization is using Active Directory, make sure that your account is added properly to Active Directory and has the proper security permissions to execute the task. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
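
    For reference, the hosts entry that restored the connection looks like this (file: C:\Windows\System32\drivers\etc\hosts):

        # map the loopback address back to localhost so a connection from
        # 127.0.0.1 is treated as local rather than as an untrusted domain
        127.0.0.1       localhost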

  • What is bondib1 used for on SPARC SuperCluster with InfiniBand, Solaris 11 networking & Oracle RAC?

    - by user12620111
    A co-worker asked the following question about a SPARC SuperCluster InfiniBand network:

        > On the database nodes the RAC nodes communicate over the cluster_interconnect. This is the
        > 192.168.10.0 network on bondib0 (according to the ./crs/install/crsconfig_params NETWORKS setting).
        > What is bondib1 used for? Is it an HA counterpart in case bondib0 dies?

    This is my response:

    Summary: bondib1 is currently only being used for outbound cluster interconnect traffic.

    Details: bondib0 is the cluster_interconnect:

        $ oifcfg getif
        bondeth0  10.129.184.0  global  public
        bondib0  192.168.10.0  global  cluster_interconnect
        ipmpapp0  192.168.30.0  global  public

    bondib0 and bondib1 are on 192.168.10.1 and 192.168.10.2 respectively:

        # ipadm show-addr | grep bondi
        bondib0/v4static  static   ok           192.168.10.1/24
        bondib1/v4static  static   ok           192.168.10.2/24

    Hostnames tied to the IPs are node1-priv1 and node1-priv2:

        # grep 192.168.10 /etc/hosts
        192.168.10.1    node1-priv1.us.oracle.com   node1-priv1
        192.168.10.2    node1-priv2.us.oracle.com   node1-priv2

    For the 4-node RAC interconnect: each node has 2 private IP addresses on the 192.168.10.0 network, and each IP address has an active InfiniBand link and a failover InfiniBand link. Thus, the 4-node RAC interconnect is using a total of 8 IP addresses and 16 InfiniBand links.

    bondib1 isn't being used for the Virtual IP (VIP):

        $ srvctl config vip -n node1
        VIP exists: /node1-ib-vip/192.168.30.25/192.168.30.0/255.255.255.0/ipmpapp0, hosting node node1
        VIP exists: /node1-vip/10.55.184.15/10.55.184.0/255.255.255.0/bondeth0, hosting node node1

    bondib1 is on bondib1_0 and fails over to bondib1_1:

        # ipmpstat -g
        GROUP       GROUPNAME   STATE     FDT       INTERFACES
        ipmpapp0    ipmpapp0    ok        --        ipmpapp_0 (ipmpapp_1)
        bondeth0    bondeth0    degraded  --        net2 [net5]
        bondib1     bondib1     ok        --        bondib1_0 (bondib1_1)
        bondib0     bondib0     ok        --        bondib0_0 (bondib0_1)

    bondib1_0 goes over net24:

        # dladm show-link | grep bond
        LINK                CLASS     MTU    STATE    OVER
        bondib0_0           part      65520  up       net21
        bondib0_1           part      65520  up       net22
        bondib1_0           part      65520  up       net24
        bondib1_1           part      65520  up       net23

    net24 is IB partition FFFF:

        # dladm show-ib
        LINK         HCAGUID         PORTGUID        PORT STATE  PKEYS
        net24        21280001A1868A  21280001A1868C  2    up     FFFF
        net22        21280001CEBBDE  21280001CEBBE0  2    up     FFFF,8503
        net23        21280001A1868A  21280001A1868B  1    up     FFFF,8503
        net21        21280001CEBBDE  21280001CEBBDF  1    up     FFFF

    On Express Module 9, port 2:

        # dladm show-phys -L
        LINK              DEVICE       LOC
        net21             ibp4         PCI-EM1/PORT1
        net22             ibp5         PCI-EM1/PORT2
        net23             ibp6         PCI-EM9/PORT1
        net24             ibp7         PCI-EM9/PORT2

    Outbound traffic on the 192.168.10.0 network will be multiplexed between bondib0 & bondib1:

        # netstat -rn
        Routing Table: IPv4
          Destination           Gateway           Flags  Ref     Use     Interface
        -------------------- -------------------- ----- ----- ---------- ---------
        192.168.10.0         192.168.10.2         U        16    6551834 bondib1
        192.168.10.0         192.168.10.1         U         9    5708924 bondib0

    There is a lot more traffic on bondib0 than on bondib1:

        # /bin/time snoop -I bondib0 -c 100 > /dev/null
        Using device ipnet/bondib0 (promiscuous mode)
        100 packets captured
        real        4.3
        user        0.0
        sys         0.0
        (100 packets in 4.3 seconds = 23.3 pkts/sec)

        # /bin/time snoop -I bondib1 -c 100 > /dev/null
        Using device ipnet/bondib1 (promiscuous mode)
        100 packets captured
        real       13.3
        user        0.0
        sys         0.0
        (100 packets in 13.3 seconds = 7.5 pkts/sec)

    Half of the packets on bondib0 are outbound (from self). The remaining packets are split evenly among the other nodes in the cluster:

        # snoop -I bondib0 -c 100 | awk '{print $1}' | sort | uniq -c
        Using device ipnet/bondib0 (promiscuous mode)
        100 packets captured
          49 node1-priv1.us.oracle.com
          24 node2-priv1.us.oracle.com
          14 node3-priv1.us.oracle.com
          13 node4-priv1.us.oracle.com

    100% of the packets on bondib1 are outbound (from self), but the headers in the packets indicate that they are from the IP address associated with bondib0:

        # snoop -I bondib1 -c 100 | awk '{print $1}' | sort | uniq -c
        Using device ipnet/bondib1 (promiscuous mode)
        100 packets captured
         100 node1-priv1.us.oracle.com

    The destination of the bondib1 outbound packets is split evenly between node3 and node4:

        # snoop -I bondib1 -c 100 | awk '{print $3}' | sort | uniq -c
        Using device ipnet/bondib1 (promiscuous mode)
        100 packets captured
          51 node3-priv1.us.oracle.com
          49 node4-priv1.us.oracle.com

    Conclusion: bondib1 is currently only being used for outbound cluster interconnect traffic.

  • How to change partitioning - may involve conversion of a partition from primary to extended

    - by george_k
    I am having trouble thinking through how I can achieve my partitioning goals. My current partitions are: sda1 (winA) (primary), sda2 (winB) (primary), sda3 (/ for Ubuntu) (primary). What I want to migrate to is (obviously the partition numbers need not be exactly like that): sda1 (winA) (primary), sda2 (winB) (primary), sda3 (/boot) (primary), sda4 (extended), which will contain sda5 (/home), sda6 (/ for Ubuntu), and sda7 (swap). I know I may be asking too much, but what would be a way to do it? One way I have thought of is:

    1. Create a new primary partition for /boot and split it out from the root partition. It shouldn't be too hard. The disk will then have 4 primary partitions.
    2. Somehow convert the root Ubuntu partition from primary to extended.
    3. Split that last partition into 3 logical partitions (root, /home, swap) and move the contents accordingly.

    I am obviously stuck on the 2nd part. Another way could be (maybe easier):

    1. Create an extended partition (or two).
    2. Split /home out into it.
    3. Somehow move everything except /boot to the extended partition.

    This way /boot will stay on the primary partition that exists now (shrunk as needed), and everything else will end up on logical partitions. This may sound better, but I'm not too sure how to do the 3rd part. Some details: the disk is almost empty, so I have space to move things around, shrink the Ubuntu partition, etc. I don't want to touch the Windows partitions in any way. Reinstallation is not an option. Also, using a different partitioning scheme with fewer partitions is not an option (for example, not having a separate /boot). Any ideas?
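
    For step 1 of the first approach (splitting /boot out of the root partition), a rough sketch of the data move from a live CD, shown below -- the device names are placeholders, so substitute your actual partition numbers and back everything up first:

        # assume /dev/sda3 is the new, empty /boot partition and /dev/sda4 is
        # the Ubuntu root -- hypothetical numbers, adjust to your layout
        mkdir -p /mnt/root /mnt/newboot
        mount /dev/sda4 /mnt/root
        mount /dev/sda3 /mnt/newboot
        cp -a /mnt/root/boot/. /mnt/newboot/   # copy kernels, initrds and grub files
        mv /mnt/root/boot /mnt/root/boot.old   # keep the old copy until the new one boots
        mkdir /mnt/root/boot
        # then add the new partition to /mnt/root/etc/fstab and point grub at it:
        grub-install --boot-directory=/mnt/newboot /dev/sda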

  • Split a large text file without losing data records

    - by Santosh
    I have a 140 MB text file which contains detailed information on the books in a library. For each book there is a standard-format block of data in the text file. I need to parse it and insert the data into a database. Parsing the text format itself is not an issue; I am facing problems parsing such a large file. So I decided to split the file into small files of around 2 MB each. But I can't manually split this large file into so many pieces. I found the HJSplit tool, which splits the file, but that doesn't help either: it splits at byte boundaries, so half of one book's details ends up in one file and the rest in the next. If I split that way, information will be lost. How can I split the large file so that I don't lose any records? Is there a tool that can help in this situation?
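
    If each book record begins with a recognizable marker line, a record-aware splitter avoids cutting records in half. A sketch, assuming a hypothetical "Title:" marker at the start of every record -- substitute whatever your standard format actually uses:

        # write 5000 complete records per output file, never splitting a record
        awk -v per=5000 '
          BEGIN { out = sprintf("part_%04d.txt", ++f) }
          /^Title:/ && (n++ % per == 0) && n > 1 { close(out); out = sprintf("part_%04d.txt", ++f) }
          { print > out }
        ' library.txt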

  • ArchBeat Link-o-Rama for October 18, 2013

    - by OTN ArchBeat
    - Enriching XMLType data using relational data – XQuery and fn:collection in action | Lucas Jellema: Another detailed technical post from the always prolific Lucas Jellema.
    - Evil Behind ChangeEventPolicy PPR in CRUD ADF 12c and WebLogic Stuck Threads | Andrejus Baranovskis: The latest post from Oracle ACE Director Andrejus Baranovskis is a bit of a preview of his presentation at the upcoming UKOUG 2013 event.
    - Podcast: Interview with authors of "Hudson Continuous Integration in Practice": For your listening pleasure... Here's an Oracle Author Podcast Interview with "Hudson Continuous Integration in Practice" authors Ed Burns and Winston Prakash.
    - Manual Recovery Mechanisms in SOA Suite and AIA | Shreenidhi Raghuram: Solution architect Shreenidhi Raghuram's post combines information from several sources to provide "a quick reference for Manual Recovery of Faults within the SOA and AIA contexts."
    - Event: Harnessing Oracle Weblogic and Oracle Coherence: This OTN Virtual Developer Day event features eight sessions in two tracks, with presentations and hands-on labs for developers and architects delivered by experts in Weblogic, Coherence, and ADF. Registration is free. November 5th, 2013. 9am-1pm PT / 12pm-4pm ET / 1pm-5pm BRT.
    - Podcast: IoT Challenges and Opportunities - Part 2: Part 2 of the OTN ArchBeat Internet of Things podcast features a roundtable discussion of IoT challenges: massive data streams, security and privacy issues, evolving standards and protocols. Listen!
    - Video: Design - ADF Architectural Patterns - Two for One Deal | Chris Muir: Chris Muir explores the reuse of BTF workspaces across multiple applications and the advantages and disadvantages of reuse at the application level.
    - Thought for the Day: "Can't nothing make your life work if you ain't the architect." — Terry McMillan, American author (born October 18, 1951). Source: brainyquote.com

  • Finding Buried Controls

    - by Bunch
    This post is pretty specific to an issue I had, but it still has some ideas that could be applied in other scenarios. The problem I had was updating a few buttons so their Text values could be set in the code-behind, which had a method to grab the proper value from an external source. This was so that if the application needed to be installed by a customer using a language other than English, or needed a different notation for the button's Text, they could simply update the database. Most of the time this was no big deal. However, I had one instance where the button was part of a control, the button had no set ID, and that control was only found in a dll. So there was no markup to edit for the Button. Also, updating the dll was not an option, so I had to make the best of what I had to work with. In the .cs file for the .aspx page containing the control, I added a Page_LoadComplete handler. The problem button was within a GridView, so I added a foreach to go through each GridViewRow and find the button I needed. Since I did not have an ID to work with, besides a random ctl00$main$DllControl$gvStuff$ctl03$ctl05, using the GridView's FindControl was out. I ended up looping through each GridViewRow; then, if its RowState equaled Edit, looping through the Cells and each control in each Cell to check whether the control was a Panel containing the button. If the control was a Panel, I could then loop through the controls in the Panel, find the Button that had Text of "Update" (that was the hard-coded part) and change it using the method that returns the proper value from the database.

        if (rowState.Contains("Edit"))
        {
            foreach (DataControlFieldCell rowCell in gvr.Cells)
            {
                foreach (Control ctrl in rowCell.Controls)
                {
                    if (ctrl.GetType() == typeof(Panel))
                    {
                        foreach (Control childCtrl in ctrl.Controls)
                        {
                            if (childCtrl.GetType() == typeof(Button))
                            {
                                Button update = (Button)childCtrl;
                                if (update.Text == "Update")
                                {
                                    // swap the hard-coded text for the external value
                                    update.Text = GetButtonText(); // placeholder for the database lookup method
                                }
                            }
                        }
                    }
                }
            }
        }

    Tags: ASP.Net, CSharp

  • ffmpeg installation error

    - by Thomas
    Now that I"m down to the last part to install the FFMPEG it tells me to do the following cd /usr/local/src/ffmpeg/ ./configure --enable-libmp3lame --enable-libogg --enable-libamr-nb --enable-libamr-wb --enable-libvorbis --disable-mmx --enable-shared make make install ln -s /usr/local/lib/libavformat.so.50 /usr/lib/libavformat.so.50 ln -s /usr/local/lib/libavcodec.so.51 /usr/lib/libavcodec.so.51 ln -s /usr/local/lib/libavutil.so.49 /usr/lib/libavutil.so.49 ln -s /usr/local/lib/libmp3lame.so.0 /usr/lib/libmp3lame.so.0 ln -s /usr/local/lib/libavformat.so.51 /usr/lib/libavformat.so.51 When i get to the part ./configure --enable-libmp3lame --enable-libogg --enable-libamr-nb --enable-libamr-wb --enable-libvorbis --disable-mmx --enable-shared I get the error Unknown option "--enable-libogg". See ./configure --help for available options. I've tried removing the --enable-libogg but does not seem to help.

  • What if you've been asked to develop a site and the client later introduces Ts&Cs that you'll breach whilst doing your job?

    - by Matt Lacey
    Disclaimer: this is all made up. Honest. And it represents no clients or employers, living or dead, blah blah blah, etc. [Allegedly] As part of a website I've built, I've now been provided with the Terms and Conditions of site usage to display on the site. These terms--which must be agreed to in order to access the site--include my (or any site visitor's) compliance with a number of clauses. Many of these clauses refer to general computer use and are not tied specifically to use of the site. Some of these clauses refer to things I have had to do previously as a legitimate part of my job and would expect to have to do again. When I've raised similar issues previously, my line manager has said just to ignore it, but that doesn't seem to be the professional thing to do. So, what do I do? Abiding by the terms would mean that I could no longer work on the project, which would cause issues with my employer and the owner of the business the site is being created for. Ignoring them could lead to future issues with the business owner, and is not something I'm necessarily happy with (the deliberate breaking of a legal contract). Neither option is one I'd choose, and either could have major consequences. Any thoughts?

  • Dealing with bad/incomplete/unclear specifications?

    - by eagerMoose
    I'm working on a project where our dev team gets the specifications from the business part of the company. Both business management and IT management require estimates and deadline projections, as they should. The good thing is that estimates are mostly made by the actual developers who will implement the required features. The bad thing is that the specifications are usually either too simple (it turns out you're left with a lot of question marks over your head because a lot of information seems to be missing) or too complex (to the point that you can't even visualize where everything would "fit" in the app). More often than not, the business side of the specs is either incomplete or unaware of what can and can't be done (given the previously implemented business logic). The dev team is given about a day per new spec to produce an estimate, and we do try to clear up uncertainties, usually by meeting with whoever wrote the spec. Most of the time it turns out that the spec writers haven't really thought everything through, and it's usually only when we start designing and developing that we run into trouble, as the spec seems to have a lot of holes. How do you deal with this? Are you generous with estimates in advance?
