Search Results

Search found 8611 results on 345 pages for 'apt fast'.


  • Puppet: Making Windows Awesome Since 2011

    - by Robz / Fervent Coder
    Originally posted on: http://geekswithblogs.net/robz/archive/2014/08/07/puppet-making-windows-awesome-since-2011.aspx Puppet was one of the first configuration management (CM) tools to support Windows, way back in 2011. It has the heaviest investment in Windows infrastructure, with 1/3 of the platform client development staff being Windows folks. It appears that Microsoft believed an end-state configuration tool like Puppet was the way forward, so much so that they cloned Puppet’s DSL (domain-specific language) in many ways and are calling it PowerShell DSC. Puppet Labs is pushing the envelope on Windows. Here are several things to note: Puppet x64 Ruby support for Windows coming in v3.7.0. An awesome ACL module (with ordering, SIDs, and very granular control of permissions, it is the best of any CM). A wealth of modules that work with Windows on the Forge (and more on GitHub). Documentation solely for Windows folks - https://docs.puppetlabs.com/windows. Some of the common learning points for Puppet on Windows users are noted in this recent blog post. Microsoft OpenTech supports Puppet. Azure has the ability to deploy a Puppet Master (http://puppetlabs.com/solutions/microsoft). At Microsoft //Build 2014, in the Day 2 Keynote, Puppet Labs CEO Luke Kanies co-presented with Mark Russinovich (http://channel9.msdn.com/Events/Build/2014/KEY02 – fast forward to 19:30)! Puppet has a Visual Studio plugin! It can be overwhelming learning a new tool like Puppet at first, but Puppet Labs has some resources to help you on that path. Take a look at the Learning VM, which has a quest-based learning tool. For real-time questions, feel free to drop onto #puppet on freenode.net (yes, some folks still use IRC) with questions, and #puppet-dev with thoughts/feedback on the language itself. You can subscribe to the puppet-users / puppet-dev mailing lists. There is also ask.puppetlabs.com for questions and Server Fault if you want to go to a Stack Exchange site. There are books written on learning Puppet. There are even Puppet User Groups (PUGs) and other community resources! Puppet does take some time to learn, but as with anything you need to learn, you need to weigh the benefits against the ramp-up time. I learned NHibernate once; it had a very high ramp-up time back then, but it was the only game in town. Puppet’s ramp-up time is considerably less than that. The advantage is that you are learning a DSL, and it can apply to multiple platforms (Linux, Windows, OS X, etc.) with the same Puppet resource constructs. As you learn Puppet you may wonder why it has a DSL instead of just leveraging the language of Ruby (or maybe this is one of those things that keeps you up wondering at night). I like the DSL over a small layer on top of Ruby. It allows the Puppet language to be portable and go more places. It makes you think about the end state of what you want to achieve in a declarative sense instead of an imperative sense. You may also find that right now Puppet doesn’t run manifests (scripts) in the order the resources are specified. This is the number one learning point for most folks. Manifest ordering has long been a point of consternation for some folks, because it was not possible in the past. In fact it might be why some other CMs exist! As of 3.3.0, Puppet can do manifest ordering, and it will be the default in Puppet 4. http://puppetlabs.com/blog/introducing-manifest-ordered-resources You may have caught earlier that I mentioned PowerShell DSC. But what about DSC? Shouldn’t that be what Windows users want to choose? 
Other CMs are integrating with DSC; will Puppet follow suit and integrate with DSC? The biggest concern that I have with DSC is its lack of visibility into fine-grained reporting of changes (which Puppet has). The other is that it is a very young Microsoft product (pre-version 3, you know what they say :) ). I tried getting it working in December and ran into some issues. I’m hoping newer releases actually work; it does have some promising capabilities, it just doesn’t quite come up to the standard of something that should be used in production. In contrast, Puppet is almost a ten-year-old language with an active community! It’s very stable, and when trusting your business to configuration management, you want something that has been around awhile and has been proven. Give DSC another couple of releases and you might see more folks integrating with it. That said, there may be a future with DSC integration. Portability and fine-grained reporting of configuration changes are reasons to take a closer look at Puppet on Windows. Yes, Puppet on Windows is here to stay, and it’s continually getting better, folks.

    Read the article

  • MySQL Connect Only 10 Days Away - Focus on InnoDB Sessions

    - by Bertrand Matthelié
    Time flies and MySQL Connect is only 10 days away! You can check out the full program here as well as in the September edition of the MySQL newsletter. Mat recently blogged about the MySQL Cluster sessions you’ll have the opportunity to attend, and below are those focused on InnoDB. Remember you can plan your schedule with Schedule Builder. Saturday, 1.00 pm, Room Golden Gate 3: 10 Things You Should Know About InnoDB—Calvin Sun, Oracle InnoDB is the default storage engine for Oracle’s MySQL as of MySQL Release 5.5. It provides the standard ACID-compliant transactions, row-level locking, multiversion concurrency control, and referential integrity. InnoDB also implements several innovative technologies to improve its performance and reliability. This presentation gives a brief history of InnoDB; its main features; and some recent enhancements for better performance, scalability, and availability. Saturday, 5.30 pm, Room Golden Gate 4: Demystified MySQL/InnoDB Performance Tuning—Dimitri Kravtchuk, Oracle This session covers performance tuning with MySQL and the InnoDB storage engine for MySQL and explains the main improvements made in MySQL Release 5.5 and Release 5.6. Which setting for which workload? Which value will be better for my system? How can I avoid potential bottlenecks from the beginning? Do I need a purge thread? Is it true that InnoDB doesn't need thread concurrency anymore? These and many other questions are asked by DBAs and developers. Things are changing quickly and constantly, and there is no “silver bullet.” But understanding the configuration setting’s impact is already a huge step in performance improvement. Bring your ideas and problems to share them with others—the discussion is open, just moderated by a speaker. Sunday, 10.15 am, Room Golden Gate 4: Better Availability with InnoDB Online Operations—Calvin Sun, Oracle Many top Web properties rely on Oracle’s MySQL as a critical piece of infrastructure for serving millions of users. Database availability has become increasingly important. One way to enhance availability is to give users full access to the database during data definition language (DDL) operations. The online DDL operations in recent MySQL releases offer users the flexibility to perform schema changes while having full access to the database—that is, with minimal delay of operations on a table and without rebuilding the entire table. These enhancements provide better responsiveness and availability in busy production environments. This session covers these improvements in the InnoDB storage engine for MySQL for online DDL operations such as add index, drop foreign key, and rename column. Sunday, 11.45 am, Room Golden Gate 7: Developing High-Throughput Services with NoSQL APIs to InnoDB and MySQL Cluster—Andrew Morgan and John Duncan, Oracle Ever-increasing performance demands of Web-based services have generated significant interest in providing NoSQL access methods to MySQL (MySQL Cluster and the InnoDB storage engine of MySQL), enabling users to maintain all the advantages of their existing relational databases while providing blazing-fast performance for simple queries. Get the best of both worlds: persistence; consistency; rich SQL queries; high availability; scalability; and simple, flexible APIs and schemas for agile development. This session describes the memcached connectors and examines some use cases for how MySQL and memcached fit together in application architectures. 
It does the same for the newest MySQL Cluster native connector, an easy-to-use, fully asynchronous connector for Node.js. Sunday, 1.15 pm, Room Golden Gate 4: InnoDB Performance Tuning—Inaam Rana, Oracle The InnoDB storage engine has always been highly efficient and includes many unique architectural elements to ensure high performance and scalability. In MySQL 5.5 and MySQL 5.6, InnoDB includes many new features that take better advantage of recent advances in operating systems and hardware platforms than previous releases did. This session describes unique InnoDB architectural elements for performance, new features, and how to tune InnoDB to achieve better performance. Sunday, 4.15 pm, Room Golden Gate 3: InnoDB Compression for OLTP—Nizameddin Ordulu, Facebook and Inaam Rana, Oracle Data compression is an important capability of the InnoDB storage engine for Oracle’s MySQL. Compressed tables reduce the size of the database on disk, resulting in fewer reads and writes and better throughput by reducing the I/O workload. Facebook pushes the limit of InnoDB compression and has made several enhancements to InnoDB, making this technology ready for online transaction processing (OLTP). In this session, you will learn the fundamentals of InnoDB compression. You will also learn the enhancements the Facebook team has made to improve InnoDB compression, such as reducing compression failures, not logging compressed page images, and allowing changes of compression level. Not registered yet? You can still save US$ 300 over the on-site fee – Register Now!

    Read the article

  • Sea Monkey Sales & Marketing, and what does that have to do with ERP?

    - by user709270
    Tier One Defined By Lyle Ekdahl, Oracle JD Edwards Group Vice President and General Manager  I recently became aware of the latest Sea Monkey Sales & Marketing tactic. Wait now, what is Sea Monkey Sales & Marketing and what does that have to do with ERP? Well, if you grew up in the USA during the ’50s, ’60s and maybe a bit in the early ’70s, there was a unifying medium of culture known as the comic book. I was a big Iron Man fan. I always liked the troubled-hero aspect of Tony Stark, and hey, he was a technologist. This is going somewhere, just hold on. Of course comic books, like most media, contained advertisements. Ninety-pound weakling transformed by Charles Atlas in just 15 minutes per day. Baby Ruth, Juicy Fruit Gum and all assortments of Hostess goodies were on display. The best ad was for the “Amazing Live Sea-Monkeys – The real live fun-pets you grow yourself!” These ads set the standard for exaggeration and half-truth: “…they love attention…so eager to please, they can even be trained…” The cartoon picture on the ad is of a family of royal-looking sea creatures – daddy, mommy, son and little sis – sea monkey? There was a disclaimer at the bottom in fine print, “Caricatures shown not intended to depict Artemia.” OK, what ten-year-old knows what the heck Artemia is? Well, you grow up fast once you’ve been separated from your buck twenty-five plus postage just to discover that it is brine shrimp. Really dumb brine shrimp that don’t take commands or do tricks. Unfortunately the technology industry is full of sea monkey sales and marketing. Yes, believe it or not, in some cases there is subterfuge and obfuscation used to secure contracts. Hey, I get it; the picture on the box might not be the actual size. Make up what you want about your own product, but here is what I don’t like: could you leave out the obvious falsity when it comes to my product, especially the negative stuff? So here is the latest one – “Oracle’s JD Edwards is NOT tier one”. Really? Definition, please! Well, a whole host of googleable and reputable sources confirm that a tier one vendor is large, well known, and enjoys national and international recognition. Let me see: large, so thousands of customers? Oh, and part of the world’s largest business software and hardware corporation? Check and check; JD Edwards has both. Well known, enjoying national and international recognition? Oracle’s JD Edwards EnterpriseOne is available in 21 languages and is directly localized in 33 countries, supporting some of the world’s largest multinationals and many midsized domestic market companies. Something on the order of half the JD Edwards customer base is outside North America. My passport is on its third insert after 2 years, and not from vacations. So if you don’t mind, I am going to mark national and international recognition in the “got it” column. So what else is there? Well, let me offer a few criteria. Longevity – The JD Edwards products benefit from 35+ years of intellectual property development; through booms, busts, mergers and acquisitions, we are still here. Vision & innovation – JD Edwards is the first full-suite ERP to run on the iPad, as just one example. Proven track record of execution – Since becoming part of Oracle, JD Edwards has released to the market over 20 deliverables, including major releases, point releases, new apps modules, tool releases, integrations…. 
Solid, focused functionality with a flexible, interoperable, extensible underlying architecture – JD Edwards offers solid core ERP with specialty modules for verticals all delivered on a well defined independent tools layer that helps enable you to scale your business without an ERP reimplementation A continuation plan – Oracle’s JD Edwards offers our customers a 6 year roadmap as well as interoperability with Oracle’s next generation of applications Oh I almost forgot that the expert sources agree on one additional thing, tier one may be a preferred vendor that offers product and services to you with appealing value. You should check out the TCO studies of JD Edwards. I think you will see what the thousands of customers that rely on these products to run their businesses enjoy – that is the tier one solution with the lowest TCO. Oh and if you get an offer to buy an ERP for no license charge, remember the picture on the box might not be the actual size. 

    Read the article

  • What should you bring to the table as a Software Architect?

    - by Ahmad Mageed
    There have been many questions with good answers about the role of a Software Architect (SA) on StackOverflow and Programmers SE. I am trying to ask a slightly more focused question than those. The very definition of a SA is broad so for the sake of this question let's define a SA as follows: A Software Architect guides the overall design of a project, gets involved with coding efforts, conducts code reviews, and selects the technologies to be used. In other words, I am not talking about managerial rest and vest at the crest (further rhyming words elided) types of SAs. If I were to pursue any type of SA position I don't want to be away from coding. I might sacrifice some time to interface with clients and Business Analysts etc., but I am still technically involved and I'm not just aware of what's going on through meetings. With these points in mind, what should a SA bring to the table? Should they come in with a mentality of "laying down the law" (so to speak) and enforcing the usage of certain tools to fit "their way," i.e., coding guidelines, source control, patterns, UML documentation, etc.? Or should they specify initial direction and strategy then be laid back and jump in as needed to correct the ship's direction? Depending on the organization this might not work. An SA who relies on TFS to enforce everything may struggle to implement their plan at an employer that only uses StarTeam. Similarly, an SA needs to be flexible depending on the stage of the project. If it's a fresh project they have more choices, whereas they might have less for existing projects. Here are some SA stories I have experienced as a way of sharing some background in hopes that answers to my questions might also shed some light on these issues: I've worked with an SA who code reviewed literally every single line of code of the team. The SA would do this for not just our project but other projects in the organization (imagine the time spent on this). At first it was useful to enforce certain standards, but later it became crippling. FxCop was how the SA would find issues. Don't get me wrong, it was a good way to teach junior developers and force them to think of the consequences of their chosen approach, but for senior developers it was seen as somewhat draconian. One particular SA was against the use of a certain library, claiming it was slow. This forced us to write tons of code to achieve things differently while the other library would've saved us a lot of time. Fast forward to the last month of the project and the clients were complaining about performance. The only solution was to change certain functionality to use the originally ignored approach despite early warnings from the devs. By that point a lot of code was thrown out and not reusable, leading to overtime and stress. Sadly the estimates used for the project were based on the old approach which my project was forbidden from using so it wasn't an appropriate indicator for estimation. I would hear the PM say "we've done this before," when in reality they had not since we were using a new library and the devs working on it were not the same devs used on the old project. The SA who would enforce the usage of DTOs, DOs, BOs, Service layers and so on for all projects. New devs had to learn this architecture and the SA adamantly enforced usage guidelines. Exceptions to usage guidelines were made when it was absolutely difficult to follow the guidelines. The SA was grounded in their approach. 
Classes for DTOs and all CRUD operations were generated via CodeSmith and database schemas were another similar ball of wax. However, having used this setup everywhere, the SA was not open to new technologies such as LINQ to SQL or Entity Framework. I am not using this post as a platform for venting. There were positive and negative aspects to my experiences with the SA stories mentioned above. My questions boil down to: What should an SA bring to the table? How can they strike a balance in their decision making? Should one approach an SA job (as defined earlier) with the mentality that they must enforce certain ground rules? Anything else to consider? Thanks! I'm sure these job tasks are easily extended to people who are senior devs or technical leads, so feel free to answer at that capacity as well.

    Read the article

  • ASSIMP in my program is much slower to import than ASSIMP view program

    - by Marco
    The problem is really simple: if I try to load a big model (around 200,000 vertices) with the function aiImportFileExWithProperties in my software, it takes more than one minute. If I try to load the very same model with ASSIMP view, it takes 2 seconds. For this comparison, both my software and Assimp view are using the 64-bit DLL version of the library, compiled by myself (Assimp64.dll). This is the relevant piece of code in my software:

        // default pp steps
        unsigned int ppsteps =
            aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
            aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
            aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
            aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
            aiProcess_RemoveRedundantMaterials | // remove redundant materials
            aiProcess_FindDegenerates          | // remove degenerated polygons from the import
            aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
            aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
            aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
            aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
            aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
            aiProcess_OptimizeMeshes           | // join small meshes, if possible
            aiProcess_SplitByBoneCount         | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
            0;

        cout << "Loading " << pFile << "... ";

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, 80.f);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, aiPrimitiveType_LINE | aiPrimitiveType_POINT);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C-API to load the file
        scene = (aiScene*)aiImportFileExWithProperties(pFile.c_str(),
            ppsteps |                         /* default pp steps */
            aiProcess_GenSmoothNormals      | // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes      | // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate           | // triangulate polygons with more than 3 edges
            //aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space
            aiProcess_SortByPType           | // make 'clean' meshes which consist of a single type of primitives
            0,
            NULL,
            props);
        aiReleasePropertyStore(props);

        if (!scene) {
            cout << aiGetErrorString() << endl;
            return 0;
        }

    This is the relevant piece of code in the assimp view code:

        // default pp steps
        unsigned int ppsteps =
            aiProcess_CalcTangentSpace         | // calculate tangents and bitangents if possible
            aiProcess_JoinIdenticalVertices    | // join identical vertices / optimize indexing
            aiProcess_ValidateDataStructure    | // perform a full validation of the loader's output
            aiProcess_ImproveCacheLocality     | // improve the cache locality of the output vertices
            aiProcess_RemoveRedundantMaterials | // remove redundant materials
            aiProcess_FindDegenerates          | // remove degenerated polygons from the import
            aiProcess_FindInvalidData          | // detect invalid model data, such as invalid normal vectors
            aiProcess_GenUVCoords              | // convert spherical, cylindrical, box and planar mapping to proper UVs
            aiProcess_TransformUVCoords        | // preprocess UV transformations (scaling, translation ...)
            aiProcess_FindInstances            | // search for instanced meshes and remove them by references to one master
            aiProcess_LimitBoneWeights         | // limit bone weights to 4 per vertex
            aiProcess_OptimizeMeshes           | // join small meshes, if possible
            aiProcess_SplitByBoneCount         | // split meshes with too many bones. Necessary for our (limited) hardware skinning shader
            0;

        aiPropertyStore* props = aiCreatePropertyStore();
        aiSetImportPropertyInteger(props, AI_CONFIG_IMPORT_TER_MAKE_UVS, 1);
        aiSetImportPropertyFloat(props, AI_CONFIG_PP_GSN_MAX_SMOOTHING_ANGLE, g_smoothAngle);
        aiSetImportPropertyInteger(props, AI_CONFIG_PP_SBP_REMOVE, nopointslines ? aiPrimitiveType_LINE | aiPrimitiveType_POINT : 0);
        aiSetImportPropertyInteger(props, AI_CONFIG_GLOB_MEASURE_TIME, 1);
        //aiSetImportPropertyInteger(props, AI_CONFIG_PP_PTV_KEEP_HIERARCHY, 1);

        // Call ASSIMP's C-API to load the file
        g_pcAsset->pcScene = (aiScene*)aiImportFileExWithProperties(g_szFileName,
            ppsteps |                       /* configurable pp steps */
            aiProcess_GenSmoothNormals    | // generate smooth normal vectors if not existing
            aiProcess_SplitLargeMeshes    | // split large, unrenderable meshes into submeshes
            aiProcess_Triangulate         | // triangulate polygons with more than 3 edges
            aiProcess_ConvertToLeftHanded | // convert everything to D3D left handed space
            aiProcess_SortByPType         | // make 'clean' meshes which consist of a single type of primitives
            0,
            NULL,
            props);
        aiReleasePropertyStore(props);

    As you can see, the code is nearly identical because I copied it from assimp view. What could be the reason for such a difference in performance? The two programs are using the same DLL, Assimp64.dll (compiled on my computer with VC++ 2010 Express), and the same function aiImportFileExWithProperties to load the model, so I assume that the actual code employed is the same. How is it possible that the function aiImportFileExWithProperties is 100 times slower when called by my software than when called by assimp view? What am I missing? I am not good with DLLs and dynamic and static libraries, so I might be missing something obvious.

    ------------------------------

    UPDATE: I found out the reason why the code is going slower. Basically, I was running my software with "Start debugging" in VC++ 2010 Express. If I run the code outside VC++ 2010, I get the same performance as assimp view. However, now I have a new question. Why does the DLL perform slower under the VC++ debugger? I compiled it in release mode without debugging information. Is there any way to have the DLL go fast in debug mode, i.e. without debugging the DLL? I am interested in debugging only my own code, not the DLL, which I assume is already working fine. I do not want to wait 2 minutes every time I want to load my software to debug. Does this request make sense?
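
    Below is a minimal sketch, not part of the original question, of timing the Assimp import call in isolation so you can confirm whether the time is spent inside the library or elsewhere in the startup path. It assumes a current Assimp header layout (assimp/cimport.h and friends) and uses the plain aiImportFile entry point rather than the property-store variant shown above; the model path and flag set are placeholders.

        #include <assimp/cimport.h>      // aiImportFile, aiReleaseImport, aiGetErrorString
        #include <assimp/postprocess.h>  // aiProcess_* post-processing flags
        #include <assimp/scene.h>        // aiScene
        #include <chrono>
        #include <iostream>

        int main()
        {
            const char* path = "model.obj";                    // placeholder model path
            const unsigned int flags = aiProcess_Triangulate   // small, representative flag set
                                     | aiProcess_JoinIdenticalVertices;

            const auto t0 = std::chrono::steady_clock::now();
            const aiScene* scene = aiImportFile(path, flags);  // time only the import itself
            const auto t1 = std::chrono::steady_clock::now();

            std::cout << "aiImportFile took "
                      << std::chrono::duration_cast<std::chrono::milliseconds>(t1 - t0).count()
                      << " ms" << std::endl;

            if (!scene) {
                std::cout << aiGetErrorString() << std::endl;
                return 1;
            }
            aiReleaseImport(scene);
            return 0;
        }

    On the debug-mode question in the update: one commonly cited cause of a release-built DLL running far slower only under "Start debugging" is the Windows debug heap that Visual Studio enables for processes it launches; setting the environment variable _NO_DEBUG_HEAP=1 in the project's Debugging settings is a common way to rule that out. This is offered as something to check rather than a confirmed diagnosis.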

    Read the article

  • SQL SERVER – Weekly Series – Memory Lane – #006

    - by pinaldave
    Here is the list of selected articles from SQLAuthority.com across all these years. Instead of just listing all the articles, I have selected a few of my favorite articles and listed them here with additional notes below each. Let me know which of the following is your favorite article from memory lane. 2006 This was my very first year of blogging, so I was learning something new every day. As I have said many times, blogging was never the intention. I had not really understood what exactly I was working on, or beginning, when I started blogging in 2006. I never knew that my life was going to change forever once I started blogging. When I look back at all of these years, I am happy that we are here together. 2007 IT Outsourcing to India – Top 10 Reasons Companies Outsource Outsourcing is about trust, collaboration and success. Helping other countries in need has always been the course of mankind; outsourcing is nothing different than that. With information technology and process improvements increasing the complexity, costs and skills required to accomplish routine tasks as well as challenging complex tasks, companies are outsourcing such tasks to providers who have the expertise to perform them at lower costs, with greater value and quality outcomes. UDF – Remove Duplicate Chars From String This was a very interesting function I wrote early in my career. I am still using this function when I have to remove duplicate chars from strings. I have yet to come across a scenario where it does not work, so I keep on using it to this day. Please leave a comment if there is any better solution to this problem. FIX : Error : 3702 Cannot drop database because it is currently in use This is a very generic error that occurs when the DROP DATABASE command is executed and the database is not dropped. The common mistake is that the user keeps a connection open to the database while trying to drop it. The database cannot be dropped if there is any other connection open to it. It is always a good idea to take the database into single user mode before dropping it. Here is the quick tutorial regarding how to bring the database into single user mode: Using T-SQL | Using SSMS. 2008 Install SQL Server 2008 – How to Upgrade to SQL Server 2008 – Installation Tutorial This was indeed one of the most popular articles on SQL Server 2008. Lots of people wanted to learn how to install SQL Server 2008, but they were facing various issues during installation. I built this tutorial, which became a reference point for many. Default Collation of SQL Server 2008 What is the collation of SQL Server 2008 default installations? I often see this question confuse many experienced developers as well. Well, the answer is in the following image. Ahmedabad SQL Server User Group Meeting – November 2008 User group meetings are fun; nowadays I am going to user group meetings every week, but there was a time when I was just a beginner on this subject. The community bug caught me years ago when I started to present at the Ahmedabad and Gandhinagar SQL Server User Groups. 2009 Validate an XML document in TSQL using XSD My friend Jacob Sebastian wrote an excellent article on the subject of XML and XSD. Because of the ‘eXtensible’ nature of XML (eXtensible Markup Language), there is often a requirement to restrict and validate the content of an XML document to a pre-defined structure and values. XSD (XML Schema Definition Language) is the W3C recommended language for describing and validating XML documents. 
SQL Server implements XSD as XML Schema Collections. Star Join Query Optimization At present, when queries are sent to very large databases, millions of rows are returned. Users also have to endure extended query response times when such queries involve joining multiple tables. ‘Star Join Query Optimization’ is a new feature of SQL Server 2008 Enterprise Edition. This mechanism uses bitmap filtering to improve the performance of some types of queries through the effective retrieval of rows from fact tables. 2010 These puzzles are very interesting and intriguing – there was lots of interest in this subject. If you have free time this weekend, you may want to try them out. SQL SERVER – Challenge – Puzzle – Usage of FAST Hint (Solution)  SQL SERVER – Puzzle – Challenge – Error While Converting Money to Decimal (Solution)  SQL SERVER – Challenge – Puzzle – Why does RIGHT JOIN Exists (Open)  Additionally, I had great fun presenting a SQL Server Performance Tuning seminar at fantastic locations in Hyderabad. Installing AdventureWorks Database This has been the most popular request I have received on my blog. Here is a quick video about how one can install AdventureWorks. 2011 Effect of SET NOCOUNT on @@ROWCOUNT There was an interesting incident once while I was presenting a session. I wrote some code and suddenly 10 hands went up in the air. This was a bit of a surprise to me, as I did not know why they all got alerted. I assumed that there was something wrong with either the projector, the screen, or my display. However, the real reason was very interesting – I suggest you read the complete blog post to understand this interesting scenario. Error: Deleting Offline Database and Creating the Same Name This is very interesting because once a user deletes an offline database, the MDF and LDF files still exist, and if the user attempts to create a new database with the same name it will give an error. I found this very interesting, and the blog explains the concept very quickly. Have you ever faced a similar situation? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Memory Lane, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology

    Read the article

  • New Replication, Optimizer and High Availability features in MySQL 5.6.5!

    - by Rob Young
    As the Product Manager for the MySQL database it is always great to announce when the MySQL Engineering team delivers another great product release.  As a field DBA and developer it is even better when that release contains improvements and innovation that I know will help those currently using MySQL for apps that range from modest intranet sites to the most highly trafficked web sites on the web.  That said, it is my pleasure to take my hat off to MySQL Engineering for today's release of the MySQL 5.6.5 Development Milestone Release ("DMR"). The new highlighted features in MySQL 5.6.5 are discussed here: New Self-Healing Replication ClustersThe 5.6.5 DMR improves MySQL Replication by adding Global Transaction Ids and automated utilities for self-healing Replication clusters.  Prior to 5.6.5 this has been somewhat of a pain point for MySQL users with most developing custom solutions or looking to costly, complex third-party solutions for these capabilities.  With 5.6.5 these shackles are all but removed by a solution that is included with the GPL version of the database and supporting GPL tools.  You can learn all about the details of the great, problem solving Replication features in MySQL 5.6 in Mat Keep's Developer Zone article.  New Replication Administration and Failover UtilitiesAs mentioned above, the new Replication features, Global Transaction Ids specifically, are now supported by a set of automated GPL utilities that leverage the new GTIDs to provide administration and manual or auto failover to the most up to date slave (that is the default, but user configurable if needed) in the event of a master failure. The new utilities, along with links to Engineering related blogs, are discussed in detail in the DevZone Article noted above. Better Query Optimization and ThroughputThe MySQL Optimizer team continues to amaze with the latest round of improvements in 5.6.5. Along with much refactoring of the legacy code base, the Optimizer team has improved complex query optimization and throughput by adding these functional improvements: Subquery Optimizations - Subqueries are now included in the Optimizer path for runtime optimization.  Better throughput of nested queries enables application developers to simplify and consolidate multiple queries and result sets into a single unit or work. Optimizer now uses CURRENT_TIMESTAMP as default for DATETIME columns - For simplification, this eliminates the need for application developers to assign this value when a column of this type is blank by default. Optimizations for Range based queries - Optimizer now uses ready statistics vs Index based scans for queries with multiple range values. Optimizations for queries using filesort and ORDER BY.  Optimization criteria/decision on execution method is done now at optimization vs parsing stage. Print EXPLAIN in JSON format for hierarchical readability and Enterprise tool consumption. You can learn the details about these new features as well all of the Optimizer based improvements in MySQL 5.6 by following the Optimizer team blog. You can download and try the MySQL 5.6.5 DMR here. (look under "Development Releases")  Please let us know what you think!  The new HA utilities for Replication Administration and Failover are available as part of the MySQL Workbench Community Edition, which you can download here .Also New in MySQL LabsAs has become our tradition when announcing DMRs we also like to provide "Early Access" development features to the MySQL Community via the MySQL Labs.  
Today is no exception as we are also releasing the following to Labs for you to download, try and let us know your thoughts on where we need to improve:InnoDB Online OperationsMySQL 5.6 now provides Online ADD Index, FK Drop and Online Column RENAME.  These operations are non-blocking and will continue to evolve in future DMRs.  You can learn the grainy details by following John Russell's blog.InnoDB data access via Memcached API ("NotOnlySQL") - Improved refresh of an earlier feature releaseSimilar to Cluster 7.2, MySQL 5.6 provides direct NotOnlySQL access to InnoDB data via the familiar Memcached API. This provides the ultimate in flexibility for developers who need fast, simple key/value access and complex query support commingled within their applications.Improved Transactional Performance, ScaleThe InnoDB Engineering team has once again under promised and over delivered in the area of improved performance and scale.  These improvements are also included in the aggregated Spring 2012 labs release:InnoDB CPU cache performance improvements for modern, multi-core/CPU systems show great promise with internal tests showing:    2x throughput improvement for read only activity 6x throughput improvement for SELECT range Read/Write benchmarks are in progress More details on the above are available here. You can download all of the above in an aggregated "InnoDB 2012 Spring Labs Release" binary from the MySQL Labs. You can also learn more about these improvements and about related fixes to mysys mutex and hash sort by checking out the InnoDB team blog.MySQL 5.6.5 is another installment in what we believe will be the best release of the MySQL database ever.  It also serves as a shining example of how the MySQL Engineering team at Oracle leads in MySQL innovation.You can get the overall Oracle message on the MySQL 5.6.5 DMR and Early Access labs features here. As always, thanks for your continued support of MySQL, the #1 open source database on the planet!

    Read the article

  • Surface and the Uphill Battle to Win Over iPad Users (Namely: Me)

    - by D'Arcy Lussier
    I went away this past weekend and decided to bring along the Windows 8 tablet from the Build conference last year – y’know, to give Windows 8 a try in a typical scenario. I also brought our iPad 2 along since I figured my wife would want to use that. I’d love to tell you how I found using my Windows 8 tablet but I can’t – I used the iPad exclusively the entire weekend. It was during this that I realized what Microsoft needs to do to win me over as an iPad user. As you’ll see, I’m left wondering what it is that Surface is meant to compete with: iPad and other tablets, or thin laptops like the MacBook Air or Ultrabooks. Device Size I really like the size of the iPad compared with the Build tablet. It’s not as long and the thinness/weight of the device makes it feel more like you’re holding a magazine than a computer. I’m pleased that Microsoft will be matching the thinness of the iPad with Surface, but I’m suspect as to what that actually means. The iPad’s edges slant inwards where the Surface has a thicker boxish look (similar to the iPhone 4S). So while they may have the same depth at the deepest part of both devices, I bet the iPad will come off feeling thinner. However, its not lost on me the number of external port options the Surface’s design provides over the iPad (Usb, etc.). With that said, I haven’t missed having a USB slot on my iPad. I’m not a fan of lengthening the Surface screen size to almost a full inch over the iPad, mainly because… Vertical Orientation Experience Did you notice at the announce event, in the images of the devices that have been released, and in any marketing for it, that the surface is always displayed in horizontal orientation. This is a huge beef I have with my Build tablet and why I prefer the iPad. Yes the iPad can do the wide-screenish mode, but the iPad is oriented to be vertical by nature. Don’t agree? Look at the button and camera placement – both on the shorter sides of the device. Compare that with the Surface, where the orientation for the button and camera is on the longer sides. To be fair, Blackberry and the horde of Android tablets out there haven’t gotten this either – since most monitors are widescreen nowadays tablets should be too right? Wrong. Widescreen is great for certain things, but tasks such as reading is not one of them – hence why monitor companies like Dell provide stands that allow you to flip your widescreen monitor to a vertical orientation. That Microsoft has chosen a horizontal orientation by default for Windows 8 is disappointing – hopefully hardware manufacturers will be given the option of a default vertical orientation. Fast Startup Time I like that I can turn off/turn on the iPad very quickly. Even from a true “off” mode and not just sleeping, the iPad boots up very quickly. Windows RT needs to have that same quick response. If I start finding that I’m waiting for the device to boot up for more than 30 seconds that could be a show stopper. No Heat I really hate that the Build tablet has fans that kick in to cool the procs, but its basically a slate computer and I get its part of that prototype build. For Surface, it needs to be the same type of experience as the iPad – no heat! I know Surface doesn’t have fans and uses some cool new vent system or something like that, but even then – I want to sit and read a book on my Surface without having to feel any heat coming from the device, which is the experience I have with the iPad now. What About Apps?! I am definitely not the target client when it comes to app stores. 
On my iPad I use: Safari Kindle Reader Twitter App Settlers of Catan TSN’s App And that’s it. So really, while its nice that some version of Office might be available, I’m not planning on utilizing a Surface for creating a PowerPoint or working on a Word document – that’s what my laptop is for. I want my tablet to be for information snacking or as an e-reader and occasionally an entertainment device. Surface vs iPad or Surface vs Air? The more that I read up on Surface, the more I wonder if it won’t be a touch-enabled MacBook Air competitor more than an iPad one. Also, I really question if Microsoft gets tablets – when one of your main selling features is a built-in physical keyboard it speaks more to a traditional laptop experience than a tablet one that’s entirely reliant on touch. Still, I really love the Windows Phone interface – way more than iOS – so I’m still very optimistic that the Metro experience on the tablet will be fantastic. I just worry that Microsoft has interpreted a tablet as a computer with a removable keyboard and a touch screen, and that’s not what tablet computing is about at all.

    Read the article

  • Amanda Todd&ndash;What Parents Can Learn From Her Story

    - by D'Arcy Lussier
    Amanda Todd was a bullied teenager who committed suicide this week. Her story has become headline news due in part to her You Tube video she posted telling her story:   The story is heartbreaking for so many reasons, but I wanted to talk about what we as parents can learn from this. Being the dad to two girls, one that’s 10, I’m very aware of the dangers that the internet holds. When I saw her story, one thing jumped out at me – unmonitored internet access at an early age. My daughter (then 9) came home from a friends place once and asked if she could be in a YouTube video with her friend. Apparently this friend was allowed to do whatever she wanted on the internet, including posting goofy videos. This set off warning bells and we ensured our daughter realized the dangers and that she was not to ever post videos of herself. In looking at Amanda’s story, the access to unmonitored internet time along with just being a young girl and being flattered by an online predator were the key events that ultimately led to her suicide. Yes, the reaction of her classmates and “friends” was horrible as well, I’m not diluting that. But our youth don’t fully understand yet that what they do on the internet today will follow them potentially forever. And the people they meet online aren’t necessarily who they claim to be. So what can we as parents learn from Amanda’s story? Parents Shouldn’t Feel Bad About Being Internet Police Our job as parents is in part to protect our kids and keep them safe, even if they don’t like our measures. This includes monitoring, supervising, and restricting their internet activities. In our house we have a family computer in the living room that the kids can watch videos and surf the web. It’s in plain view of everyone, so you can’t hide what you’re looking at. If our daughter goes to a friend’s place, we ask about what they did and what they played. If the computer comes up, we ask about what they did on it. Luckily our daughter is very up front and honest in telling us things, so we have very open discussions. Parents Need to Be Honest About the Dangers of the Internet I’m sure every generation says that “kids grow up so fast these days”, but in our case the internet really does push our kids to be exposed to things they otherwise wouldn’t experience. One wrong word in a Google search, a click of a link in a spam email, or just general curiosity can expose a child to things they aren’t ready for or should never be exposed to (and I’m not just talking about adult material – have you seen some of the graphic pictures from war zones posted on news sites recently?). Our stance as parents has been to be open about discussing the dangers with our kids before they encounter any content – be proactive instead of reactionary. Part of this is alerting them to the monsters that lurk on the internet as well. As kids explore the world wide web, they’re eventually going to encounter some chat room or some Facebook friend invite or other personal connection with someone. More than ever kids need to be educated on the dangers of engaging with people online and sharing personal information. 
You can think of it as an evolved discussion that our parents had with us about using the phone: “Don’t say ‘I’m home alone’, don’t say when mom or dad get home, don’t tell them any information, etc.” Parents Need to Talk Self Worth at Home Katie makes the point better than I ever could (one bad word towards the end): Our children need to understand their value beyond what the latest issue of TigerBeat says, or the media who continues flaunting physical attributes over intelligence and character, or a society that puts focus on status and wealth. They also have to realize that just because someone pays you a compliment, that doesn’t mean you should ignore personal boundaries and limits. What does this have to do with the internet? Well, in days past if you wanted to be social you had to go out somewhere. Now you can video chat with any number of people from the comfort of wherever your laptop happens to be – and not just text but full HD video with sound! While innocent children head online in the hopes of meeting cool people, predators with bad intentions are heading online too. As much as we try to monitor their online activity and be honest about the dangers of the internet, the human side of our kids isn’t something we can control. But we can try to influence them to see themselves as not needing to search out the acceptance of complete strangers online. Way easier said than done, but ensuring self-worth is something discussed, encouraged, and celebrated is a step in the right direction. Parental Wake Up Call This post is not a critique of Amanda’s parents. The reality is that cyber bullying/abuse is happening every day, and there are millions of parents that have no clue its happening to their children. Amanda’s story is a wake up call that our children’s online activities may be putting them in danger. My heart goes out to the parents of this girl. As a father of daughters, I can’t imagine what I would do if I found my daughter having to hide in a ditch to avoid a mob or call 911 to report my daughter had attempted suicide by drinking bleach or deal with a child turning to drugs/alcohol/cutting to cope. It would be horrendous if we as parents didn’t re-evaluate our family internet policies in light of this event. And in the end, Amanda’s video was meant to bring attention to her plight and encourage others going through the same thing. We may not be kids, but we can still honour her memory by helping safeguard our children.

    Read the article

  • My Right-to-Left Foot (T-SQL Tuesday #13)

    - by smisner
    As a business intelligence consultant, I often encounter the situation described in this month's T-SQL Tuesday, hosted by Steve Jones ( Blog | Twitter) – “What the Business Says Is Not What the  Business Wants.” Steve posed the question, “What issues have you had in interacting with the business to get your job done?” My profession requires me to have one foot firmly planted in the technology world and the other foot planted in the business world. I learned long ago that the business never says exactly what the business wants because the business doesn't have the words to describe what the business wants accurately enough for IT. Not only do technological-savvy barriers exist, but there are also linguistic barriers between the two worlds. So how do I cope? The adage "a picture is worth a thousand words" is particularly helpful when I'm called in to help design a new business intelligence solution. Many of my students in BI classes have heard me explain ("rant") about left-to-right versus right-to-left design. To understand what I mean about these two design options, let's start with a picture: When we design a business intelligence solution that includes some sort of traditional data warehouse or data mart design, we typically place the data sources on the left, the new solution in the middle, and the users on the right. When I've been called in to help course-correct a failing BI project, I often find that IT has taken a left-to-right approach. They look at the data sources, decide how to model the BI solution as a _______ (fill in the blank with data warehouse, data mart, cube, etc.), and then build the new data structures and supporting infrastructure. (Sometimes, they actually do this without ever having talked to the business first.) Then, when they show what they've built to the business, the business says that is not what we want. Uh-oh. I prefer to take a right-to-left approach. Preferably at the beginning of a project. But even if the project starts left-to-right, I'll do my best to swing it around so that we’re back to a right-to-left approach. (When circumstances are beyond my control, I carry on, but it’s a painful project for everyone – not because of me, but because the approach just doesn’t get to what the business wants in the most effective way.) By using a right to left approach, I try to understand what it is the business is trying to accomplish. I do this by having them explain reports to me, and explaining the decision-making process that relates to these reports. Sometimes I have them explain to me their business processes, or better yet show me their business processes in action because I need pictures, too. I (unofficially) call this part of the project "getting inside the business's head." This is starting at the right side of the diagram above. My next step is to start moving leftward. I do this by preparing some type of prototype. Depending on the nature of the project, this might mean that I simply mock up some data in a relational database and build a prototype report in Reporting Services. If I'm lucky, I might be able to use real data in a relational database. I'll either use a subset of the data in the prototype report by creating a prototype database to hold the sample data, or select data directly from the source. It all depends on how much data there is, how complex the queries are, and how fast I need to get the prototype completed. If the solution will include Analysis Services, then I'll build a prototype cube. 
Analysis Services makes it incredibly easy to prototype. You can sit down with the business, show them the prototype, and have a meaningful conversation about what the BI solution should look like. I know I've done a good job on the prototype when I get knocked out of my chair so that the business user can explore the solution further independently. (That's really happened to me!) We can talk about dimensions, hierarchies, levels, members, measures, and so on with something tangible to look at and without using those terms. It's not helpful to use sample data like Adventure Works or to use BI terms that they don't really understand. But when I show them their data using the BI technology and talk to them in their language, then they truly have a picture worth a thousand words. From that, we can fine tune the prototype to move it closer to what they want. They have a better idea of what they're getting, and I have a better idea of what to build. So right to left design is not truly moving from the right to the left. But it starts from the right and moves towards the middle, and once I know what the middle needs to look like, I can then build from the left to meet in the middle. And that’s how I get past what the business says to what the business wants.

    Read the article

  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!

    Read the article

  • Wireless Problem on Acer Aspire 5610z

    - by Ugur Can Yalaki
    I installed ubuntu 12.04 on my machine, but I can't get wireless connection to work. My computer is Acer Aspire 5610z. I found that some other people that have same computer, face the same problem. Here is some information about it: ****** info trace ****** * uname -a * Linux ucy-Aspire-5610Z 3.8.0-32-generic #47~precise1-Ubuntu SMP Wed Oct 2 16:22:28 UTC 2013 i686 i686 i386 GNU/Linux * lsb_release * Distributor ID: Ubuntu Description: Ubuntu 12.04.3 LTS Release: 12.04 Codename: precise * lspci * 05:00.0 Network controller [0280]: Broadcom Corporation BCM4311 802.11b/g WLAN [14e4:4311] (rev 01) Subsystem: AMBIT Microsystem Corp. Device [1468:0422] Kernel driver in use: b43-pci-bridge 06:01.0 Ethernet controller [0200]: Broadcom Corporation BCM4401-B0 100Base-TX [14e4:170c] (rev 02) Subsystem: Acer Incorporated [ALI] Device [1025:0090] Kernel driver in use: b44 * lsusb * Bus 001 Device 004: ID 04e8:6863 Samsung Electronics Co., Ltd Bus 001 Device 002: ID 5986:0100 Acer, Inc Orbicam Bus 002 Device 002: ID 046d:c52f Logitech, Inc. Wireless Mouse M305 Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub Bus 002 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 003 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 004 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub Bus 005 Device 001: ID 1d6b:0001 Linux Foundation 1.1 root hub * PCMCIA Card Info * PRODID_1="" PRODID_2="" PRODID_3="" PRODID_4="" MANFID=0000,0000 FUNCID=255 * iwconfig * * rfkill * 0: acer-wireless: Wireless LAN Soft blocked: no Hard blocked: no * lsmod * ssb_hcd 12781 0 ssb 51554 2 ssb_hcd,b44 * nm-tool * NetworkManager Tool State: connected (global) Device: usb0 [Wired connection 2] ------------------------------------------- Type: Wired Driver: rndis_host State: connected Default: yes HW Address: Capabilities: Carrier Detect: yes Wired Properties Carrier: on IPv4 Settings: Address: 192.168.42.7 Prefix: 24 (255.255.255.0) Gateway: 192.168.42.129 DNS: 192.168.42.129 IPv6 Settings: Address: ::a05d:a1ff:fea4:1738 Prefix: 64 Gateway: fe80::504d:76ff:fe86:db04 Address: fe80::a05d:a1ff:fea4:1738 Prefix: 64 Gateway: fe80::504d:76ff:fe86:db04 DNS: fe80::504d:76ff:fe86:db04 Device: eth2 ----------------------------------------------------------------- Type: Wired Driver: b44 State: unavailable Default: no HW Address: Capabilities: Carrier Detect: yes Wired Properties Carrier: off * NetworkManager.state * [main] NetworkingEnabled=true WirelessEnabled=true WWANEnabled=true WimaxEnabled=true * NetworkManager.conf * [main] plugins=ifupdown,keyfile dns=dnsmasq [ifupdown] managed=false * interfaces * auto lo iface lo inet loopback * iwlist * * resolv.conf * nameserver 127.0.0.1 * blacklist * [/etc/modprobe.d/blacklist-ath_pci.conf] blacklist ath_pci [/etc/modprobe.d/blacklist-bcm43.conf] blacklist b43 blacklist b43legacy blacklist ssb blacklist bcm43xx blacklist brcm80211 blacklist brcmfmac blacklist brcmsmac blacklist bcma [/etc/modprobe.d/blacklist.conf] blacklist evbug blacklist usbmouse blacklist usbkbd blacklist eepro100 blacklist de4x5 blacklist eth1394 blacklist snd_intel8x0m blacklist snd_aw2 blacklist i2c_i801 blacklist prism54 blacklist bcm43xx blacklist garmin_gps blacklist asus_acpi blacklist snd_pcsp blacklist pcspkr blacklist amd76x_edac * modinfo * filename: /lib/modules/3.8.0-32-generic/kernel/drivers/usb/host/ssb-hcd.ko license: GPL description: Common USB driver for SSB Bus author: Hauke Mehrtens srcversion: E127A51EDC8F44D2C2A8F15 alias: ssb:v4243id0819rev* alias: ssb:v4243id0817rev* 
alias: ssb:v4243id0808rev* depends: ssb intree: Y vermagic: 3.8.0-32-generic SMP mod_unload modversions 686 filename: /lib/modules/3.8.0-32-generic/kernel/drivers/ssb/ssb.ko license: GPL description: Sonics Silicon Backplane driver srcversion: 14621F6EC014F731244437C alias: pci:v000014E4d00004350sv*sd*bc*sc*i* alias: pci:v000014E4d0000432Csv*sd*bc*sc*i* alias: pci:v000014E4d0000432Bsv*sd*bc*sc*i* alias: pci:v000014E4d00004329sv*sd*bc*sc*i* alias: pci:v000014E4d00004328sv*sd*bc*sc*i* alias: pci:v000014E4d00004325sv*sd*bc*sc*i* alias: pci:v000014E4d00004324sv*sd*bc*sc*i* alias: pci:v000014E4d0000A8D6sv*sd*bc*sc*i* alias: pci:v000014E4d00004322sv*sd*bc*sc*i* alias: pci:v000014E4d00004321sv*sd*bc*sc*i* alias: pci:v000014E4d00004320sv*sd*bc*sc*i* alias: pci:v000014E4d00004319sv*sd*bc*sc*i* alias: pci:v000014A4d00004318sv*sd*bc*sc*i* alias: pci:v000014E4d00004318sv*sd*bc*sc*i* alias: pci:v000014E4d00004315sv*sd*bc*sc*i* alias: pci:v000014E4d00004312sv*sd*bc*sc*i* alias: pci:v000014E4d00004311sv*sd*bc*sc*i* alias: pci:v000014E4d00004307sv*sd*bc*sc*i* alias: pci:v000014E4d00004306sv*sd*bc*sc*i* alias: pci:v000014E4d00004301sv*sd*bc*sc*i* depends: intree: Y vermagic: 3.8.0-32-generic SMP mod_unload modversions 686 * udev rules * PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb1:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth0" PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb2:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth1" PCI device 0x14e4:/sys/devices/pci0000:00/0000:00:1e.0/0000:06:01.0/ssb3:0 (b44) SUBSYSTEM=="net", ACTION=="add", DRIVERS=="?*", ATTR{address}=="", ATTR{dev_id}=="0x0", ATTR{type}=="1", KERNEL=="eth*", NAME="eth2" * dmesg * [ 2.385241] ssb: Found chip with id 0x4311, rev 0x01 and package 0x00 [ 2.385256] ssb: Core 0 found: ChipCommon (cc 0x800, rev 0x11, vendor 0x4243) [ 2.385266] ssb: Core 1 found: IEEE 802.11 (cc 0x812, rev 0x0A, vendor 0x4243) [ 2.385276] ssb: Core 2 found: USB 1.1 Host (cc 0x817, rev 0x03, vendor 0x4243) [ 2.385286] ssb: Core 3 found: PCI-E (cc 0x820, rev 0x01, vendor 0x4243) [ 2.448147] ssb: Sonics Silicon Backplane found on PCI device 0000:05:00.0 [ 2.468112] ssb: Found chip with id 0x4401, rev 0x02 and package 0x00 [ 2.468124] ssb: Core 0 found: Fast Ethernet (cc 0x806, rev 0x07, vendor 0x4243) [ 2.468132] ssb: Core 1 found: V90 (cc 0x807, rev 0x03, vendor 0x4243) [ 2.468140] ssb: Core 2 found: PCI (cc 0x804, rev 0x0A, vendor 0x4243) [ 2.508230] ssb: Sonics Silicon Backplane found on PCI device 0000:06:01.0 [ 2.528620] b44 ssb1:0 eth0: Broadcom 44xx/47xx 10/100 PCI ethernet driver ******** done ******** Thank you already for your help.

    Read the article

  • Having a hard time having consecutive animations for an attack

    - by Kelby Styler
    So I've been trying to figure this out for about 8 hours now...It's driving me nuts because I am pretty sure that it is something dead simple that I am just not understanding. I had everything working fine when I was just cycling through the animation: Idle - Attack - Attack 1 - Attack 2. Just in an infinite loop. The problem now is that I want it to go Attack - check if x time passes if ctrl pressed before x passes move to Attack 1, if not move back to Idle - Then either Attack 1 or Idle depending on how long has passed. I've almost gotten it a few time, but something always happens where it falls apart if I press ctrl too fast or after multiple cycles of the animation. Any help would be appreciated, I'm just at my wits end on this one. I've been looking at this so long that I just don't know where to go anymore. Code is below, here is the controller using UnityEngine; using System.Collections; public class MeleeAttack : MonoBehaviour { public int damage; public bool Attack; public bool Attack1; public bool Attack2; public bool Idle; private Animator animator; private int attnum = 0; private float count = 2f; private float timeLeft; //Gives value to damage output void MAttackDmg () { if (Input.GetKeyDown (KeyCode.RightControl) || Input.GetKeyDown (KeyCode.LeftControl)) { switch (attnum) { case (0): Attack = true; damage = 2; animator.SetBool ("Attack", Attack); attnum++; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; case (1): Attack1 = true; damage = 2; animator.SetBool ("Attack1", Attack1); attnum++; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; case (2): Attack2 = true; damage = 2; animator.SetBool ("Attack2", Attack2); attnum = 0; Idle = false; animator.SetBool ("Idle", Idle); timeLeft = count; break; } } if (Input.GetKeyUp (KeyCode.RightControl) || Input.GetKeyUp (KeyCode.LeftControl)) { switch (attnum) { case (0): Debug.Log ("false"); damage = 0; if (timeLeft <= 0f) { Attack2 = false; animator.SetBool ("Attack2", Attack2); Debug.Log ("t1"); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; case (1): Debug.Log ("false1"); damage = 0; if (timeLeft <= 0f) { Debug.Log ("t2"); Attack = false; animator.SetBool ("Attack", Attack); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; case (2): Debug.Log ("false2"); damage = 0; if (timeLeft <= 0f) { Attack1 = false; animator.SetBool ("Attack1", Attack1); Debug.Log ("t3"); Idle = true; animator.SetBool ("Idle", Idle); attnum = 0; timeLeft = count; } break; } } } // Use this for initialization void Awake () { animator = GetComponent<Animator> (); } // Update is called once per frame void Update () { timeLeft -= Time.deltaTime;; MAttackDmg (); } void Start (){ timeLeft = count; } }
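    For comparison, here is a minimal sketch of one way that timing check could be structured. It is not the asker's code, just an illustration that keeps a single combo counter and a single timer instead of three booleans and two switch statements; the animator parameter names mirror the question, while comboWindow and its 0.8-second value are hypothetical tuning choices made for the example.

        using UnityEngine;

        // Illustrative sketch only: advance the combo while the window is still open,
        // otherwise fall back to Idle. comboWindow is a made-up tuning value.
        public class ComboAttackSketch : MonoBehaviour
        {
            public float comboWindow = 0.8f;   // seconds allowed between presses (hypothetical)
            private Animator animator;
            private int comboStep = 0;         // 0 = Idle, 1 = Attack, 2 = Attack1, 3 = Attack2
            private float timeSinceAttack = 0f;

            void Awake()
            {
                animator = GetComponent<Animator>();
            }

            void Update()
            {
                timeSinceAttack += Time.deltaTime;

                if (Input.GetKeyDown(KeyCode.LeftControl) || Input.GetKeyDown(KeyCode.RightControl))
                {
                    // Only chain to the next attack if the previous one is still fresh.
                    comboStep = (comboStep > 0 && timeSinceAttack <= comboWindow)
                        ? (comboStep % 3) + 1
                        : 1;
                    timeSinceAttack = 0f;
                    ApplyState();
                }
                else if (comboStep != 0 && timeSinceAttack > comboWindow)
                {
                    // No follow-up press in time: drop back to Idle.
                    comboStep = 0;
                    ApplyState();
                }
            }

            void ApplyState()
            {
                animator.SetBool("Attack", comboStep == 1);
                animator.SetBool("Attack1", comboStep == 2);
                animator.SetBool("Attack2", comboStep == 3);
                animator.SetBool("Idle", comboStep == 0);
            }
        }

    The point of the sketch is that the whole state machine lives in one integer and one float, so the question "has the combo window expired?" is asked in exactly one place.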

    Read the article

  • Windows Azure Recipe: Enterprise LOBs

    - by Clint Edmonson
    Enterprises are more and more dependent on their specialized internal Line of Business (LOB) applications than ever before. Naturally, the more software they leverage on-premises, the more infrastructure they need manage. It’s frequently the case that our customers simply can’t scale up their hardware purchases and operational staff as fast as internal demand for software requires. The result is that getting new or enhanced applications in the hands of business users becomes slower and more expensive every day. Being able to quickly deliver applications in a rapidly changing business environment while maintaining high standards of corporate security is a challenge that can be met right now by moving enterprise LOBs out into the cloud and leveraging Azure’s Access Control services. In fact, we’re seeing many of our customers (both large and small) see huge benefits from moving their web based business applications such as corporate help desks, expense tracking, travel portals, timesheets, and more to Windows Azure. Drivers Cost Reduction Time to market Security Solution Here’s a sketch of how many Windows Azure Enterprise LOBs are being architected and deployed: Ingredients Web Role – this will host the core of the application. Each web role is a virtual machine hosting an application written in ASP.NET (or optionally php, or node.js). The number of web roles can be scaled up or down as needed to handle peak and non-peak traffic loads. Many Java based applications are also being deployed to Windows Azure with a little more effort. Database – every modern web application needs to store data. SQL Azure databases look and act exactly like their on-premise siblings but are fault tolerant and have data redundancy built in. Access Control – this service is necessary to establish federated identity between the cloud hosted application and an enterprise’s corporate network. It works in conjunction with a secure token service (STS) that is hosted on-premises to establish the corporate user’s identity and credentials. The source code for an on-premises STS is provided in the Windows Azure training kit and merely needs to be customized for the corporate environment and published on a publicly accessible corporate web site. Once set up, corporate users see a near seamless single sign-on experience. Reporting – businesses live and die by their reports and SQL Azure Reporting, based on SQL Server Reporting 2008 R2, can serve up reports with tables, charts, maps, gauges, and more. These reports can be accessed from the Windows Azure Portal, through a web browser, or directly from applications. Service Bus (optional) – if deep integration with other applications and systems is needed, the service bus is the answer. It enables secure service layer communication between applications hosted behind firewalls in on-premises or partner datacenters and applications hosted inside Windows Azure. The Service Bus provides the ability to securely expose just the information and services that are necessary to create a simpler, more secure architecture than opening up a full blown VPN. Data Sync (optional) – in cases where the data stored in the cloud needs to be shared internally, establishing a secure one-way or two-way data-sync connection between the on-premises and off-premises databases is a perfect option. 
It can be very granular, allowing us to specify exactly what tables and columns to synchronize, setup filters to sync only a subset of rows, set the conflict resolution policy for two-way sync, and specify how frequently data should be synchronized Training Labs These links point to online Windows Azure training labs where you can learn more about the individual ingredients described above. (Note: The entire Windows Azure Training Kit can also be downloaded for offline use.) Windows Azure (16 labs) Windows Azure is an internet-scale cloud computing and services platform hosted in Microsoft data centers, which provides an operating system and a set of developer services which can be used individually or together. It gives developers the choice to build web applications; applications running on connected devices, PCs, or servers; or hybrid solutions offering the best of both worlds. New or enhanced applications can be built using existing skills with the Visual Studio development environment and the .NET Framework. With its standards-based and interoperable approach, the services platform supports multiple internet protocols, including HTTP, REST, SOAP, and plain XML SQL Azure (7 labs) Microsoft SQL Azure delivers on the Microsoft Data Platform vision of extending the SQL Server capabilities to the cloud as web-based services, enabling you to store structured, semi-structured, and unstructured data. Windows Azure Services (9 labs) As applications collaborate across organizational boundaries, ensuring secure transactions across disparate security domains is crucial but difficult to implement. Windows Azure Services provides hosted authentication and access control using powerful, secure, standards-based infrastructure. See my Windows Azure Resource Guide for more guidance on how to get started, including links web portals, training kits, samples, and blogs related to Windows Azure.

    Read the article

  • Getting a handle on mobile data

    - by Eric Jensen
    written by Ashok Joshi The proliferation of mobile devices in the corporate world is both a blessing as well as a challenge.  Mobile devices improve productivity and the velocity of business for the end users; on the other hand, IT departments need to manage the corporate data and applications that run on these devices. Oracle Database Mobile Server (DMS for short) provides a simple and effective way to deal with the management challenge.  DMS supports data synchronization between a central Oracle database server and data on mobile devices.  It also provides authentication, encryption and application and device management.  Finally, DMS is a highly scalable solution that can be used to manage hundreds of thousands of devices.   Here’s a simplified outline of how such a solution might work. Each device runs local sync and mgmt agents that handle bidirectional data flow with an Oracle enterprise backend, run remote commands, and provide status to the management console. For example, mobile admins could monitor multiple networks of mobile devices, upgrade their software remotely, and even destroy the local database on a compromised device. DMS supports either Oracle Berkeley DB or SQLite for device-local storage, and runs on a wide variety of mobile platforms. The schema for the device-local database is pretty simple – it contains the name of the application that’s installed on the device as well as details such as product name, version number, time of last access etc. Each mobile user has an account on the monitoring system.  DMS supports authentication via the Oracle database authentication mechanisms or alternately, via an external authentication server such as Oracle Identity Management. DMS also provides the option of encrypting the data on disk as well as while it is being synchronized. Whenever a device connects with DMS, it sends the list of all local application changes to the server; the server updates the central repository with this information.  Synchronization can be triggered on-demand, whenever there’s a change on the device (e.g. new application installed or an existing application removed) or via a rule-based schedule (e.g. every Saturday). Synchronization is very fast and efficient, since only the changes are propagated.  This includes resume capability; should synchronization be interrupted for any reason, the next synchronization will resume where the previous synchronization was interrupted. If the device should be lost or stolen, DMS has the capability to remove the applications and/or data from the device. This ability to control access to sensitive data and applications is critical in the corporate environment. 
The central repository also allows the IT manager to track the kinds of applications that mobile users use and recommend patches and upgrades, while still allowing the mobile user full control over what applications s/he downloads and uses on the device.  This is useful since most devices are used for corporate as well as personal information. In certain restricted use scenarios, the IT manager can also control whether a certain application can be installed on a mobile device.  Should an unapproved application be installed, it can easily be removed the next time the device connects with the central server. Oracle Database mobile server provides a simple, effective and highly secure and scalable solution for managing the data and applications for the mobile workforce.

    Read the article

  • Invitation to the Oracle SE University on December 13-14, 2011

    - by A&C Redaktion
    Dear Oracle Partners, the first Oracle SE University is being launched jointly by Azlan and Oracle, and we cordially invite you to attend. The target audience is the technical contacts of all Oracle partners. In Fulda we will offer you a comprehensive overview of current technologies, products and services, with a focus on Oracle on Oracle, positioning and architecture. Proven software product areas such as Database and Fusion Middleware will be covered, as well as classic hardware topics such as systems, storage and virtualization. The agenda can be found here. Top speakers guarantee high-quality, technically demanding presentations. Real-world project reports bring the core topics closer, so that you can use this knowledge to generate additional revenue. Beyond that, take advantage of the networking opportunities, which are worth their weight in gold in day-to-day business. Logistical information: Date: 13 - 14 December 2011. Location: Fulda, which as Germany's baroque city offers a charming contrast to our technical topics. Above all, Fulda lies almost exactly in the geographic center of Germany and is easy to reach from all over the country via a modern ICE railway station. Hotel Esperanto Kongress- und Kulturzentrum Fulda: from the railway station it is a leisurely two-minute walk around the corner straight into the hotel. For those arriving by car, there is ample parking directly at the hotel. Target audience: SEs, technical sales, technical consultants. Concept of the Oracle SE University: plenary sessions and a keynote with strategic overview topics; breakout sessions (4 sessions in parallel) with deep technical content: technology, project experience, architectures, solution scenarios and much more; networking at a joint dinner on 13 December from 7:00 p.m. In the hotel's own grill restaurant "El Toro Nero" you will be served the Brazilian specialty "Rodizio". Participation in the Oracle SE University, including the joint dinner, is free of charge for you; accommodation costs are borne by yourself. Please register for the Oracle SE University here by 30 November at the latest. We have reserved a block of rooms; please make the booking online yourself. When booking, please quote the keyword „Oracle SE Uni“ to receive the special rate of 102 euros including breakfast. We look forward to welcoming you in Fulda! 
Joachim Hissmann (Manager Key Partner HW, Oracle Deutschland), Birgit Nehring (Director Software & Solution, Tech Data/Azlan) ================================================================= Contacts Azlan: Peter Mosbauer, Tel.: 089 [email protected]; Robert Bacci, Tel.: 089 4700-3018, [email protected] Oracle: Regina Steyer, Tel.: 0211 [email protected]

    Read the article

  • Adventures in Windows 8: Understanding and debugging design time data in Expression Blend

    - by Laurent Bugnion
    One of my favorite features in Expression Blend is the ability to attach a Visual Studio debugger to Blend. First let’s start by answering the question: why exactly do you want to do that? Note: If you are familiar with the creation and usage of design time data, feel free to scroll down to the paragraph titled “When design time data fails”. Creating design time data for your app When a designer works on an app, he needs to see something to design. For “static” UI such as buttons, backgrounds, etc, the user interface elements are going to show up in Blend just fine. If however the data is fetched dynamically from a service (web, database, etc) or created dynamically, most probably Blend is going to show just an empty element. The classical way to design at that stage is to run the application, navigate to the screen that is under construction (which can involve delays, need to log in, etc…), to measure what is on the screen (colors, margins, width and height, etc) using various tools, going back to Blend, editing the properties of the elements, running again, etc. Obviously this is not ideal. The solution is to create design time data. For more information about the creation of design time data by mocking services, you can refer to two talks of mine “Deep dive MVVM” and “MVVM Applied From Silverlight to Windows Phone to Windows 8”. The source code for these talks is here and here. Design time data in MVVM Light One of the main reasons why I developed MVVM Light is to facilitate the creation of design time data. To illustrate this, let’s create a new MVVM Light application in Visual Studio. Install MVVM Light from here: http://mvvmlight.codeplex.com (use the MSI in the Download section). After installing, make sure to read the Readme that opens up in your favorite browser, you will need one more step to install the Project Templates. Start Visual Studio 2012. Create a new MvvmLight (Win8) app. Run the application. You will see a string showing “Welcome to MVVM Light”. In the Solution explorer, right click on MainPage.xaml and select Open in Blend. Now you should see “Welcome to MVVM Light [Design]” What happens here is that Expression Blend runs different code at design time than the application runs at runtime. To do this, we use design-time detection (as explained in a previous article) and use that information to initialize a different data service at design time. To understand this better, open the ViewModelLocator.cs file in the ViewModel folder and see how the DesignDataService is used at design time, while the DataService is used at runtime. In a real-life applicationm, DataService would be used to connect to a web service, for instance. When design time data fails Sometimes however, the creation of design time data fails. It can be very difficult to understand exactly what is happening. Expression Blend is not giving a lot of information about what happened. Thankfully, we can use a trick: Attaching a debugger to Expression Blend and debug the design time code. In WPF and Silverlight (including Windows Phone 7), you could simply attach the debugger to Blend.exe (using the “Managed (v4.5, v4.0) code” option even for Silverlight!!) In Windows 8 however, things are just a bit different. This is because the designer that renders the actual representation of the Windows 8 app runs in its own process. Let’s illustrate that: Open the file DesignDataService in the Design folder. 
Modify the GetData method to look like this: public void GetData(Action<DataItem, Exception> callback) { throw new Exception(); // Use this to create design time data var item = new DataItem("Welcome to MVVM Light [design]"); callback(item, null); } Go to Blend and build the application. The build succeeds, but now the page is empty. The creation of the design time data failed, but we don’t get a warning message. We need to investigate what’s wrong. Close MainPage.xaml Go to Visual Studio and select the menu Debug, Attach to Process. Update: Make sure that you select “Managed (v4.5, v4.0) code” in the “Attach to” field. Find the process named XDesProc.exe. You should have at least two, one for the Visual Studio 2012 designer surface, and one for Expression Blend. Unfortunately in this screen it is not obvious which is which. Let’s find out in the Task Manager. Press Ctrl-Alt-Del and select Task Manager Go to the Details tab and sort the processes by name. Find the one that says “Blend for Microsoft Visual Studio 2012 XAML UI Designer” and write down the process ID. Go back to the Attach to Process dialog in Visual Studio. sort the processes by ID and attach the debugger to the correct instance of XDesProc.exe. Open the MainViewModel (in the ViewModel folder) Place a breakpoint on the first line of the MainViewModel constructor. Go to Blend and open the MainPage.xaml again. At this point, the debugger breaks in Visual Studio and you can execute your code step by step. Simply step inside the dataservice call, and find the exception that you had placed there. Visual Studio gives you additional information which helps you to solve the issue. More info and Conclusion I want to thank the amazing people on the Expression Blend team for being very fast in guiding me in that matter and encouraging me to blog about it. More information about the XDesProc.exe process can be found here. I had to work on a Windows 8 app for a few days without design time data because of an Exception thrown somewhere in the code, and it was really painful. With the debugger, finding the issue was a simple matter of stepping into the code until it threw the exception.   Laurent Bugnion (GalaSoft) Subscribe | Twitter | Facebook | Flickr | LinkedIn
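    To make the locator's role concrete, here is a simplified sketch of the design-time/runtime switch described above, assuming the data-service interface is named IDataService as in the stock MVVM Light template. The code the template actually generates wires this up through its IoC container, so treat this as an illustration of the idea rather than a copy of the generated file.

        using GalaSoft.MvvmLight;

        // Simplified sketch of a ViewModelLocator-style switch. Blend and the
        // Visual Studio designer take the design-time branch, which is why an
        // exception thrown inside DesignDataService only surfaces at design time.
        public class ViewModelLocatorSketch
        {
            private readonly IDataService _dataService;

            public ViewModelLocatorSketch()
            {
                if (ViewModelBase.IsInDesignModeStatic)
                {
                    _dataService = new DesignDataService();
                }
                else
                {
                    _dataService = new DataService();
                }
            }

            public MainViewModel Main
            {
                get { return new MainViewModel(_dataService); }
            }
        }

    Because the design-time branch only ever runs inside the designer's XDesProc.exe process, attaching the Visual Studio debugger to that process, as described above, is the practical way to step through it.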

    Read the article

  • External drive hanging, load average through the roof

    - by Paul Tomblin
    I have an external USB drive, and I run an hourly rsync to it as a backup. This has been working fine for years. This weekend, I got two new 2Tb internal drives, and decided it was time to re-install Ubuntu from scratch to clear out all the old cruft. About once a day since the re-install, the backup script hangs hard, usually in the "rm -rf" I do before the rsync. By the time I notice the problem, my load average is in the stratosphere and climbing fast (one time, it was over 150), but anything that doesn't touch the drive seems to be running fine. One thing that I find suspicious is that something, I don't know what, is doing a "smartctl" and a "hdparm" command on the USB drive. I'm pretty sure smartctl isn't supposed to run on external drives. I can't figure out what's doing it, either. Here's part of ps auwwfx when it's hung: root 7310 0.0 0.0 4248 352 ? D 20:15 0:00 /sbin/hdparm -C /dev/sdd root 7808 0.0 0.0 17372 1632 ? D 20:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 8427 0.0 0.0 4248 356 ? D 20:20 0:00 /sbin/hdparm -C /dev/sdd root 8925 0.0 0.0 17372 1628 ? D 20:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 9529 0.0 0.0 4248 356 ? D 20:25 0:00 /sbin/hdparm -C /dev/sdd root 10026 0.0 0.0 17372 1628 ? D 20:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 10655 0.0 0.0 4248 356 ? D 20:30 0:00 /sbin/hdparm -C /dev/sdd root 11151 0.0 0.0 17372 1632 ? D 20:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 11774 0.0 0.0 4248 356 ? D 20:35 0:00 /sbin/hdparm -C /dev/sdd root 12271 0.0 0.0 17372 1628 ? D 20:35 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 12878 0.0 0.0 4248 352 ? D 20:40 0:00 /sbin/hdparm -C /dev/sdd root 13374 0.0 0.0 17372 1632 ? D 20:40 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 14011 0.0 0.0 4248 352 ? D 20:45 0:00 /sbin/hdparm -C /dev/sdd root 14507 0.0 0.0 17372 1628 ? D 20:45 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 15116 0.0 0.0 4248 352 ? D 20:50 0:00 /sbin/hdparm -C /dev/sdd root 15612 0.0 0.0 17372 1632 ? D 20:50 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 16223 0.0 0.0 4248 352 ? D 20:55 0:00 /sbin/hdparm -C /dev/sdd root 16734 0.0 0.0 17372 1632 ? D 20:55 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 17345 0.0 0.0 4248 352 ? D 21:00 0:00 /sbin/hdparm -C /dev/sdd root 17842 0.0 0.0 17372 1628 ? D 21:00 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 18463 0.0 0.0 4248 352 ? D 21:05 0:00 /sbin/hdparm -C /dev/sdd root 18960 0.0 0.0 17372 1628 ? D 21:05 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 19598 0.0 0.0 4248 356 ? D 21:10 0:00 /sbin/hdparm -C /dev/sdd root 20096 0.0 0.0 17372 1628 ? D 21:10 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 21280 0.0 0.0 4244 356 ? D 21:15 0:00 /sbin/hdparm -C /dev/sdd root 21784 0.0 0.0 17372 1632 ? D 21:15 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 22414 0.0 0.0 4244 356 ? D 21:20 0:00 /sbin/hdparm -C /dev/sdd root 22912 0.0 0.0 17372 1628 ? D 21:20 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 23541 0.0 0.0 4244 356 ? D 21:25 0:00 /sbin/hdparm -C /dev/sdd root 24038 0.0 0.0 17372 1632 ? D 21:25 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd root 24658 0.0 0.0 4244 356 ? D 21:30 0:00 /sbin/hdparm -C /dev/sdd root 25157 0.0 0.0 17372 1628 ? D 21:30 0:00 /usr/sbin/smartctl -a -n standby -A -i /dev/sdd Why is this happening, and how can I stop it?

    Read the article

  • PASS Summit 2011 &ndash; Part III

    - by Tara Kizer
    Well we’re about a month past PASS Summit 2011, and yet I haven’t finished blogging my notes! Between work and home life, I haven’t been able to come up for air in a bit.  Now on to my notes… On Thursday of the PASS Summit 2011, I attended Klaus Aschenbrenner’s (blog|twitter) “Advanced SQL Server 2008 Troubleshooting”, Joe Webb’s (blog|twitter) “SQL Server Locking & Blocking Made Simple”, Kalen Delaney’s (blog|twitter) “What Happened? Exploring the Plan Cache”, and Paul Randal’s (blog|twitter) “More DBA Mythbusters”.  I think my head grew two times in size from the Thursday sessions.  Just WOW! I took a ton of notes in Klaus' session.  He took a deep dive into how to troubleshoot performance problems.  Here is how he goes about solving a performance problem: Start by checking the wait stats DMV System health Memory issues I/O issues I normally start with blocking and then hit the wait stats.  Here’s the wait stat query (Paul Randal’s) that I use when working on a performance problem.  He highlighted a few waits to be aware of such as WRITELOG (indicates IO subsystem problem), SOS_SCHEDULER_YIELD (indicates CPU problem), and PAGEIOLATCH_XX (indicates an IO subsystem problem or a buffer pool problem).  Regarding memory issues, Klaus recommended that as a bare minimum, one should set the “max server memory (MB)” in sp_configure to 2GB or 10% reserved for the OS (whichever comes first).  This is just a starting point though! Regarding I/O issues, Klaus talked about disk partition alignment, which can improve SQL I/O performance by up to 100%.  You should use 64kb for NTFS cluster, and it’s automatic in Windows 2008 R2. Joe’s locking and blocking presentation was a good session to really clear up the fog in my mind about locking.  One takeaway that I had no idea could be done was that you can set a timeout in T-SQL code view LOCK_TIMEOUT.  If you do this via the application, you should trap error 1222. Kalen’s session went into execution plans.  The minimum size of a plan is 24k.  This adds up fast especially if you have a lot of plans that don’t get reused much.  You can use sys.dm_exec_cached_plans to check how often a plan is being reused by checking the usecounts column.  She said that we can use DBCC FLUSHPROCINDB to clear out the stored procedure cache for a specific database.  I didn’t know we had this available, so this was great to hear.  This will be less intrusive when an emergency comes up where I’ve needed to run DBCC FREEPROCCACHE. Kalen said one should enable “optimize for ad hoc workloads” if you have an adhoc loc.  This stores only a 300-byte stub of the first plan, and if it gets run again, it’ll store the whole thing.  This helps with plan cache bloat.  I have a lot of systems that use prepared statements, and Kalen says we simulate those calls by using sp_executesql.  Cool! Paul did a series of posts last year to debunk various myths and misconceptions around SQL Server.  He continues to debunk things via “DBA Mythbusters”.  You can get a PDF of a bunch of these here.  One of the myths he went over is the number of tempdb data files that you should have.  Back in 2000, the recommendation was to have as many tempdb data files as there are CPU cores on your server.  This no longer holds true due to the numerous cores we have on our servers.  Paul says you should start out with 1/4 to 1/2 the number of cores and work your way up from there.  BUT!  
Paul likes what Bob Ward (twitter) says on this topic: 8 or less cores –> set number of files equal to the number of cores Greater than 8 cores –> start with 8 files and increase in blocks of 4 One common myth out there is to set your MAXDOP to 1 for an OLTP workload with high CXPACKET waits.  Instead of that, dig deeper first.  Look for missing indexes, out-of-date statistics, increase the “cost threshold for parallelism” setting, and perhaps set MAXDOP at the query level.  Paul stressed that you should not plan a backup strategy but instead plan a restore strategy.  What are your recoverability requirements?  Once you know that, now plan out your backups. As Paul always does, he talked about DBCC CHECKDB.  He said how fabulous it is.  I didn’t want to interrupt the presentation, so after his session had ended, I asked Paul about the need to run DBCC CHECKDB on your mirror systems.  You could have data corruption occur at the mirror and not at the principal server.  If you aren’t checking for data corruption on your mirror systems, you could be failing over to a corrupt database in the case of a disaster or even a planned failover.  You can’t run DBCC CHECKDB against the mirrored database, but you can run it against a snapshot off the mirrored database.

    Read the article

  • A little primer on using TFS with a small team

    - by johndoucette
    The scenario; A small team of 3 developers mostly in maintenance mode with traditional ASP.net, classic ASP, .Net integration services and utilities with the company’s third party packages, and a bunch of java-based Coldfusion web applications all under Visual Source Safe (VSS). They are about to embark on a huge SharePoint 2010 new construction project and wanted to use subversion instead VSS. TFS was a foreign word and smelled of “high cost” and of an “over complicated process”. Since they had no preconditions about the old TFS versions (‘05 & ‘08), it was fun explaining how simple it was to install a TFS server and get the ball rolling, with or without all the heavy stuff one sometimes associates with such a huge and powerful application management lifecycle product. So, how does a small team begin using TFS? 1. Start by using source control and migrate current VSS source trees into TFS. You can take the latest version or migrate the entire version history. It’s up to you on whether you want a clean start or need quick access to all the version notes and history of the bits. 2. Since most shops are mainly in maintenance mode with existing applications, begin using bug workitems for everything. When you receive an issue/bug from your current tracking system, manually enter the workitem in TFS right through Visual Studio. You can automate the integration to the current tracking system later or replace it entirely. Believe me, this thing is powerful and can handle even the largest of help desks. 3. With new construction, begin work with requirements and task workitems and follow the traditional sprint-based development lifecycle. Obviously, some minor training will be needed, but don’t fear, this is very intuitive and MSDN has a ton of lesson based labs and videos. 4. For the java developers, use the new Team Explorer Everywhere 2010 plugin (recently known as Teamprise). There is a seamless interface in Eclipse, but also a good command-line utility for other environments such as Dreamweaver. 5. Wait to fully integrate the whole workitem/project management/testing process until your team is familiar with the integrated workitems for bugs and code. After a while, you will see the team wanting more transparency into the work they are all doing and naturally, everyone will want workitems to help them organize the chaos! 6. Management will be limited in the value of the reports until you have a fully blown implementation of project planning, construction, build, deployment and testing. However, there are some basic “bug rate” reports and current backlog listings that can provide good information. Some notable explanations of TFS; Work Item Tracking and Project Management - A workitem represents the unit of work within the system which enables tracking of all activities produced by a user, whether it is a developer, business user, project manager or tester. The properties of a workitem such as linked changesets (checked-in code), who updated the data and when, the states and reasons for change, are all transitioned to a data warehouse within TFS for reporting purposes. A workitem can be defines as a "bug", "requirement", test case", or a "change request". They drive the work effort by the individual assigned to it and also provide a key role in defining what needs to be done. Workitems are the things the team needs to do to accomplish a goal. 
Test Case Management - Starting with a workitem known as a "test case", a tester (or developer) can now author and manage test cases within a formal test plan subsystem. Although TFS supports the test case workitem type, there is a new product known as the VS Test Professional 2010 which allows a tester to facilitate manual tests including fast forwarding steps in the process to arrive at the assertion point quickly. This repeatable process provides quick regression tests and can be conducted by the business user to ensure completeness during UAT. In addition, developers no longer can provide a response to a bug with the line "cannot reproduce". With every test run, attachments including the recorded session, captured environment configurations and settings, screen shots, intellitrace (debugging history), and in some cases if the lab manager is being used, a snapshot of the tested environment is available. Version Control - A modern system allowing shared check-in/check-out, excellent merge conflict resolution, Shelvesets (personal check-ins), branching/merging visualization, public workspaces, gated check-ins, security hierarchy capabilities, and changeset/workitem tracking. Knowing what was done with the code by any developer has become much easier to picture and resolve issues. Team Build - Automate the compilation process whether you need it to be whenever a developer checks-in code, periodically such as nightly builds for testers in the morning, or manual builds to be deployed into production. Each build can run through pre-determined tests, perform code analysis to see if the developer conforms to the team standards, and reject the build if either fails. Project Portal & Reporting - Provide management with a dashboard with insight into the project(s). "Where are we" in each step of the way including past iterations and the current burndown rate. Enabling this feature is easy as it seamlessly interfaces with existing SharePoint implementations.
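    As a small illustration of the earlier point about automating the integration with the current tracking system, the TFS 2010 client object model lets you push bugs into a team project from code. The sketch below is assumption-laden: the server URL, project name, and field values are placeholders, and the exact field names depend on your process template, but TfsTeamProjectCollection, WorkItemStore, and WorkItem are the standard client API types.

        using System;
        using Microsoft.TeamFoundation.Client;
        using Microsoft.TeamFoundation.WorkItemTracking.Client;

        // Rough sketch: create a Bug work item from a legacy help-desk ticket.
        // The URL, project name, and field text are hypothetical placeholders,
        // and "Repro Steps" assumes an Agile-style Bug definition.
        class BugImporter
        {
            static void Main()
            {
                var collection = new TfsTeamProjectCollection(
                    new Uri("http://tfsserver:8080/tfs/DefaultCollection"));

                var store = collection.GetService<WorkItemStore>();
                WorkItemType bugType = store.Projects["SharePointPortal"].WorkItemTypes["Bug"];

                var bug = new WorkItem(bugType)
                {
                    Title = "Imported from legacy help desk: login page error"
                };
                bug.Fields["Repro Steps"].Value = "Steps copied from the original ticket.";

                // Validate() returns the fields that would block the save; empty means OK.
                if (bug.Validate().Count == 0)
                {
                    bug.Save();
                    Console.WriteLine("Created bug #" + bug.Id);
                }
            }
        }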

    Read the article

  • How to Play PC Games on Your TV

    - by Chris Hoffman
    No need to wait for Valve’s Steam Machines — connect your Windows gaming PC to your TV and use powerful PC graphics in the living room today. It’s easy — you don’t need any unusual hardware or special software. This is ideal if you’re already a PC gamer who wants to play your games on a larger screen. It’s also convenient if you want to play multiplayer PC games with controllers in your living rom. HDMI Cables and Controllers You’ll need an HDMI cable to connect your PC to your television. This requires a TV with HDMI-in, a PC with HDMI-out, and an HDMI cable. Modern TVs and PCs have had HDMI built in for years, so you should already be good to go. If you don’t have a spare HDMI cable lying around, you may have to buy one or repurpose one of your existing HDMI cables. Just don’t buy the expensive HDMI cables — even a cheap HDMI cable will work just as well as a more expensive one. Plug one end of the HDMI cable into the HDMI-out port on your PC and one end into the HDMI-In port on your TV. Switch your TV’s input to the appropriate HDMI port and you’ll see your PC’s desktop appear on your TV.  Your TV becomes just another external monitor. If you have your TV and PC far away from each other in different rooms, this won’t work. If you have a reasonably powerful laptop, you can just plug that into your TV — or you can unplug your desktop PC and hook it up next to your TV. Now you’ll just need an input device. You probably don’t want to sit directly in front of your TV with a wired keyboard and mouse! A wireless keyboard and wireless mouse can be convenient and may be ideal for some games. However, you’ll probably want a game controller like console players use. Better yet, get multiple game controllers so you can play local-multiplayer PC games with other people. The Xbox 360 controller is the ideal controller for PC gaming. Windows supports these controllers natively, and many PC games are designed specifically for these controllers. Note that Xbox One controllers aren’t yet supported on Windows because Microsoft hasn’t released drivers for them. Yes, you could use a third-party controller or go through the process of pairing a PlayStation controller with your PC using unofficial tools, but it’s better to get an Xbox 360 controller. Just plug one or more Xbox controllers into your PC’s USB ports and they’ll work without any setup required. While many PC games to support controllers, bear in mind that some games require a keyboard and mouse. A TV-Optimized Interface Use Steam’s Big Picture interface to more easily browse and launch games. This interface was designed for using on a television with controllers and even has an integrated web browser you can use with your controller. It will be used on the Valve’s Steam Machine consoles as the default TV interface. You can use a mouse with it too, of course. There’s also nothing stopping you from just using your Windows desktop with a mouse and keyboard — aside from how inconvenient it will be. To launch Big Picture Mode, open Steam and click the Big Picture button at the top-right corner of your screen. You can also press the glowing Xbox logo button in the middle of an Xbox 360 Controller to launch the Big Picture interface if Steam is open. Another Option: In-Home Streaming If you want to leave your PC in one room of your home and play PC games on a TV in a different room, you can consider using local streaming to stream games over your home network from your gaming PC to your television. 
Bear in mind that the game won’t be as smooth and responsive as it would if you were sitting in front of your PC. You’ll also need a modern router with fast wireless network speeds to keep up with the game streaming. Steam’s built-in In-Home Streaming feature is now available to everyone. You could plug a laptop with less-powerful graphics hardware into your TV and use it to stream games from your powerful desktop gaming rig. You could also use an older desktop PC you have lying around. To stream a game, log into Steam on your gaming PC and log into Steam with the same account on another computer on your home network. You’ll be able to view the library of installed games on your other PC and start streaming them. NVIDIA also has their own GameStream solution that allows you to stream games from a PC with powerful NVIDIA graphics hardware. However, you’ll need an NVIDIA Shield handheld gaming console to do this. At the moment, NVIDIA’s game streaming solution can only stream to the NVIDIA Shield. However, the NVIDIA Shield device can be connected to your TV so you can play that streaming game on your TV. Valve’s Steam Machines are supposed to bring PC gaming to the living room and they’ll do it using HDMI cables, a custom Steam controller, the Big Picture interface, and in-home streaming for compatibility with Windows games. You can do all of this yourself today — you’ll just need an Xbox 360 controller instead of the not-yet-released Steam controller. Image Credit: Marco Arment on Flickr, William Hook on Flickr, Lewis Dowling on Flickr

    Read the article

  • A Gentle Introduction to NuGet

    - by Joe Mayo
    Not too long ago, Microsoft released, NuGet, an automated package manager for Visual Studio.  NuGet makes it easy to download and install assemblies, and their references, into a Visual Studio project.  These assemblies, which I loosely refer to as packages, are often open source, and include projects such as LINQ to Twitter. In this post, I'll explain how to get started in using NuGet with your projects to include: installng NuGet, installing/uninstalling LINQ to Twitter via console command, and installing/uninstalling LINQ to Twitter via graphical reference menu. Installing NuGet The first step you'll need to take is to install NuGet.  Visit the NuGet site, at http://nuget.org/, click on the Install NuGet button, and download the NuGet.Tools.vsix installation file, shown below. Each browser is different (i.e. FireFox, Chrome, IE, etc), so you might see options to run right away, save to a location, or access to the file through the browser's download manager.  Regardless of how you receive the NuGet installer, execute the downloaded NuGet.Tools.vsix to install Nuget into visual Studio. The NuGet Footprint When you open visual Studio, observe that there is a new menu option on the Tools menu, titled Library Package Manager; This is where you use NuGet.  There are two menu options, from the Library Package Manager Menu that you can use: Package Manager Console and Package Manager Settings.  I won't discuss Package Manager Settings in this post, except to give you a general idea that, as one of a set of capabilities, it manages the path to the NuGet server, which is already set for you. Another menu, added by the NuGet installer, is Add Library Package Reference, found by opening the context menu for either a Solution Explorer project or a project's References folder or via the Project menu.  I'll discuss how to use this later in the post. The following discussion is concerned with the other menu option, Package Manager Console, which allows you to manage NuGet packages. Gettng a NuGet Package Selecting Tools -> Library Package Manager -> Package Manager Console opens the Package Manager Console.  As you can see, below, the Package Manager Console is text-based and you'll need to type in commands to work with packages. In this post, I'll explain how to use the Package Manager Console to install LINQ to Twitter, but there are many more commands, explained in the NuGet Package Manager Console Commands documentation.  To install LINQ to Twitter, open your current project where you want LINQ to Twitter installed, and type the following at the PM> prompt: Install-Package linqtotwitter If all works well, you'll receive a confirmation message, similar to the following, after a brief pause: Successfully installed 'linqtotwitter 2.0.20'. Successfully added 'linqtotwitter 2.0.20' to NuGetInstall. Also, observe that a reference to the LinqToTwitter.dll assembly was added to your current project. Uninstalling a NuGet Package I won't be so bold as to assume that you would only want to use LINQ to Twitter because there are other Twitter libraries available; I recommend Twitterizer if you don't care for LINQ to Twitter.  So, you might want to use the following command at the PM> prompt to remove LINQ to Twitter from your project: Uninstall-Package linqtotwitter After a brief pause, you'll see a confirmation message similar to the following: Successfully removed 'linqtotwitter 2.0.20' from NuGetInstall. Also, observe that the LinqToTwitter.dll assembly no longer appears in your project references list. 
Sometimes using the Package Manager Console is required for more sophisticated scenarios.  However, LINQ to Twitter doesn't have any dependencies and is a very simple install, so you can use another method of installing graphically, which I'll show you next. Graphical Installations As explained earlier, clicking Add Library Package Reference, from the context menu for either a Solution Explorer project or a project's References folder or via the Project menu opens the Add Library Package Reference window. This window will allow you to add a reference a NuGet package in your project. To the left of the window are a few accordian folders to help you find packages that are either on-line or already installed.  Just like the previous section, I'll assume you are installing LINQ to Twitter for the first time, so you would select the Online folder and click All.  After waiting for package descriptions to download, you'll notice that there are too many to scroll through in a short period of time, over 900 as I write this.  Therefore, use the search box located at the top right corner of the window and type LINQ to Twitter as I've done in the previous figure. You'll see LINQ to Twitter appear in the list. Click the Install button on the LINQ to Twitter entry. If the installation was successful, you'll see a message box display and disappear quickly (or maybe not if your machine is very fast or you blink at that moment). Then you'll see a reference to the LinqToTwitter.dll assembly in your project's references list. Note: While running this demo, I ran into an issue where VS had created a file lock on an installation folder without releasing it, causing an error with "packagename already exists. Skipping..." and then an error describing that it couldn't write to a destination folder.  I resolved the problem by closing and reopening VS. If you open the Add a Library Package Reference window again, you'll see LINQ to Twitter listed in the Recent packages folder. Summary You can install NuGet via the on-line home page with a click of a button.  Nuget provides two ways to work with packages, via console or graphical window.  While the graphical window is easiest, the console window is more powerful. You can now quickly add project references to many available packages via the NuGet service. Joe
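    Once the reference is in place, using the package is ordinary C#. The snippet below is a rough usage sketch based on the classic LINQ to Twitter 2.x public-timeline sample; newer releases require OAuth credentials and have reshaped some types, so treat the property and enum names as approximations to check against the version NuGet actually installs.

        using System;
        using System.Linq;
        using LinqToTwitter;

        // Approximate 2.x-era usage sketch; type and property names may differ
        // in current releases of the library.
        class Program
        {
            static void Main()
            {
                var twitterCtx = new TwitterContext();

                var publicTweets =
                    from tweet in twitterCtx.Status
                    where tweet.Type == StatusType.Public
                    select tweet;

                foreach (var tweet in publicTweets)
                {
                    Console.WriteLine("{0}: {1}", tweet.User.Name, tweet.Text);
                }
            }
        }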

    Read the article

  • How to configure TATA Photon+ EC1261 HUAWEI

    - by user3215
    I'm running ubuntu 10.04. I have a newly purchased TATA Photon+ Internet connection which supports Windows and Mac. On the Internet I found a article saying that it could be configured on Linux. I followed the steps to install it on Ubuntu from this link. I am still not able to get online, and need some help. Also, it is very slow, but I was told that I would see speeds up to 3.1MB. I dont have wvdial installed and cannot install it from apt as I'm not connected to internet Booting from windows I dowloaded "wvdial" .deb package and tried to install on ubuntu but it's ended with dependency problem. Automatically, don't know how, I got connected to internet only for once. Immediately I installed wvdial package after this I followed the tutorials(I could not browse and upload the files here) . From then it's showing that the device is connected in the network connections but no internet connection. Once I disable the device, it won't show as connected again and I'll have to restart my system. Sometimes the device itself not detected(wondering if there is any command to re-read the all devices). output of wvdialconf /etc/wvdial.cof: #wvdialconf /etc/wvdial.conf Editing `/etc/wvdial.conf'. Scanning your serial ports for a modem. ttyS0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyS0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 115200 baud ttyS0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. Modem Port Scan<*1>: S1 S2 S3 WvModem<*1>: Cannot get information for serial port. ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB0<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 2400 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- failed with 9600 baud, next try: 9600 baud ttyUSB1<*1>: ATQ0 V1 E1 -- and failed too at 115200, giving up. WvModem<*1>: Cannot get information for serial port. ttyUSB2<*1>: ATQ0 V1 E1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 Z -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 -- OK ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK ttyUSB2<*1>: Modem Identifier: ATI -- Manufacturer: +GMI: HUAWEI TECHNOLOGIES CO., LTD ttyUSB2<*1>: Speed 9600: AT -- OK ttyUSB2<*1>: Max speed is 9600; that should be safe. ttyUSB2<*1>: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 -- OK Found a modem on /dev/ttyUSB2. Modem configuration written to /etc/wvdial.conf. ttyUSB2<Info>: Speed 9600; init "ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0" output of wvdial: #wvdial --> WvDial: Internet dialer version 1.60 --> Cannot get information for serial port. --> Initializing modem. --> Sending: ATZ ATZ OK --> Sending: ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0 OK --> Sending: AT+CRM=1 AT+CRM=1 OK --> Modem initialized. --> Sending: ATDT#777 --> Waiting for carrier. ATDT#777 CONNECT --> Carrier detected. Starting PPP immediately. 
    --> Starting pppd at Sat Oct 16 15:30:47 2010
    --> Pid of pppd: 5681
    --> Using interface ppp0
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> local IP address 14.96.147.104
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> remote IP address 172.29.161.223
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> primary DNS address 121.40.152.90
    --> pppd: (u;[08]@s;[08]`{;[08]
    --> secondary DNS address 121.40.152.100
    --> pppd: (u;[08]@s;[08]`{;[08]

Output of log messages in /var/log/messages:

    Oct 16 15:29:44 avyakta-desktop pppd[5119]: secondary DNS address 121.242.190.180
    Oct 16 15:29:58 desktop pppd[5119]: Terminating on signal 15
    Oct 16 15:29:58 desktop pppd[5119]: Connect time 0.3 minutes.
    Oct 16 15:29:58 desktop pppd[5119]: Sent 0 bytes, received 177 bytes.
    Oct 16 15:29:58 desktop pppd[5119]: Connection terminated.
    Oct 16 15:30:47 desktop pppd[5681]: pppd 2.4.5 started by root, uid 0
    Oct 16 15:30:47 desktop pppd[5681]: Using interface ppp0
    Oct 16 15:30:47 desktop pppd[5681]: Connect: ppp0 <--> /dev/ttyUSB2
    Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded
    Oct 16 15:30:47 desktop pppd[5681]: CHAP authentication succeeded
    Oct 16 15:30:48 desktop pppd[5681]: local IP address 14.96.147.104
    Oct 16 15:30:48 desktop pppd[5681]: remote IP address 172.29.161.223
    Oct 16 15:30:48 desktop pppd[5681]: primary DNS address 121.40.152.90
    Oct 16 15:30:48 desktop pppd[5681]: secondary DNS address 121.40.152.100

EDIT 1: I tried the following:

    sudo stop network-manager
    sudo killall modem-manager
    sudo /usr/sbin/modem-manager --debug > ~/mm.log 2>&1 &
    sudo /usr/sbin/NetworkManager --no-daemon > ~/nm.log 2>&1 &

Output of mm.log (#vim ~/mm.log):

    ** Message: Loaded plugin Option High-Speed
    ** Message: Loaded plugin Option
    ** Message: Loaded plugin Huawei
    ** Message: Loaded plugin Longcheer
    ** Message: Loaded plugin AnyData
    ** Message: Loaded plugin ZTE
    ** Message: Loaded plugin Ericsson MBM
    ** Message: Loaded plugin Sierra
    ** Message: Loaded plugin Generic
    ** Message: Loaded plugin Gobi
    ** Message: Loaded plugin Novatel
    ** Message: Loaded plugin Nokia
    ** Message: Loaded plugin MotoC

Output of nm.log (#vim ~/nm.log):

    NetworkManager: <info> starting...
    NetworkManager: <info> modem-manager is now available
    NetworkManager: SCPlugin-Ifupdown: init!
    NetworkManager: SCPlugin-Ifupdown: update_system_hostname
    NetworkManager: SCPluginIfupdown: guessed connection type (eth0) = 802-3-ethernet
    NetworkManager: SCPlugin-Ifupdown: update_connection_setting_from_if_block: name:eth0, type:802-3-ethernet, id:Ifupdown (eth0), uuid: 681b428f-beaf-8932-dce4-678ed5bae28e
    NetworkManager: SCPlugin-Ifupdown: addresses count: 1
    NetworkManager: SCPlugin-Ifupdown: No dns-nameserver configured in /etc/network/interfaces
    NetworkManager: nm-ifupdown-connection.c.119 - invalid connection read from /etc/network/interfaces: (1) addresses
    NetworkManager: SCPluginIfupdown: management mode: unmanaged
    NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1)
    NetworkManager: SCPlugin-Ifupdown: device added (path: /sys/devices/pci0000:00/0000:00:14.4/0000:02:02.0/net/eth1, iface: eth1): no ifupdown configuration found.
    NetworkManager: SCPlugin-Ifupdown: devices added (path: /sys/devices/virtual/net/lo, iface: lo)
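(Editor's note, not part of the original question: the wvdialconf output above already found the modem on /dev/ttyUSB2, and #777 is the usual dial string for CDMA/EVDO providers such as Tata Photon+. A minimal /etc/wvdial.conf along the following lines is a common starting point; this is a hedged sketch, not a verified configuration. The username and password are placeholders, Tata typically uses the data-card number for both, and raising the baud rate above the auto-detected 9600 is an assumption, but it is what usually cures the garbled pppd output and very low speeds. "Stupid Mode" simply tells wvdial to start PPP as soon as the carrier answers.

    [Dialer Defaults]
    Modem = /dev/ttyUSB2
    Baud = 460800
    Init1 = ATZ
    Init2 = ATQ0 V1 E1 S0=0 &C1 &D2 +FCLASS=0
    Phone = #777
    Username = <your Photon+ number>
    Password = <your Photon+ number>
    Stupid Mode = 1

With network-manager stopped as in EDIT 1, running wvdial against this file should bring up ppp0 without the binary noise seen in the log.)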

    Read the article

  • A deadlock was detected while trying to lock variables in SSIS

    Error: 0xC001405C at SQL Log Status: A deadlock was detected while trying to lock variables "User::RowCount" for read/write access. A lock cannot be acquired after 16 attempts. The locks timed out.

Have you ever considered variable locking when building your SSIS packages? I expect many people haven't, just because most of the time you never see an error like the one above. I'll try and explain a few key concepts about variable locking and hopefully you never will see that error.

First of all, what is all this variable locking about? Put simply, SSIS variables have to be locked before they can be accessed, and then of course unlocked once you have finished with them. This is baked into SSIS, presumably to reduce the risk of race conditions, but with that comes some additional overhead in that you need to be careful to avoid lock conflicts in some scenarios.

The most obvious place you will come across any hint of locking (no pun intended) is the Script Task or Script Component with their ReadOnlyVariables and ReadWriteVariables properties. These two properties allow you to enter lists of variables to be used within the task, or to put it another way, lists of variables to be locked so that they are available within the task. During the task's pre-execute phase the variables are locked, you then use them during the execute phase when your code runs, and they are unlocked for you during the post-execute phase. So by entering the variable names in one of the two lists, the locking is taken care of for you, and you just read and write to the Dts.Variables collection that is exposed in the task for that purpose.

As you can see in the image above, the variable PackageInt is specified, which means when I write the code inside that task I don't have to worry about locking at all, as shown below.

    public void Main()
    {
        // Set the variable value to something new
        Dts.Variables["PackageInt"].Value = 199;

        // Raise an event so we can play in the event handler
        bool fireAgain = true;
        Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

        Dts.TaskResult = (int)ScriptResults.Success;
    }

As you can see, as well as accessing the variable hassle-free, I also raise an event. Now consider a scenario where I have an event handler as well, as shown below. What if my event handler tries to use the same variable too? Well, obviously for the point of this post, it fails with the error quoted previously. The reason why is clearly illustrated if you consider the following sequence of events.

1. Package execution starts.
2. The Script Task in the Control Flow starts.
3. The Script Task in the Control Flow locks the PackageInt variable as specified in the ReadWriteVariables property.
4. The Script Task in the Control Flow executes its script, and the On Information event is raised.
5. The On Information event handler starts.
6. The Script Task in the On Information event handler starts.
7. The Script Task in the On Information event handler attempts to lock the PackageInt variable (for either read or write, it doesn't matter), but fails because the variable is already locked.

The problem is caused by the event handler task trying to use a variable that is already locked by the task in the Control Flow. Events are always raised synchronously, therefore the task in the Control Flow that is raising the event will not regain control until the event handler has completed, so we really do have an unresolvable locking conflict, better known as a deadlock.
In this scenario we can easily resolve the problem by managing the variable locking explicitly in code, so there is no need to specify anything for the ReadOnlyVariables and ReadWriteVariables properties.

    public void Main()
    {
        // Set the variable value to something new, with explicit lock control
        Variables lockedVariables = null;
        Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
        lockedVariables["PackageInt"].Value = 199;
        lockedVariables.Unlock();

        // Raise an event so we can play in the event handler
        bool fireAgain = true;
        Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);

        Dts.TaskResult = (int)ScriptResults.Success;
    }

Now the package will execute successfully, because the variable lock has already been released by the time the event is raised, so no conflict occurs. For those of you with a SQL Engine background this should all sound strangely familiar, and it boils down to getting in and out as fast as you can to reduce the risk of lock contention, be that SQL pages or SSIS variables.

Unfortunately we cannot always manage the locking ourselves. The Execute SQL Task is very often used in conjunction with variables, either to pass in parameter values or to get results out. Either way the task will manage the locking for you, and it will fail when it cannot lock the variables it requires.

The scenario outlined above is a clear-cut deadlock scenario; both parties are waiting on each other, so it is unresolvable. The mechanism used within SSIS isn't actually that clever, and whilst the message says it is a deadlock, it really just means it tried a few times and then gave up. The last part of the error message is actually the most accurate description of the failure: A lock cannot be acquired after 16 attempts. The locks timed out.

Now this may come across as a recommendation to always manage locking manually in the Script Task or Script Component yourself, but I think that would be an overreaction. It is more of a reminder to be aware that in high concurrency scenarios, especially when sharing variables across multiple objects, locking is an important design consideration.

Update – Make sure you don't use explicit locking while also leaving the variable names in the ReadOnlyVariables and ReadWriteVariables lock lists, otherwise you'll get the deadlock error: you cannot lock a variable twice!
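(One small defensive refinement to the explicit pattern above, not from the original post: wrapping the locked section in try/finally guarantees the unlock runs even if the assignment throws, so a failure cannot leave the variable locked for the rest of the package. The sketch below reuses the same VariableDispenser call shown above; PackageInt is just the example variable name from the post.

    public void Main()
    {
        Variables lockedVariables = null;

        // Lock explicitly, do the minimum work needed, then always unlock.
        Dts.VariableDispenser.LockOneForWrite("PackageInt", ref lockedVariables);
        try
        {
            lockedVariables["PackageInt"].Value = 199;
        }
        finally
        {
            // Releasing here means any event handler raised afterwards can
            // lock the same variable without hitting the deadlock error.
            lockedVariables.Unlock();
        }

        bool fireAgain = true;
        Dts.Events.FireInformation(0, "Script Task Code", "This is the script task raising an event.", null, 0, ref fireAgain);
        Dts.TaskResult = (int)ScriptResults.Success;
    }

The same shape works for read locks with LockOneForRead when an event handler only needs to inspect the value.)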

    Read the article

  • When does "proper" programming no longer matter?

    - by Kai Qing
    I've been a full time programmer for about 8 years now, web based mostly, ranging across weird jobs for clients. Never anything I "want" to do. So my experience is limited to what I've been contracted to do, having no real incentive to master anything in particular.

So here's my scenario, and ultimately what I wonder about... I've been building an Android game in my spare time. It's using the libgdx library, so quite a bit of the heavy lifting is done for me. I don't read much of the docs because unless it's in tutorial format I will just not care, and ultimately most of my questions have already been asked on Stack Overflow. I get along fine and my game works as expected... suspiciously well, even. So much so that I wonder why one should bother to be "proper" when coding if the end result is ultimately the same.

To be more specific, I used a hashtable because I wanted something close to an associative array, with human readable key values. In other places, to achieve similar things, I use a vector. I know libgdx has Vector2 and Vector3 classes, but I've never used them. When I come across weird problems and search Stack Overflow for help, I see a lot of people just reaming the questions that use a certain datatype when another one is technically "proper" - like using an ArrayList because it does not require defined bounds versus re-defining an int[] with new known boundaries. Or even something trivial like this:

    for (int i = 0; i < items.length; i++) {
        // do something
    }

I know it evaluates items.length on every iteration. I just don't care. I know items will never be more than 15 to 20 items. So why bother caring if I evaluate items.length on every iteration?

So I wonder - why does everyone get all up in arms over this? Who cares if I use a less efficient datatype to get the job done? I ran some tests to see how the app performs using the lazy, get it done fast and don't look back method I just described versus the proper, follow the tutorial and use the exact data types suggested by the community. The results: same thing. Average 45 fps. I opened every app on the phone and the Galaxy Tab. Same deal. No difference.

My game is pretty graphics intensive; it's not like it's just a simple thing. I expected it to perform kind of badly since I don't care to optimize image assets or... well, you probably get the idea. I'm making the game for fun. As a joke, really. But in doing so I'm working outside the normal scope of my job, which is to always follow the rules and do it the right way. So to speak, I am without bounds here, and this has caused me to wonder why I ever really care to be "proper".

So I guess my question to you is this: Is there a threshold where it no longer matters to be proper? Is there a lasting, longer term consequence to the lazy, get it done and don't look back route? Is it OK to say, "so long as it gets the job done, I don't care"?

Disclaimer: When I program my game, I am almost always drunk. I do it to remember why I got into this stuff to begin with, because the monotony of client based web work will make you hate being a programmer. I'm having a blast and my game is not crashing, tests well, performs well, looks good on all devices so far and has no noticeable negative impact on any of my testing devices. I expected failure because I was being so drunkenly careless with my code, but to my surprise, it had no noticeable impact. I am now starting to question the need to be careful. Help me regain the ability to care! ... or explain why it's not a bad thing to not care.
Secondary disclaimer: I am aware of the benefits of maintainability, for myself and others. Agreed. But it's not like someone happening across my inefficient int[] loop won't know what it does. To an experienced programmer those kinds of things are clear on sight. I document the complex stuff for myself, knowing I was drunk and will probably need a reminder. Those notes would clarify any confusion for someone who might ever gaze upon my ridiculous game - though the reality is that either I maintain it myself or it fades into time. I'm OK with that. But if it doesn't slow the device down, or crash, then crossing the t's and dotting the i's might actually require more time than it's worth.
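(Editor's note, not part of the original question: to put the loop from the question next to its "proper" counterpart, here is a minimal Java sketch, Java to match the libgdx context; the items array and its size are made up for illustration. For a plain array, .length is a simple field read and the runtime can usually hoist it, which is consistent with the poster measuring no difference at 15-20 elements.

    // Hypothetical example; 'items' stands in for any small array in the game.
    String[] items = new String[20];

    // The "lazy" form the post describes: items.length is read every pass.
    for (int i = 0; i < items.length; i++) {
        // do something with items[i]
    }

    // The "proper" form: hoist the bound into a local once.
    int count = items.length;
    for (int i = 0; i < count; i++) {
        // do something with items[i]
    }

The gap only starts to matter when the bound is an expensive call recomputed on every pass inside a hot loop, which is the kind of threshold the question is really asking about.)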

    Read the article
