Search Results

Search found 19061 results on 763 pages for 'load factor'.

  • Silverlight Cream for March 10, 2011 -- #1058

    - by Dave Campbell
    In this Issue: Ian T. Lackey, Peter Kuhn, WindowsPhoneGeek(-2-), Jesse Liberty(-2-), Martin Krüger, John Papa, Jeremy Likness, Karl Shifflett, and Colin Eberhardt. Above the Fold: Silverlight: "Silverlight TV 65: 3D Graphics" John Papa WP7: "Developing a Windows Phone 7 Jump List Control" Colin Eberhardt Shoutouts: Telerik announced a special sale on their RadControls for WP7... check it out: RadControls for Windows Phone 7 - on Sale from March 16th at a Special Promo Price! From SilverlightCream.com: Prism BootStrapper Load ModuleCatalog Async Ian T. Lackey has a post up about reading the module catalog for Prism from an XML file asynchronously... fun stuff... this is how we kick-started our app... XNA for Silverlight developers: Part 6 - Input (accelerometer) Peter Kuhn has Part 6 of his XNA for Silverlight devs up at SilverlightShow. This post is on the use of the accelerometer... some great diagrams and explanations of its use along with some code to play with... including a 'problems and pitfalls' section, and some good external links. Getting Started with Unit Testing in Silverlight for WP7 WindowsPhoneGeek has an introduction to Unit Testing in general, and then moves into Unit Testing in Silverlight for WP7, providing 3 options with links to the materials and code demonstrating the concepts. Using DockPanel in WP7 Responding to readers' questions, WindowsPhoneGeek's next post is on the DockPanel from the Silverlight Toolkit, and using it in WP7... defined declaratively and in code. Reactive Extensions–More About Chaining Jesse Liberty has post number 10 on Rx up, a follow-on to the last one on Chaining. This time he exercises the chaining aspect of SelectMany. Yet Another Podcast #26–Walt Ritscher In his next post, Jesse Liberty has his 26th 'Yet Another Podcast' up and is chatting with my friend Walt Ritscher. If you don't know who Walt is, check out the links Jesse has on the post... I'm sure you've crossed paths. How to: Create A half square from a regular polygon (triangle) Martin Krüger demonstrates the exact placement of a half-square (isosceles right triangle), formed with a regular polygon in Blend... this is much more involved than I've made it sound... check out his post. Silverlight TV 65: 3D Graphics John Papa has Silverlight TV number 65 up and it's all about the 3D graphics stuff we saw at the Firestarter. John is talking with Danny Riddel, the CEO of Archetype, the company that built the awesome 3D demo we all gushed over. Jounce Part 12: Providing History-Based Back Navigation Jeremy Likness has part 12 of his Jounce exploration up, discussing the stack of navigated pages that Jounce retains and providing a 'go back' functionality... and he provides a good example of using it all. Prism 4 Region Navigation with Silverlight Frame Navigation and Unity Karl Shifflett has a post for all us Prism aficionados... Prism, Unity, and the Silverlight Frame Navigation framework. Some great external links for 'required reading' too. Developing a Windows Phone 7 Jump List Control Colin Eberhardt has an awesome tutorial up for creating a JumpList control for WP7... what a bunch of effort... this is a step-by-step description of designing the control he built and blogged about a while back... and it's still cool! Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight Silverlight 3 Silverlight 4 Windows Phone MIX10

    Read the article

  • 30 Steps to Master ASP.NET MVC Application development

    - by Rajesh Pillai
    Welcome Readers! I am starting a new series on ASP.NET MVC skill building which will be posted over the next couple of weeks. Let me know your thoughts on the content I have planned; a couple of the headings have been taken from the ASP.NET MVC 2 Cookbook (NOTE: only the headings have been taken, the content will not be :)). Do let me know what you would like to see, or any additional inputs or ideas to cover in these topics. The 30 steps are outlined below for quick reference; I will start filling them out quickly. Outlined are the '30' steps to master ASP.NET MVC.
    A Peek Into Model: What is a model?; Different types of models; Presentation/ViewModel; Model Mapping (AutoMapper)
    A Peek into View: How a view works in ASP.NET MVC; View Engine Design; Custom View Engine; View Best Practices; Templated Helpers; Partial Views
    A Peek into Controller: Introduction; Controller Design; Controller Best Practices; Asynchronous Controller; Custom Action Result; Action Filters; Controller Factory to use with IOC
    Routes: Explanation; Routes from the database; Routes from XML; More complex routing
    Master Pages: Basics; Setting Master Page Dynamically
    Working with data in the view: Repeating Views; Array of check boxes; Array of radio buttons; Paged data; CRUD; Client side action; Confirmation Dialog (modal window); jqGrid
    Working with Forms
    Validation: Model Validation with DataAnnotations; Using the xVal validation framework; Client side validation with jQuery Validation; Fluent Validation; Model Binders
    Templating: Create strongly typed helpers using T4; Custom View Templates with T4; Create a custom MVC project template using T4
    IOC: AutoFac; Ninject; Unity Application
    Areas
    jQuery, Ajax and jQuery Plugins
    State Maintenance: Application State; User state; Cookies; Webfarm
    Error Handling: View error handling; Controller error handling; ELMAH (Error Logging Modules and Handlers)
    Authentication and Authorization: User Registration form; SignOn Process; Password Reminder; Membership and Roles; Windows authentication; Restricting access to all pages; Restricting access to selected pages; Restricting access to pages by role; Restricting access to a controller; Restricting access to a selected area
    Profiles and Themes: Using Profiles; Inheriting a Profile; Migrating an anonymous profile; Creating custom themes; Using themes; User personalized themes
    Configuration: Adding custom application settings in web.config; Displaying custom error messages; Accessing other web.config configuration elements; Adding custom configuration elements to web.config; Encrypting web.config sections
    Tracing, Debugging and Logging
    Caching: Caching a whole page; Caching pages based on route details; Caching pages based on browser type and version; Caching pages based on custom strings; Caching partial pages; Caching application data; Object Caching; Using Microsoft Velocity; Using MemCache; Using AppFabric cache
    Localization
    HTTP Handlers and Modules
    Security: XSS/CSRF; AntiForgery; Encoding
    HtmlHelpers: Strongly typed helpers; Writing custom helpers
    Repository Pattern (Data access)
    WF/WCF
    Unit Testing
    Mocking Framework
    Integration Testing
    Load / Performance Testing
    Deployment
    Once again, let me know your thoughts on this. Till then, Enjoy MVC'ing!!!
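    To give a flavor of the ground these steps will cover, here is a small, hypothetical sketch of an ASP.NET MVC 2 controller with a DataAnnotations-validated view model (touching the "Validation" and "A Peek into Controller" steps); the class and property names are made up purely for illustration.

        using System.ComponentModel.DataAnnotations;
        using System.Web.Mvc;

        // Illustrative view model: validation rules are declared with DataAnnotations attributes.
        public class ProductViewModel
        {
            [Required, StringLength(100)]
            public string Name { get; set; }

            [Range(0.01, 10000)]
            public decimal Price { get; set; }
        }

        // Illustrative controller: the typical POST action checks ModelState before persisting.
        public class ProductsController : Controller
        {
            [HttpPost]
            public ActionResult Save(ProductViewModel model)
            {
                if (!ModelState.IsValid)
                    return View(model);              // redisplay the form with validation messages

                // persist the model here (for example through a repository, per the Repository Pattern step)
                return RedirectToAction("Index");
            }
        }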

    Read the article

  • MacGyver Moments

    - by Geoff N. Hiten
    Denny Cherry tagged me to write about my best MacGyver Moment. Usually I ignore blogosphere fluff and just use this space to write what I think is important. However, #MVP10 just ended and I have a stronger sense of community. Besides, where else would I mention that my second best MacGyver moment was making a BIOS jumper out of a soda can. Aluminum is conductive and I didn't have any real jumpers lying around. My best moment is probably my entire home computer network. Every system but one is hand-built, usually cobbled together out of spare parts and 'adapted' from its original purpose. My Primary Domain Controller is a Dell 2300. The Service Tag indicates it was shipped to the original owner in 1999. The box has a PERC/1 RAID controller. I acquired this from a previous employer for $50. It runs Windows Server 2003 Enterprise Edition and does DNS, DHCP, and RADIUS services as a bonus. RADIUS authentication is used for VPN and Wireless access. It is nice to sign in once and be done with it. The Secondary Domain Controller is an old desktop: dual P-III 933 with some extra drives. My VPN box is a P-II 250 with 384MB of RAM and a 21 GB hard drive. I did a P-to-V to my Hyper-V box a year or so ago and retired the hardware again. Dynamic DNS lets me connect no matter how often Comcast shuffles my IP. The Hyper-V box is a desktop system with 8GB RAM and an AMD Athlon 5000+ processor. It cost me less than $500 to put together nearly two years ago. I reasoned that if Vista and Windows 2008 were the same code, then Vista 64-bit certified meant the drivers for Vista would load into Windows 2008. Turns out I was right. Later I added three 1TB drives but wasn't too happy with how that turned out. I recovered two of the drives yesterday and am building an iSCSI storage unit. (Much thanks to StarWind. Great product.) I am using an old AMD 1.1 GHz box with 1.5 GB RAM (cobbled together from three old PCs) as my storage server. The Hyper-V box is slated for an OS rebuild to 2008 R2 once I get the storage system worked out. Maybe in a week or two. A couple of D-Link Gigabit switches tie everything together. Add in the Vonage box, the three PCs, the Wireless-N Access Point, the two notebooks and the Xbox and you have gone from MacGyver to darn near Rube Goldberg. The only thing I really spend money on is power supplies and fans. I buy top-of-the-line for both. I even pull and crimp my own cables. Oh, and if my kids hose up a PC, I have all of their data on a server elsewhere. Every PC and laptop is pretty much interchangeable for email and basic workstation tasks. That helps a lot too. Of course I will tag SQLVariant.

    Read the article

  • Oredev 2011 Trip Report

    - by arungupta
    Oredev had its seventh annual conference in the city of Malmo, Sweden last week. The name "Oredev" comes from the Oresund bridge that connects Malmo with Copenhagen. There were about 1000 attendees, with speakers from all over the world. The first two days were hands-on workshops and the next three days were sessions. There were different tracks such as Java, Windows 8, .NET, Smart Phones, Architecture, Collaboration, and Entrepreneurship. And then there was Xtra(ck), which had interesting sessions not directly related to technology. I gave two slide-free talks in the Java track. The first one showed how to build an end-to-end Java EE 6 application using NetBeans and GlassFish. The complete instructions to build the application are explained in detail here. This 3-tier application used Java Persistence API, Enterprise JavaBeans, Servlet, Contexts and Dependency Injection, JavaServer Faces, and Java API for RESTful Web Services. The source code built during the application can be downloaded here (LINK TBD). The second session, slide-free again, showed how to take a Java EE 6 application into production using a GlassFish cluster. It explained how to: create a 2-instance GlassFish cluster; front-end it with a web server and a load balancer; demonstrate session replication and failover; and monitor the application using JavaScript. The complete instructions for this session are available here. Oredev has an interesting way of collecting attendee feedback. The attendees drop a green, yellow, or red card in a bucket as they walk out of the session. Not everybody votes, but most do. Other than the instantaneous feedback provided on Twitter, this mechanism provides a more coarse-grained feedback loop as well. The first talk had about 67 attendees (with 23 green and 7 yellow) and the second one had 22 (11 green and 11 yellow). The speakers' dinner is a good highlight of the conference. It is arranged in the historic city hall and the mayor welcomed all the speakers. As you can see in the pictures, it is a very royal building with lots of history behind it. Fortunately the dinner was a buffet with a much better variety, unlike last year where only black soup and geese were served, which was quite cultural BTW ;-) The sauna at 85F, skinny dipping in the 35F ocean, and alternating between them at Kallbadhus is always very Swedish. I also spent a short evening at a friend's house socializing with other speakers/attendees, drinking Glogg, and eating Pepperkakor. The welcome packet at the hotel also included cinnamon rolls, recommended to drink with cold milk, for a little more taste of Swedish culture. Something different at this conference was how artists from Image Think were visually capturing all the keynote speakers using images on whiteboards. Here are the images captured for Alexis Ohanian (Reddit co-founder and now running Hipmunk): Unfortunately I could not spend much time engaging with other speakers or attendees because I was busy preparing new hands-on lab material. But I was able to spend some time with Matthew McCullough, Michael Tiberg, Magnus Martensson, Mattias Karlsson, Corey Haines, Patrick Kua, Charles Nutter, Tushara, Pradeep, Shmuel, and several other folks. Here are a few pictures captured from the event: And the complete album here: Thank you Matthias, Emily, and Kathy for putting up a great show and giving me an opportunity to speak at Oredev. I hope to be back next year with a more vibrant representation of Java - the language and the ecosystem!

    Read the article

  • Big Data – Basics of Big Data Analytics – Day 18 of 21

    - by Pinal Dave
    In yesterday's blog post we learned the importance of the various components in the Big Data Story. In this article we will understand the various analytics tasks we try to achieve with Big Data and list the important tools in the Big Data Story. When you have plenty of data around you, what is the first thing that comes to your mind? "What does all this data mean?" Exactly – the same thought comes to my mind as well. I always wanted to know what all the data means and what meaningful information I can extract from it. Most Big Data projects are built to retrieve the various intelligence all this data contains within it. Let us take the example of Facebook. When I look at my friends list on Facebook, I always want to ask many questions such as - On which date do the most of my friends have a birthday? What is the favorite film of most of my friends, so I can talk about it and engage them? What is the most liked place to travel to among my friends? Which is the most disliked cuisine for my friends in India and the USA, so that when they travel, I do not take them there? There are many more questions I can think of. This illustrates how important Big Data analysis is. Here are a few of the kinds of analysis you can use with Big Data. Slicing and Dicing: This means breaking down your data into smaller sets and understanding them one set at a time. This also helps to present various information in a variety of different user-digestible ways. For example, if you have data related to movies, you can slice and dice the data in various ways, such as by actor, movie length, etc. Real Time Monitoring: This is very crucial in social media when an event is happening and you want to measure the impact as it happens. For example, if you are using Twitter when there is a football match, you can watch what fans are saying about the match on Twitter while it is happening. Anomaly Prediction and Modeling: If the business is running normally, that is all right, but if there are signs of trouble, everyone wants to know about them early. Big Data analysis of various patterns can be very helpful to predict the future. It may not always be accurate, but certain hints and signals can be very helpful. For example, lots of data can help conclude that if there is lots of rain, it can increase the sale of umbrellas. Text and Unstructured Data Analysis: Unstructured data is now becoming the norm in the new world and is a big part of the Big Data revolution. It is very important that we Extract, Transform and Load the unstructured data and make meaningful data out of it. For example, by analyzing lots of images, one can find that people like to wear certain colors in certain months. Big Data Analytics Solutions There are many different Big Data Analytics solutions out in the market. It is impossible to list all of them, so I will list a few of them over here. Tableau – This has to be one of the most popular visualization tools out in the big data market. SAS – A high performance analytics and infrastructure company. IBM and Oracle – They have a range of tools for Big Data Analysis. Tomorrow In tomorrow's blog post we will discuss a very important component of the Big Data Ecosystem – the Data Scientist. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
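    As a toy-scale illustration of the slicing-and-dicing idea (real Big Data tools do this over distributed storage rather than in-memory collections), here is a hypothetical C# LINQ sketch that slices a tiny movie data set by actor; all names and numbers are made up.

        using System;
        using System.Linq;

        class SliceAndDiceDemo
        {
            static void Main()
            {
                // A tiny stand-in for a much larger data set.
                var movies = new[]
                {
                    new { Title = "Film A", Actor = "Actor X", LengthMinutes = 120 },
                    new { Title = "Film B", Actor = "Actor X", LengthMinutes = 95 },
                    new { Title = "Film C", Actor = "Actor Y", LengthMinutes = 140 }
                };

                // Slice by actor, then dice each slice into a small summary.
                var byActor = movies
                    .GroupBy(m => m.Actor)
                    .Select(g => new
                    {
                        Actor = g.Key,
                        Count = g.Count(),
                        AverageLength = g.Average(m => m.LengthMinutes)
                    });

                foreach (var row in byActor)
                    Console.WriteLine("{0}: {1} movies, average {2:F0} minutes", row.Actor, row.Count, row.AverageLength);
            }
        }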

    Read the article

  • PASS Summit 2011 &ndash; Part IV

    - by Tara Kizer
    This is the final blog post in my PASS Summit 2011 series. Well okay, a mini-series, I guess. On the last day of the conference, I attended Keith Elmore's and Boris Baryshnikov's (both from Microsoft) "Introducing the Microsoft SQL Server Code Named "Denali" Performance Dashboard Reports", Jeremiah Peschka's (blog|twitter) "Rewrite your T-SQL for Great Good!", and Kimberly Tripp's (blog|twitter) "Isolated Disasters in VLDBs". Keith and Boris talked about the lifecycle of a session, figuring out the running time and the waiting time. They pointed out the transient nature of the reports. You could be drilling into one to uncover a problem, but the session may have ended by the time you've drilled all of the way down. Also, the reports are for troubleshooting live problems and not historical ones. You can use Management Data Warehouse for historical troubleshooting. The reports provide similar benefits to the Activity Monitor, however Activity Monitor doesn't provide context-sensitive drill through. One thing I learned in Keith's and Boris' session was that the buffer cache hit ratio should really never be below 87% due to the read-ahead mechanism in SQL Server. When a page is read, SQL Server will read the entire extent. So for every page read, you get seven more pages read. If you need any of those 7 extra pages, well, they are already in cache. I had a lot of fun in Jeremiah's session about refactoring code, plus I learned a lot. His slides were visually presented in a fun way, which just made for a more upbeat presentation. Jeremiah says that before you start refactoring, you should look at your system. Investigate missing or too many indexes, out-of-date statistics, and other areas that could be leading to your code running slowly. He talked about code standards. He suggested using common abbreviations for aliases instead of one-letter aliases. I'm a big offender of one-letter aliases, but he makes a good point. He said that join order does not matter to the optimizer, but it does matter to those who have to read your code. Now let's get into refactoring! Eliminate useless things – useless/unneeded joins and columns. If you don't need it, get rid of it! Instead of using DISTINCT/JOIN, replace with EXISTS. Simplify your conditions; use UNION or better yet UNION ALL instead of OR to avoid a scan, and use indexes for each union query. Branching logic – instead of IF this, IF that, and on and on… use dynamic SQL (sp_executesql, please!) or use a parameterized query in the application (a short sketch appears at the end of this post). Correlated subqueries – YUCK! Replace with a join. Eliminate repeated patterns. Last, but certainly not least, was Kimberly's session. Kimberly is my favorite speaker. I attended her two-day pre-conference seminar at PASS Summit 2005 as well as a SQL Immersion Event last December. Did I mention she's my favorite speaker? Okay, enough of that. Kimberly's session was packed with demos. I had seen some of it in the SQL Immersion Event, but it was very nice to get a refresher on these, especially since I've got a VLDB with some growing pains. One key takeaway from her session is the idea to use a log shipping solution with a load delay, such as 6, 8, or 24 hours behind the primary. In the case of, say, an accidentally dropped table in a VLDB, we could retrieve it from the secondary database rather than waiting an eternity for a restore to complete. Kimberly let us know that in SQL Server 2012 (it finally has a name!), online rebuilds are supported even if there are LOB columns in your table. This will simplify custom code that intelligently figures out if an online rebuild is possible. There was actually one last time slot for sessions that day, but I had an airplane to catch and my kids to see!
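    On the "parameterized query in the application" suggestion, here is a hedged C# sketch (the table, columns, and connection string are made up): ADO.NET sends a parameterized SqlCommand to SQL Server via sp_executesql, so the plan can be reused instead of compiling a new ad-hoc statement for every value.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class ParameterizedQueryDemo
        {
            static void Main()
            {
                const string connectionString = "Server=.;Database=Sales;Integrated Security=true;";
                const string sql = "SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;";

                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand(sql, conn))
                {
                    // The parameter keeps the statement text constant, so SQL Server can reuse the plan.
                    cmd.Parameters.Add("@CustomerID", SqlDbType.Int).Value = 42;

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                            Console.WriteLine("{0}: {1:d}", reader.GetInt32(0), reader.GetDateTime(1));
                    }
                }
            }
        }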

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It’s Day 1 of #24HOP and it’s been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary which is fun to watch while listening to the current speaker. If you missed the fun today because you’re busy with all that work you’ve got to do – don’t despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun’s not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into 12-hours over two days, so check out the schedule to see if there’s a session that interests you and fits your schedule. I’m pleased to announce that my business colleague Erika Bakse ( Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I’m also pleased to say she’s my daughter!) Multidimensional Thinking: The Presentation My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here’s the abstract: “Whether you’re developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I’ll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model.” If you watched the presentation and want a copy of the slides, you can download a copy here. And you’re welcome to download the slides even if you didn’t watch the presentation, but they’ll make more sense if you did! Kimball All the Way There’s only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources that will get into much more detail about all kinds of scenarios (well beyond the basics!) is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going the right direction because it just didn’t click with me initially. I’ve had years of practice since then and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to them. They will definitely make sure you get it right! Schema Generation One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn’t matter how you create the tables. Use whatever method that you’re comfortable with. But just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this. 
Yes, the subject was Integration Services, but as part of that presentation, I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • Praise for Europe's Smart Metering & Conservation Efforts

    - by caroline.yu
    Recently, a writer at the Home Energy Team praised the UK for its efforts towards smart metering and energy conservation, with an article entitled UK Blazing A Trail With Smart Metering At Home? The article highlighted that the Department of Energy and Climate Change has announced that smart metering will be introduced in the next decade and that all UK households will have smart meters by the year 2020. In fact, the UK is not the only country striving to achieve carbon reduction targets, as many of its European counterparts have begun to take positive steps towards tackling the issue of energy conservation by implementing innovative new metering and billing technologies as well as promoting alternative energy solutions, such as wind and solar power. Since 1997, the states of the European Union, including France, Germany and Spain, have been working towards achieving a target of 12 percent renewable energy electricity by 2010. Germany in particular has made a significant achievement so far, having surpassed the target early in 2007. This success is largely due to the German Renewable Energy Act (EEG), which promoted the use of renewable energy. Recently, analysis from the European Wind Energy Association (EWEA) found that 21 of the EU Member States are meeting or exceeding their national target to achieve 20 percent renewable energy by 2020. However, six states - Belgium, Italy, Luxembourg, Malta, Bulgaria and Denmark - say they will not manage to reach their target through domestic action alone. Bulgaria and Denmark believe that with fresh national initiatives they could meet or exceed their targets, but others, including Italy, may need to import renewable energy from neighboring non-EU countries. Top achievers, according to the EWEA report, are Spain, which believes its renewable energy will reach 22.7 percent by 2020, as well as Germany, Estonia, Greece, Ireland, Poland, Slovakia and Sweden, who will all exceed their targets. "Importantly, the way that this renewable energy is controlled and distributed must be addressed in order to ensure its success," said Bastian Fischer, vice president and general manager EMEA, Oracle Utilities. "A smart grid infrastructure can enable utilities to deal with load distribution in times of increased need and ensure power is always available from these means. A smart grid also underpins the success of metering and billing technologies, such as smart metering, and allows utilities to deal with increased usage data and provide accurate billing." Outside of Europe, Australia has made significant steps towards improving water conservation. The Australian Department of Sustainability and Environment took some of the recent advancements made in the energy sector, including new metering and billing solutions, and applied them to the water industry, enhancing customer service and reducing consumption as a result. The adoption of smart metering in Europe is mainly driven by regulation, but significant technological improvements are being made the world over to change the way we use all kinds of energy. However, the developing markets are lagging behind. One of the primary reasons for this is the lack of infrastructure in place to use as a foundation for setting up energy-saving solutions, which is slowing the adoption of technologies such as smart meters. However, these countries do benefit from having less outdated infrastructure and fewer legacy systems, which are often cited by others as a difficult barrier to deploying new solutions.
As a result, some countries should find new technologies easier to implement and adapt to in the immediate future, without this roadblock.

    Read the article

  • maintaining a growing, diverse codebase with continuous integration

    - by Nate
    I am in need of some help with the philosophy and design of a continuous integration setup. Our current CI setup uses buildbot. When I started out designing it, I inherited (well, not strictly, as I was involved in its design a year earlier) a bespoke CI builder that was tailored to run the entire build at once, overnight. After a while, we decided that this was insufficient, and started exploring different CI frameworks, eventually choosing buildbot. One of my goals in transitioning to buildbot (besides getting to enjoy all the whiz-bang extras) was to overcome some of the inadequacies of our bespoke nightly builder. Humor me for a moment, and let me explain what I have inherited. The codebase for my company is almost 150 unique C++ Windows applications, each of which has dependencies on one or more of a dozen internal libraries (and many on 3rd party libraries as well). Some of these libraries are interdependent, and have depending applications that (while they have nothing to do with each other) have to be built with the same build of that library. Half of these applications and libraries are considered "legacy" and unportable, and must be built with several distinct configurations of the IBM compiler (for which I have written unique subclasses of Compile), and the other half are built with Visual Studio. The code for each compiler is stored in two separate Visual SourceSafe repositories (which I am simply handling using a bunch of ShellCommands, as there is no support for VSS). Our original nightly builder simply took down the source for everything, and built stuff in a certain order. There was no way to build only a single application, or pick a revision, or to group things. It would launch virtual machines to build a number of the applications. It wasn't very robust, it wasn't distributable. It wasn't terribly extensible. I wanted to be able to overcome all of these limitations in buildbot. The way I did this originally was to create entries for each of the applications we wanted to build (all 150ish of them), then create triggered schedulers that could build various applications as groups, and then subsume those groups under an overall nightly build scheduler. These could run on dedicated slaves (no more virtual machine chicanery), and if I wanted I could simply add new slaves. Now, if we want to do a full build out of schedule, it's one click, but we can also build just one application should we so desire. There are four weaknesses of this approach, however. One is our source tree's complex web of dependencies. In order to simplify config maintenance, all builders are generated from a large dictionary. The dependencies are retrieved and built in a not-terribly-robust fashion (namely, keying off of certain things in my build-target dictionary). The second is that each build has between 15 and 21 build steps, which is hard to browse and look at in the web interface, and since there are around 150 columns, takes forever to load (think from 30 seconds to multiple minutes). Thirdly, we no longer have autodiscovery of build targets (although, as much as one of my coworkers harps on me about this, I don't see what it got us in the first place). Finally, the aforementioned coworker likes to constantly bring up the fact that we can no longer perform a full build on our local machine (though I never saw what that got us, either, considering that it took three times as long as the distributed build; I think he is just paranoically phobic of ever breaking the build).
Now, moving to new development, we are starting to use g++ and subversion (not porting the old repository, mind you - just for the new stuff). Also, we are starting to do more unit testing ("more" might give the wrong picture... it's more like any), and integration testing (using python). I'm having a hard time figuring out how to fit these into my existing configuration. So, where have I gone wrong philosophically here? How can I best proceed forward (with buildbot - it's the only piece of the puzzle I have license to work on) so that my configuration is actually maintainable? How do I address some of my design's weaknesses? What really works in terms of CI strategies for large, (possibly over-)complex codebases?

    Read the article

  • F1 Pit Pragmatics

    - by mikef
    "I hate computers. No, really, I hate them. I love the communications they facilitate, I love the conveniences they provide to my life. but I actually hate the computers themselves." - Scott Merrill, 'I hate computers: confessions of a Sysadmin' If Scott's goal was to polarize opinion and trigger raging arguments over the 'real reasons why computers suck', then he certainly succeeded. Impassioned vitriol sits side-by-side with rational debate. Yet Scott's fundamental point is absolutely on the money - Computers are a means to an end. The IT industry is finally starting to put weight behind the notion that good User Experience is an absolutely crucial goal, a cause championed by the likes of Microsoft's Bill Buxton, and which Apple's increasingly ubiquitous touch screen interface exemplifies. However, that doesn't change the fact that, occasionally, you just have to man up and deal with complex systems. In fact, sometimes you just need to sacrifice everything else in the name of performance. You'll find a perfect example of this Faustian bargain in Trevor Clarke's fascinating look into the (diabolical) IT infrastructure of modern F1 racing - high performance, high availability. high everything. To paraphrase, each car has up to 100 sensors, transmitting around 30Gb of data over the course of a race (70% in real-time). This data is then processed by no less than 3 servers (per car) so that the engineers in the pit have access to telemetry, strategy information, timing feeds, a connection back to the operations room in the team's home base - the list goes on. All of this while the servers are exposed "to carbon dust, oil, vibration, rain, heat, [and] variable power". Now, this is admittedly an extreme context where there's no real choice but to use complex systems where ease-of-use is, at best, a secondary concern. The flip-side is seen in small-scale personal computing such as that seen in Apple's iDevices, which are incredibly intuitive but limited in their scope. In terms of what kinds of systems they prefer to use, I suspect that most SysAdmins find themselves somewhere along this axis of Power vs. Usability, and which end of this axis you resonate with also hints at where you think the IT industry should focus its energy. Do you see yourself in the F1 pit, making split-second decisions, wrestling with information flows and reticent hardware to bend them to your will? If so, I imagine you feel that computers are subtle tools which need to be tuned and honed, using the advanced knowledge possessed only by responsible SysAdmins (If you have an iPhone, I suspect it's jail-broken). If the machines throw enigmatic errors, it's the price of flexibility and raw power. Alternatively, would you prefer to have your role more accessible, with users empowered by knowledge, spreading the load of managing IT environments? In that case, then you want hardware and software to have User Experience as their primary focus, and are of the "means to an end" school of thought (you're probably also fed up with users not listening to you when you try and help). At its heart, the dichotomy is between raw power (which might be difficult to use) and ease-of-use (which might have some limitations, but you can be up and running immediately). Of course, the ultimate goal is a fusion of flexibility, power and usability all in one system. It's achievable in specific software environments, and Red Gate considers it a target worth aiming for, but in other cases it's a goal right up there with cold fusion. 
I think it'll be a long time before we see it become ubiquitous. In the meantime, are you Power-Hungry or a Champion of Usability? Cheers, Michael Francis Simple Talk SysAdmin Editor

    Read the article

  • Oracle WebLogic Server and Oracle Database: A Robust Infrastructure for your Applications

    - by Ruma Sanyal
    It has been said that a chain is only as strong as its weakest link. Well, this is also true for your application infrastructure. Not only are the various components that constitute your infrastructure, like the database and application server, critical; the integration between these things [whether it comes out of the box from your vendor or is done in-house] is paramount. Imagine your database being down and your application server not knowing about it, and as a result your application waiting indefinitely for a database response – not a great situation if high availability is critical to your application. Or one of your database nodes is very busy, but your application server doesn't have the intelligence to decipher that – it keeps pinging the busy node when it can in fact get a response from another idle node much faster. This is what Oracle WebLogic and Database integration provides: intelligent integration out of the box. Tight integration between Oracle WebLogic and the Database makes your infrastructure robust enough that not only does each of your infrastructure components provide you with improved RASP [reliability, availability, scalability, and performance], but these components work together to offer improved performance and availability, better resource sharing, inherent scalability, ease of configuration and automated management for your entire infrastructure. Oracle WebLogic Server is the only application server with this degree of integration to Oracle Database. With Oracle WebLogic Server 11g, we introduced Active GridLink for Real Application Clusters (RAC). In conjunction with Oracle Database, this powerful software technology simplifies management, increases availability, and ensures fast connection failover with runtime connection load balancing and affinity capabilities. With the release of Oracle Database 12c this summer, even tighter integration between Oracle WebLogic Server 12c (12.1.2) and Oracle Database 12c has been achieved, and this further optimizes the integration for a global cloud environment. Read about these capabilities in detail in the Oracle WebLogic-Database Integration Whitepaper. Get in-depth 'how-to' details from this YouTube video on the topic from our resident expert, Monica Roccelli.

    Read the article

  • Acr.ExtDirect &ndash; Part 1 &ndash; Method Resolvers

    - by Allan Ritchie
    One of the most important things about any open source library, in my opinion, is being as open as possible while avoiding having your library become invasive to your code/business model design. I personally could never stand marking my business and/or data access code with attributes everywhere. XML also isn't really a fav with too many people these days since it comes with a startup performance hit and requires runtime compiling. I find that there is a whole ton of communication libraries out there currently requiring this (i.e. WCF, RIA, etc). Even though Acr.ExtDirect comes with its own set of attributes, you can piggy-back the [ServiceContract] & [OperationContract] attributes from WCF if you choose. It goes beyond that though; there are two other "out-of-the-box" implementations – Convention based & XML Configuration. Convention – I don't actually recommend using this one since it opens up all of your public instance methods to remote execution calls. XML Configuration – This isn't so bad, but it requires you to enter all of your methods and their operation types into the Castle XML configuration &, as I said earlier, XML isn't the fav these days. So what are your options if you don't like attributes, convention, or XML Configuration? Well, Acr.ExtDirect has its own extension base to give the API a list of methods and components to make available for remote execution.

        public interface IDirectMethodResolver {
            bool IsServiceType(ComponentModel model, Type type);
            string GetNamespace(ComponentModel model);
            string[] GetDirectMethodNames(ComponentModel model);
            DirectMethodType GetMethodType(ComponentModel model, MethodInfo method);
        }

    Now to implement our own method resolver:

        public class TestResolver : IDirectMethodResolver {

            #region IDirectMethodResolver Members

            /// <summary>
            /// Determine if you are calling a service
            /// </summary>
            public bool IsServiceType(ComponentModel model, Type type) {
                return (type.Namespace == "MyBLL.Data");
            }

            /// <summary>
            /// Return the calling name for the client side
            /// </summary>
            public string GetNamespace(ComponentModel model) {
                return model.Name;
            }

            public string[] GetDirectMethodNames(ComponentModel model) {
                switch (model.Name) {
                    case "Products":
                        return new[] {
                            "GetProducts",
                            "LoadProduct",
                            "Save",
                            "Update"
                        };

                    case "Categories":
                        return new[] {
                            "GetProducts"
                        };

                    default:
                        throw new ArgumentException("Invalid type");
                }
            }

            public DirectMethodType GetMethodType(ComponentModel model, MethodInfo method) {
                if (method.Name.StartsWith("Save") || method.Name.StartsWith("Update"))
                    return DirectMethodType.FormSubmit;
                else if (method.Name.StartsWith("Load"))
                    return DirectMethodType.FormLoad;
                else
                    return DirectMethodType.Direct;
            }

            #endregion
        }

    And there you have it, your own custom method resolver. Pretty easy and pretty open ended!
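    Assuming the resolver is picked up from the Castle Windsor container that Acr.ExtDirect already relies on (the exact registration hook may differ in the library), wiring it up could look something like this hedged sketch:

        using Castle.MicroKernel.Registration;
        using Castle.Windsor;

        class Bootstrap
        {
            static void Main()
            {
                // Hypothetical bootstrap: register the custom resolver with the Windsor container.
                // Whether Acr.ExtDirect discovers IDirectMethodResolver from the container exactly
                // like this is an assumption; check the library's setup code for the actual hook.
                var container = new WindsorContainer();
                container.Register(
                    Component.For<IDirectMethodResolver>()
                             .ImplementedBy<TestResolver>());
            }
        }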

    Read the article

  • SQL SERVER – Select the Most Optimal Backup Methods for Server

    - by pinaldave
    Backup and Restore are very interesting concepts, and one should be very familiar with them if you are dealing with a production database. One never knows when a natural disaster or user error will surface, and the first thing everybody wants is to get back to the point in time when things were all fine. Well, in this article I have attempted to answer a few of the common questions related to backup methodology. How to Select a SQL Server Backup Type In order to select a proper SQL Server backup type, a SQL Server administrator needs to understand the difference between the major backup types clearly. Since a picture is worth a thousand words, let me offer it to you below. Select a Recovery Model First The very first question that you should ask yourself is: Can I afford to lose at least a little (15 min, 1 hour, 1 day) worth of data? Resist the temptation to save it all, as that comes with overhead – the majority of businesses outside finance can actually afford to lose a bit of data. If your answer is YES, I can afford to lose some data – select the SIMPLE (default) recovery model in the properties of your database; otherwise you need to select the FULL recovery model. The additional advantage of the Full recovery model is that it allows you to restore the data to a specific point in time versus only to the last backup time in the Simple recovery model, but that exceeds the scope of this article. Backups in SIMPLE Recovery Model In the SIMPLE recovery model you can select to do just Full backups or Full + Differential. Full Backup This is the simplest type of backup that contains all information needed to restore the database and should be your first choice. It is often sufficient for small databases, but note that it makes a big impact on the performance of your database. Full + Differential Backup After a Full backup, a Differential backup picks up all of the changes since the last Full backup. This means if you made Full, Diff, Diff backups – the last Diff backup contains all of the changes and you don't need the previous Differential backup. A Differential backup is obviously smaller and carries less performance overhead. Backups in FULL Recovery Model In the FULL recovery model you can select Full + Transaction Log or Full + Differential + Transaction Log backups. You have to create Transaction Log backups, because that is when the log gets truncated; otherwise your Transaction Log will grow uncontrollably. Full + Transaction Log Backup You would always need to perform a Full backup first, then a series of Transaction Log backups. Note that (in contrast to Differential) you need ALL transaction log backups since the last Full or Diff backup to properly restore. Transaction log backups have the smallest performance overhead and can be performed often. Full + Differential + Transaction Log Backup If you want to ease the performance overhead on your server, you can replace some of the Full backups in the previous scenario with Differentials. Your restore scenario would start from the Full, then the last Differential, then all of the remaining transaction log backups. Typical Backup Scenarios You may say "Well, it is all nice – give me the examples now". As you may already know, my favorite SQL backup software is SQLBackupAndFTP. If you go to the Advanced Backup Schedule form in this program and click the "Load a typical backup plan…" link, it will give you these scenarios that I think are quite common – see the image below.
    The Simplest Way to Schedule SQL Backups I hate to repeat myself, but backup scheduling in SQL Agent leaves a lot to be desired. I do not know a simpler way to schedule your SQL Server backups than in SQLBackupAndFTP – see the image below. The whole backup scheduling with compression, encryption and upload to a Network Folder / HDD / NAS Drive / FTP / Dropbox / Google Drive / Amazon S3 takes just a few minutes – see my previous post for the review. Final Words This post offered an explanation of the major backup types only. For more complicated scenarios or to research other options, as usual, go to MSDN. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Backup and Restore, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
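    If you prefer to drive these backups from your own code rather than SQL Agent or a third-party tool, here is a hedged ADO.NET sketch of the Full + Differential + Transaction Log sequence described above; the database name and file paths are illustrative only, and the log backup assumes the database is in the FULL recovery model.

        using System.Data.SqlClient;

        class BackupSequenceDemo
        {
            const string ConnectionString = "Server=.;Database=master;Integrated Security=true;";

            static void Run(string sql)
            {
                using (var conn = new SqlConnection(ConnectionString))
                using (var cmd = new SqlCommand(sql, conn) { CommandTimeout = 0 })
                {
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }

            static void Main()
            {
                // Full backup: the baseline every restore sequence starts from.
                Run("BACKUP DATABASE [MyDb] TO DISK = N'D:\\Backups\\MyDb_full.bak' WITH INIT;");

                // Differential backup: everything changed since the last full backup.
                Run("BACKUP DATABASE [MyDb] TO DISK = N'D:\\Backups\\MyDb_diff.bak' WITH DIFFERENTIAL, INIT;");

                // Transaction log backup: captures the log since the last log backup and truncates it.
                Run("BACKUP LOG [MyDb] TO DISK = N'D:\\Backups\\MyDb_log.trn' WITH INIT;");
            }
        }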

    Read the article

  • SQL SERVER – Partition Parallelism Support in expressor 3.6

    - by pinaldave
    I am very excited to learn that there is a new version of expressor's data integration platform coming out in March of this year. It will be version 3.6, and I look forward to using it and telling everyone about it. Let me describe a little bit more about what will be so great in expressor 3.6: a greatly enhanced user interface, parallel processing, and bulk artifact upgrading. The User Interface First let me cover the most obvious enhancements. The expressor Studio user interface (UI) has had some significant work done. Kudos to the expressor Engineering team; the entire UI is a visual masterpiece that is very responsive and intuitive. The improvements are more than just eye candy; they provide significant productivity gains when developing expressor Dataflows. Operator shape icons now include a description that identifies the function of each operator, instead of having to guess at the function by the icon. Operator shapes and highlighting depict the current function and status: disabled, enabled, complete, incomplete, and error. Each status displays an appropriate message in the message panel with correction suggestions. Floating or docking property panels provide descriptive tool tips for each property as well as auto resize when adjusting the canvas, without having to search Help or scroll around to get access to the property. Progress and status indicators let you know when an operation is working. A "no limit" canvas with snap-to-grid allows automatic sizing and accurate positioning when you have numerous operators in the Dataflow. The inline tool bar offers quick access to pan, zoom, fit and overview functions. Selecting multiple artifacts with a right-click context allows you to easily manage your workspace more efficiently. Partitioning and Parallel Processing Partitioning allows each operator to process multiple subsets of records in parallel, as opposed to processing all records that flow through that operator in a single sequential set. This capability allows the user to configure the expressor Dataflow to run in a way that most efficiently utilizes the resources of the hardware where the Dataflow is running. Partitions can exist in most individual operators. Using partitions increases the speed of an expressor data integration application, therefore improving performance and load times. With the expressor 3.6 Enterprise Edition, expressor simplifies enabling parallel processing by adding intuitive partition settings that are easy to configure. Bulk Artifact Upgrading Bulk artifact upgrading sounds a bit intimidating, but it actually is not, and it is a welcome addition to expressor Studio. In past releases, users were prompted to confirm that they wanted to upgrade their individual artifacts only when opened. This was a cumbersome and repetitive process. Now with bulk artifact upgrading, a user can easily select an artifact or group of artifacts to upgrade all at once. As you can see, there are many new features and upgrade options that will prove to make expressor Studio quicker and more efficient. I hope I'm not the only one who is excited about all these new upgrades, and I hope that you will try expressor and share your experience with me. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Technology

    Read the article

  • Unit Testing Framework for XQuery

    - by Knut Vatsendvik
    This posting provides a unit testing framework for XQuery using Oracle Service Bus. It allows you to write a test case to run your XQuery transformations in an automated fashion. When the test case is run, the framework returns any differences found in the response. The complete code sample with install instructions can be downloaded from here. Writing a Unit Test You start a new Test Case by creating a Proxy Service from Workshop that comes with Oracle Service Bus. In the General Configuration page select Service Type to be Messaging Service           In the Message Type Configuration page link both the Request & Response Message Type to the TestCase element of the UnitTest.xsd schema                 The TestCase element consists of the following child elements The ID and optional Name element is simply used for reference. The Transformation element is the XQuery resource to be executed. The Input elements represents the input to run the XQuery with. The Output element represents the expected output. These XML documents are “also” represented as an XQuery resource where the XQuery function takes no arguments and returns the XML document. Why not pass the test data with the TestCase? Passing an XML structure in another XML structure is not very easy or at least not very human readable. Therefore it was chosen to represent the test data as an loadable resource in the OSB. However you are free to go ahead with another approach on this if wanted. The XMLDiff elements represents any differences found. A sample on input is shown here. Modeling the Message Flow Then the next step is to model the message flow of the Proxy Service. In the Request Pipeline create a stage node that loads the test case input data.      For this, specify a dynamic XQuery expression that evaluates at runtime to the name of a pre-registered XQuery resource. The expression is of course set by the input data from the test case.           Add a Run stage node. Assign the result of the XQuery, that is to be run, to a context variable. Define a mapping for each of the input variables added in previous stage.     Add a Compare stage. Like with the input data, load the expected output data. Do a compare using XMLDiff XQuery provided where the first argument is the loaded output test data, and the second argument the result from the Run stage. Any differences found is replaced back into the test case XMLDiff element. In case of any unexpected failure while processing, add an Error Handler to the Pipeline to capture the fault. To pass back the result add the following Insert action In the Response Pipeline. A sample on output is shown here.
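    Outside of OSB, the core idea of the Compare stage – run the transformation, then diff the actual output against the expected output – can be illustrated with this hypothetical C# LINQ to XML sketch (it is not part of the framework above, just the same concept in miniature):

        using System;
        using System.Xml.Linq;

        class XmlCompareDemo
        {
            static void Main()
            {
                XElement expected = XElement.Parse("<order><id>42</id><total>10.00</total></order>");
                XElement actual   = RunTransformation();   // stands in for executing the XQuery resource

                bool matches = XNode.DeepEquals(Normalize(expected), Normalize(actual));
                Console.WriteLine(matches ? "PASS" : "FAIL: output differs from expected");
            }

            // Placeholder for the transformation under test.
            static XElement RunTransformation() =>
                XElement.Parse("<order><id>42</id><total>10.00</total></order>");

            // Re-parse so insignificant whitespace does not affect the structural comparison.
            static XElement Normalize(XElement e) =>
                XElement.Parse(e.ToString(SaveOptions.DisableFormatting));
        }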

    Read the article

  • I got my z-5 Logitech speakers to work, but whenever I restart, I have to reconfigure them

    - by The Bill
    This is the content of my alsa-base.conf file (for some reason, the entries preceded by # are bolded--anyway):

      # autoloader aliases
      install sound-slot-0 /sbin/modprobe snd-card-0
      install sound-slot-1 /sbin/modprobe snd-card-1
      install sound-slot-2 /sbin/modprobe snd-card-2
      install sound-slot-3 /sbin/modprobe snd-card-3
      install sound-slot-4 /sbin/modprobe snd-card-4
      install sound-slot-5 /sbin/modprobe snd-card-5
      install sound-slot-6 /sbin/modprobe snd-card-6
      install sound-slot-7 /sbin/modprobe snd-card-7
      # Cause optional modules to be loaded above generic modules
      install snd /sbin/modprobe --ignore-install snd $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-ioctl32 ; /sbin/modprobe --quiet --use-blacklist snd-seq ; }
      # Workaround at bug #499695 (reverted in Ubuntu see LP #319505)
      install snd-pcm /sbin/modprobe --ignore-install snd-pcm $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-pcm-oss ; : ; }
      install snd-mixer /sbin/modprobe --ignore-install snd-mixer $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-mixer-oss ; : ; }
      install snd-seq /sbin/modprobe --ignore-install snd-seq $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; /sbin/modprobe --quiet --use-blacklist snd-seq-oss ; : ; }
      # install snd-rawmidi /sbin/modprobe --ignore-install snd-rawmidi $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq-midi ; : ; }
      # Cause optional modules to be loaded above sound card driver modules
      install snd-emu10k1 /sbin/modprobe --ignore-install snd-emu10k1 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-emu10k1-synth ; }
      install snd-via82xx /sbin/modprobe --ignore-install snd-via82xx $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist snd-seq ; }
      # Load saa7134-alsa instead of saa7134 (which gets dragged in by it anyway)
      install saa7134 /sbin/modprobe --ignore-install saa7134 $CMDLINE_OPTS && { /sbin/modprobe --quiet --use-blacklist saa7134-alsa ; : ; }
      # Prevent abnormal drivers from grabbing index 0
      options bt87x index=-2
      options cx88_alsa index=-2
      options saa7134-alsa index=-2
      options snd-atiixp-modem index=-2
      options snd-intel8x0m index=-2
      options snd-via82xx-modem index=-2
      options snd-usb-audio index=-2
      options snd-usb-caiaq index=-2
      options snd-usb-ua101 index=-2
      options snd-usb-us122l index=-2
      options snd-usb-usx2y index=-2
      alias snd-card-0 snd-usb-audio
      alias snd-card-1 snd-hda-intel
      options snd-usb-audio index=0
      options snd-hda-intel index=1
      # Ubuntu #62691, enable MPU for snd-cmipci
      options snd-cmipci mpu_port=0x330 fm_port=0x388
      # Keep snd-pcsp from being loaded as first soundcard
      options snd-pcsp index=-2
      options snd-usb-audio index=-2
      options snd-usb-audio index=0
      alias snd-card-0 snd-usb-audio
      alias snd-card-1 snd-hda-intel
      options snd-hda-intel index=1

    I deleted a line that said something like "#Keep usb-audio from being loaded as first soundcard" and that made the speakers work for the first time (before this, they never showed up). I also added the last four lines. Anyway, what can I add to this so that I don't have to reconfigure them each time I restart? Currently, I have to open Sound Settings, then under the hardware tab, select Analog Stereo Output, and then unplug my USB speakers and plug them back in. This makes them pop up so that I can see them. Otherwise, it will not show my Z-5 speakers as a device that can be configured.

    Read the article

  • Adding Output Caching and Expire Header in IIS7 to improve performance

    - by Renso
    The problem: Images and other static files will not be cached unless you tell IIS to cache them. In IIS7 it is remarkably easy to do this. Web pages are becoming increasingly complex, with more scripts, style sheets, images, and Flash on them. A first-time visit to a page may require several HTTP requests to load all the components. By using Expires headers these components become cacheable, which avoids unnecessary HTTP requests on subsequent page views. Expires headers are most often associated with images, but they can and should be used on all page components including scripts, style sheets, and Flash. Without them, every image and every other piece of static content, like JavaScript and CSS files, will be reloaded on every page request. If the content does not change frequently, why not cache it and avoid the network traffic?! The solution: In IIS7 there are two ways to cache content: using the web.config file to set caching for all static content, and setting caching per folder or file in IIS7 itself, which gives you that extra level of granularity. Web.config: In IIS7, Expires headers can be enabled in the system.webServer section of the web.config file:   <staticContent>     <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="1.00:00:00" />   </staticContent> In the above example a cache expiration of 1 day was added. It will be a full day before the content is downloaded from the web server again. To expire the content on a specific date:   <staticContent>     <clientCache cacheControlMode="UseExpires" httpExpires="Sun, 31 Dec 2011 23:59:59 UTC" />   </staticContent> This will expire the content on December 31st 2011, one second before midnight. Issues/Challenges: Once a file has been set to be cached, it won't be updated in the user's browser until the cache expiration passes. So be careful here with content that may change frequently, for example during development. Typically in development you don't want to cache at all, for testing purposes. You could also suffix file names with a timestamp or version number to force a reload into the user's browser cache. IIS7 Expire Web Content: Open up your web app in IIS. Open up the sub-folders until you find the folder or file you want to add an expiration date to. In IIS6 you used to right-click and select Properties; no such luck in IIS7, so double-click HTTP Response Headers instead. Once the HTTP Response Headers window loads, look to the Actions navigation bar on the right and, all the way at the top, select SET COMMON HEADERS. The Enable HTTP keep-alive option will already be pre-selected. Go ahead and add the appropriate expiration header to the file or folder. Note that if you selected a folder, it will apply that setting to all images inside that folder and all nested content, even subfolders. So, two approaches, depending on what level of granularity you need.
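    If you want the per-folder granularity but would rather keep it in source control than click through IIS Manager, the same clientCache element can also be scoped with a location element in web.config. This is only a sketch: the Content/images path and the 7-day max age are made-up values, and it assumes the staticContent section is not locked at the server level.

      <configuration>
        <location path="Content/images">
          <system.webServer>
            <staticContent>
              <clientCache cacheControlMode="UseMaxAge" cacheControlMaxAge="7.00:00:00" />
            </staticContent>
          </system.webServer>
        </location>
      </configuration>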

    Read the article

  • ResourceSerializable: an alternate to ORM and ActiveRecord

    - by Levi Morrison
    A few opinionated reasons I don't like the traditional ORM and ActiveRecord patterns: They work only with a database. Sometimes I'm dealing with objects from an API and other objects from a database. All the implementations I have seen don't allow for that. Feel free to clue me in if I'm wrong on this. They are brittle. Changes in the database will likely break your implementation. Some implementations can help reduce this, but a few of the ones I've seen don't. Their very design is influenced by the database. If I want to switch to using an API, I'll have to redesign the object to get it to work (likely). It seems to violate the single-responsibility principle. They know what they are and how they act, but they also know how they are created, destroyed and saved? Seems a bit much. What about an approach that is somewhat more familiar in PHP: implementing an interface? In PHP 5.4, we'll have the JsonSerializable interface that defines the data to be json_encoded, so users will become accustomed to this type of thing. What if there was a ResourceSerializable interface? This is still an ORM by name, but certainly not by tradition. interface ResourceSerializable { /** * Returns the id that identifies the resource. */ function resourceId(); /** * Returns the 'type' of the resource. */ function resourceType(); /** * Returns the data to be serialized. */ function resourceSerialize(); } Things might be poorly named, I'll take suggestions. Notes: ResourceId will work for APIs and databases. As long as your primary key in the database is the same as the resource ID in the API, there is no conflict. All of the APIs I've worked with have a unique ID for the resource, so I don't see any issues there. ResourceType is the group or type associated with the resource. You can use this to map the resource to an API call or a database table. If the ResourceType was person, it could map to /api/1/person/{resourceId} and the table persons (or people, if it's smart enough). resourceSerialize() returns the data to be stored. Keys would identify API parameters and database table columns. This also seems easier to test than ActiveRecord/ORM implementations. I haven't done much automated testing on traditional ActiveRecord/ORM implementations, so this is merely a guess. But it seems that being able to create objects independently of the library would help me. I don't have to use load() to get an existing resource, I can simply create one and set all the right properties. This is not so easy in the ActiveRecord/ORM implementations I've dealt with. Downsides: You need another object to serialize it. This also means you have more code in general, as you have to use more objects. You have to map resource types to API calls and database tables. This is even more work, but some ORMs and ActiveRecord implementations require you to map objects to table names anyway. Are there other downsides that you see? Does this seem feasible to you? How would you improve it? Note: I almost asked this on StackOverflow because it might be too vague for their standards, but I'm still not really familiar with programmers.stackexchange.com, so please help me improve my question if it doesn't shape up to standards here.
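    To make the idea concrete, here is a minimal sketch of a class implementing the interface above. The Person class, its properties, and the 'person' type string are hypothetical, invented purely for illustration:

      class Person implements ResourceSerializable
      {
          private $id;
          private $name;
          private $email;

          public function __construct($id, $name, $email)
          {
              $this->id    = $id;
              $this->name  = $name;
              $this->email = $email;
          }

          /** Matches the primary key in the persons table or the API resource ID. */
          public function resourceId()
          {
              return $this->id;
          }

          /** Used to map the resource to /api/1/person/{resourceId} or the persons table. */
          public function resourceType()
          {
              return 'person';
          }

          /** Keys identify API parameters and database table columns. */
          public function resourceSerialize()
          {
              return array('name' => $this->name, 'email' => $this->email);
          }
      }

    A separate mapper or gateway object would then take any ResourceSerializable and decide whether to POST it to the API or INSERT it into a table, which is what keeps the persistence concern out of the object itself.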

    Read the article

  • GoldenGate 12c - MySQL Active-Active Replication Setup

    - by Jinyu Wang-Oracle
    Active-active (also called master-master or bi-directional) replication captures data changes from two or more systems and replicates the changes to synchronize the data. Active-active replication is often needed for high availability, load balancing and scaling-out purposes. Oracle GoldenGate is known to be one of the first and best replication tools for handling active-active replication. As of Oracle GoldenGate 12c, it provides the following (refer to the Oracle GoldenGate 12.1.2 documentation, Configuring Oracle GoldenGate for Active-Active High Availability, for more information): robust loop-back prevention; comprehensive conflict resolution and detection support; heterogeneous support across different database versions and operating systems. Oracle GoldenGate supports active-active configurations for DB2 on z/OS, LUW, and IBM i, MySQL, Oracle, SQL/MX, SQL Server, Sybase, and Teradata. However, the setup is different from database to database. In this example, I will show you how to set up active-active data replication between two MySQL database instances. The example below sets up active-active replication between a MySQL 5.5 and a MySQL 5.6 instance, as follows:

    MySQL 5.5 (Manager Port: 15105)
    Extract
      EXTRACT demoex01
      SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
      DBOPTIONS CONNECTIONPORT 3305
      DBOPTIONS HOST oraclelinux6.localdomain
      SOURCEDB test USERID root, PASSWORD mysql
      EXTTRAIL ./dirdat/extract/de
      TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
      REPORTROLLOVER AT 05:30 ON saturday
      TABLE test.TCUSTMER;
      TABLE test.TCUSTORD;
    Pump
      EXTRACT demopm01
      RMTHOST localhost, MGRPORT 15106, COMPRESS, TIMEOUT 30
      RMTTRAIL ./dirdat/replicat/ps
      PASSTHRU
      TABLE test.TCUSTMER;
      TABLE test.TCUSTORD;
    Replicat
      replicat demorp01
      setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.5.38/data/mysql.sock')
      dboptions host oraclelinux6.localdomain, connectionport 3305
      targetdb test, userid root, password mysql
      sourcedefs ./dirdat/replicat/democust.def
      discardfile ./dirrpt/demprp01.dsc, purge
      REPERROR (DEFAULT, ABEND)
      REPERROR(1062, IGNORE)
      map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, region_code="region code");
      map test.TCUSTORD, target test.TCUSTORD;

    MySQL 5.6 (Manager Port: 15106)
    Replicat
      replicat demorp01
      setenv (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
      dboptions host oraclelinux6.localdomain, connectionport 3306
      targetdb test, userid root, password mysql
      --assumetargetdefs
      sourcedefs ./dirdat/replicat/democust.def
      discardfile ./dirrpt/demprp01.dsc, purge
      map test.TCUSTMER, target test.TCUSTMER, colmap(usedefaults, "region code"=region_code);
      map test.TCUSTORD, target test.TCUSTORD;
    Extract
      EXTRACT demoex01
      SETENV (MYSQL_UNIX_PORT='/home/oracle/software/mysql_5.6.19/data/mysql.sock')
      DBOPTIONS CONNECTIONPORT 3306
      DBOPTIONS HOST oraclelinux6.localdomain
      SOURCEDB test USERID root, PASSWORD mysql
      EXTTRAIL ./dirdat/extract/de
      TRANLOGOPTIONS ALTLOGDEST "/usr/local/mysql56/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl
      TABLE test.TCUSTMER;
      TABLE test.TCUSTORD;
    Pump
      EXTRACT demopm01
      RMTHOST localhost, MGRPORT 15105, COMPRESS, TIMEOUT 30
      RMTTRAIL ./dirdat/replicat/ps
      PASSTHRU
      TABLE test.TCUSTMER;
      TABLE test.TCUSTORD;

    The setup parameters are quite self-explanatory. The key point of the setup is to avoid replication data looping.
Oracle GoldenGate for MySQL uses the information in the replication checkpoint table to identify the transactions applied by Replicats, and thus avoids extracting those transactions with the Oracle GoldenGate Extracts. The example setup in the Extract on the MySQL 5.5 instance is shown as follows: TRANLOGOPTIONS ALTLOGDEST "/home/oracle/software/mysql_5.5.38/data/binlog/bin-log.index" FILTERTABLE test.checkpoint_tbl. Setting up active-active replication is often more complicated than this and requires additional considerations; I will elaborate on these in follow-up discussions. 

    Read the article

  • Wifi installation issues on Ubuntu 10.10

    - by SlyrNemesis
    Linux newbie here, anyway so here is the problem, I run Ubuntu 10.10 and I have a Sitecom 300N x2 Wireless Network dongle with chipset 8192SU, I used ndiswrapper to install my Windows Wireless driver because Sitecom doesn't have a linux driver, it says hardware present but it doesn't find any Wireless networks, nor does it connect to one. What can I do? The command "dmesg | grep ndis" gave this output in the terminal: [ 9.999954] ndiswrapper version 1.56 loaded (smp=yes, preempt=no) [ 11.111901] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisAllocateNetBufferAndNetBufferList' [ 11.111973] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMIndicateReceiveNetBufferLists' [ 11.112099] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMRegisterMiniportDriver' [ 11.112161] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisAllocateMdl' [ 11.112220] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMDeregisterMiniportDriver' [ 11.112280] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisFreeNetBufferListPool' [ 11.112339] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisAllocateNetBufferListPool' [ 11.112399] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisFreeMdl' [ 11.112457] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMAllocatePort' [ 11.112515] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMNetPnPEvent' [ 11.112573] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMFreePort' [ 11.112631] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMSendNetBufferListsComplete' [ 11.112780] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMSetMiniportAttributes' [ 11.112848] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisOpenConfigurationEx' [ 11.112946] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMIndicateStatusEx' [ 11.113017] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisMOidRequestComplete' [ 11.113112] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisAllocateMemoryWithTagPriority' [ 11.113200] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisAllocateIoWorkItem' [ 11.113271] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisFreeIoWorkItem' [ 11.113342] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisQueueIoWorkItem' [ 11.113413] ndiswrapper (import:233): unknown symbol: NDIS.SYS:'NdisFreeNetBufferList' [ 11.113481] ndiswrapper (import:233): unknown symbol: WDFLDR.SYS:'WdfVersionBind' [ 11.113547] ndiswrapper (import:233): unknown symbol: WDFLDR.SYS:'WdfVersionBindClass' [ 11.113613] ndiswrapper (import:233): unknown symbol: WDFLDR.SYS:'WdfVersionUnbindClass' [ 11.113680] ndiswrapper (import:233): unknown symbol: WDFLDR.SYS:'WdfVersionUnbind' [ 11.113742] ndiswrapper (load_sys_files:206): couldn't prepare driver 'net8192su' [ 11.148888] ndiswrapper (load_wrap_driver:108): couldn't load driver net8192su; check system log for messages from 'loadndisdriver' [ 11.365200] usbcore: registered new interface driver ndiswrapper [ 12.818573] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.819183] Modules linked 
in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.819796] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.820505] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.821115] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.821726] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.822339] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.822948] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.823560] Modules 
linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy [ 12.824204] Modules linked in: snd_wavefront snd_emu10k1(+) snd_cs4236 snd_usb_audio snd_wss_lib snd_opl3_lib snd_ac97_codec ac97_bus nouveau(+) snd_pcm i915 snd_usbmidi_lib snd_util_mem snd_page_alloc snd_hwdep snd_mpu401 snd_mpu401_uart snd_seq_midi snd_rawmidi ttm snd_seq_midi_event drm_kms_helper snd_seq ppdev snd_timer snd_seq_device drm ndiswrapper snd parport_pc emu10k1_gp intel_agp ns558 gameport soundcore i2c_algo_bit shpchp lp video output agpgart parport usbhid hid 8139too 8139cp mii floppy

    Read the article

  • The All New Hotmail Looks Very Impressive [Video Tour]

    - by Gopinath
    With loads of new features being introduced into GMail every now and then, Microsoft can’t sit and relax any more. Microsoft realized this and worked hard to introduce really impressive features in the upcoming version of Windows Live Hotmail that was previewed a couple of days ago. Most of the new features announced in the upcoming version focus on an important need of email users – de-cluttering the mailbox and managing email overload easily. Here are the highlights of the new features. New Features Sweep away clutter – This is the most impressive of the new features. It allows you to manage email overload. If you’ve subscribed to a newsletter but decided not to allow it into your inbox, you can activate the sweep feature to move all the messages of the newsletter into a folder other than your inbox. This may sound similar to the filters option in GMail, but the workflow is very easy in Hotmail. Quickly find messages – Easy-to-use options are provided to see mails in separate views, like mails from contacts, social networking mail, mails from e-mail subscription services, etc. Now it’s easy to prioritize email checking however you wish. I prefer to check mails from my contacts first, then social networking messages and then the newsletter subscriptions. Improved spam detection – The spam detection rules are tightened for better spam protection, and Hotmail also learns from user actions to effectively catch spam. No more mailbox storage restrictions – With a smart decision by Microsoft, users no longer need to worry about the storage restrictions of their mailbox – large attachments in Hotmail can be stored in Windows Live SkyDrive. With Hotmail, we’ve combined the simplicity of sending photos through email with the power of Windows Live SkyDrive so that you can send up to 200 photos, each up to 50 MB in size, all in a single email. You can send all your vacation photos at once without worrying about attachment limits. Excellent integration with Office Web Apps – Viewing and editing of Office documents attached to emails are made very easy by integrating Office Web Apps with Hotmail. When you receive a document/presentation/spreadsheet in Hotmail, you can view it, edit it, save it, or even send the modified document back to the original sender – all without leaving Hotmail. Inline viewing options for photos, videos, social network messages – You can view photos embedded in the mail as slideshows (with the help of Silverlight), play YouTube and Hulu videos inline, and track shipping notifications. Threaded conversations – Emails in Hotmail are grouped just like they are in GMail. Others – enhanced account protection, full-session SSL, multiple email accounts, subfolders, contact management. Video Tour Of New Features Here is an impressive video tour of the new Hotmail features. When are these new features coming to Hotmail? The majority of the new features announced today will be rolled out gradually to all users in the coming weeks. But advanced features like Office integration with Hotmail are expected to take a couple of months to reach general availability. Will You Switch back to Hotmail? Will these features lure GMail/Yahoo users to switch back to Hotmail? Maybe not immediately, but these features may keep the existing users from leaving Hotmail. I used Hotmail in the pre-GMail era, and now I use my Hotmail id only to sign in to Microsoft websites that require Hotmail authentication. It’s been years since I composed a new email in Hotmail. 
Even though the new features announced for Hotmail are very impressive, I like the way GMail rapidly brings in new features at regular intervals. If Hotmail also keeps innovating with new features at regular intervals, then there is a good chance its old users will return home. Join us on Facebook to read all our stories right inside your Facebook news feed.

    Read the article

  • What is the right way to process inconsistent data files?

    - by Tahabi
    I'm working at a company that uses Excel files to store product data, specifically, test results from products before they are shipped out. There are a few thousand spreadsheets with anywhere from 50-100 relevant data points per file. Over the years, the schema for the spreadsheets has changed significantly, but not unidirectionally - in the sense that changes often get reverted and then re-added in the space of a few dozen to a few hundred files. My project is to convert about 8000 of these spreadsheets into a database that can be queried. I'm using MongoDB to deal with the inconsistency in the data, and Python. My question is, what is the "right" or canonical way to deal with the huge variance in my source files? I've written a data structure which stores the data I want for the latest template, which will be the final template used going forward, but that only helps for a few hundred files historically. Brute-forcing a solution would mean writing similar data structures for each version/template - which means potentially writing hundreds of schemas with dozens of fields each. This seems very inefficient, especially when sometimes a change in the template is as little as moving a single line of data one row down or splitting what used to be one data field into two data fields. A slightly more elegant solution I have in mind would be writing schemas for all the variants I can find for pre-defined groups in the source files, and then writing a function to match a particular series of files with a series of variants that matches that set of files. This is because, more often than not, most of the file will remain consistent over a long period, only marred by one or two errant sections, but which section is the inconsistent one varies within that period. For example, say a file has four sections with three data fields each, which is represented by four Python dictionaries with three keys each. For files 7000-7250, sections 1-3 will be consistent, but section 4 will be shifted one row down. For files 7251-7500, 1-3 are consistent, section 4 is one row down, but a section five appears. For files 7501-7635, sections 1 and 3 will be consistent, but section 2 will have five data fields instead of three, section five disappears, and section 4 is still shifted down one row. For files 7636-7800, section 1 is consistent, section 4 gets shifted back up, section 2 returns to three cells, but section 3 is removed entirely. Files 7800-8000 have everything in order. The proposed function would take the file number and match it to a dictionary representing the data mappings for different variants of each section. For example, a section_four_variants dictionary might have two members, one for the shifted-down version and one for the normal version, a section_two_variants might have three- and five-field members, etc. The script would then read the matchings, load the correct mapping, extract the data, and insert it into the database. Is this an accepted/right way to go about solving this problem? Should I structure things differently? I don't know what to search Google for either to see what other solutions might be, though I believe the problem lies in the domain of ETL processing. I also have no formal CS training aside from what I've taught myself over the years. If this is not the right forum for this question, please tell me where to move it, if at all. Any help is most appreciated. Thank you.
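    For what it's worth, here is a minimal Python sketch of the variant-matching idea described above. The section names, row offsets and field names are entirely hypothetical; only the file-number ranges mirror the example in the question:

      # Hypothetical variant schemas for one section; start_row and fields are made up.
      section_four_variants = {
          "normal":       {"start_row": 40, "fields": ["serial", "voltage", "result"]},
          "shifted_down": {"start_row": 41, "fields": ["serial", "voltage", "result"]},
      }

      # File-number ranges mapped to the section variants that apply to them.
      variant_ranges = [
          (range(7000, 7251), {"section_4": "shifted_down"}),
          (range(7251, 7501), {"section_4": "shifted_down", "section_5": "present"}),
          (range(7501, 7636), {"section_2": "five_fields", "section_4": "shifted_down"}),
          (range(7636, 7800), {"section_2": "normal", "section_3": "missing"}),
          (range(7800, 8001), {}),  # everything back in order
      ]

      def variants_for(file_number):
          """Return the variant overrides that apply to a given file number."""
          for file_range, overrides in variant_ranges:
              if file_number in file_range:
                  return overrides
          return {}  # fall back to the latest/default template

      # Example: look up the overrides for one file before extracting its sections.
      overrides = variants_for(7620)

    The extraction step would then pick the matching schema for each section, pull the cells it names, and insert the resulting document into MongoDB.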

    Read the article

  • Tuning Red Gate: #1 of Many

    - by Grant Fritchey
    Everyone runs into performance issues at some point. Same thing goes for Red Gate software. Some of our internal systems were running into some serious bottlenecks. It just so happens that we have this nice little SQL Server monitoring tool. What if I were to, oh, I don't know, use the monitoring tool to identify the bottlenecks, figure out the causes, and then apply a fix (where possible) and then start the whole thing all over again? Just a crazy thought. OK, I was asked to. This is my first time looking through these servers, so here's how I'd go about using SQL Monitor to get a quick health check, sort of like checking the vitals on a patient. First time opening up our internal SQL Monitor instance and I was greeted with this: Oh my. Maybe I need to get our internal guys to read my blog. Anyway, I know that there are two servers where most of the load is. I'll drill down on the first. I'm selecting the server, not the instance, by clicking on the server name. That opens up the Global Overview page for the server. The information here is much more applicable to the "oh my gosh, I have a problem now" type of monitoring. But, looking at this, I am seeing something immediately. There are four (4) drives on the system. The C:\ has an average read time of 16.9ms, more than double the others. Is that a problem? Not sure, but it's something I'll look at. Its write time is higher too. I'll keep drilling down, first, to the unclosed alerts on the server. Now things get interesting. SQL Monitor has a number of different types of alerts, some related to error states, others to service status, and then some related to performance. Guess what I'm seeing a bunch of right here: long-running queries and long job durations. If you check the dates, they're all recent, within the last 24 hours. If they had just been old, uncleared alerts, I wouldn't be that concerned. But with all these, all performance related, and all in the last 24 hours, yeah, I'm concerned. At this point, I could just start responding to the alerts. If I click on one of the Long-running query alerts, I'll get all kinds of cool data that can help me determine why the query ran long. But, I'm not in a reactive mode here yet. I'm still gathering data, trying to understand how the server works. I have the information that we're generating a lot of performance alerts; let's sock that away for the moment. Instead, I'm going to back up and look at the Global Overview for the SQL Instance. It shows all the databases on the server and their status. Then it shows a number of basic metrics about the SQL Server instance, again for that "what's happening now" view of things. Then, down at the bottom, there is the Top 10 expensive queries list: This is great stuff. And no, not because I can see the top queries for the last 5 minutes, but because I can adjust that out to 3 days. Now I can see where some serious pain is occurring over the last few days. Databases have been blocked out to protect the guilty. That's it for the moment. I have enough knowledge of what's going on in the system that I can start to try to figure out why the system is running slowly. But, I want to look a little more at some historical data, to understand better how this server is behaving. More next time.
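    For readers who don't have SQL Monitor to hand, a rough approximation of that expensive-queries view can be pulled straight from the dynamic management views. This is only a sketch and not the query the tool uses; the numbers are aggregates since each plan entered the cache, not a tidy rolling 3-day window:

      -- Top 10 statements by total elapsed time since they entered the plan cache
      SELECT TOP 10
          qs.execution_count,
          qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
          qs.total_worker_time / qs.execution_count AS avg_cpu_time,
          SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
              ((CASE qs.statement_end_offset
                    WHEN -1 THEN DATALENGTH(st.text)
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset) / 2) + 1) AS statement_text
      FROM sys.dm_exec_query_stats AS qs
      CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
      ORDER BY qs.total_elapsed_time DESC;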

    Read the article


< Previous Page | 680 681 682 683 684 685 686 687 688 689 690 691  | Next Page >