Search Results

Search found 16134 results on 646 pages for 'reference guide'.


  • Month in Geek: December 2010 Edition

    - by Asian Angel
    As 2010 draws to a close, we have gathered together another great batch of article goodness for your reading enjoyment. Here are our ten hottest articles for December. Note: Articles are listed as #10 through #1.

    The 50 Best How-To Geek Windows Articles of 2010
    Even though we cover plenty of other topics, Windows has always been a primary focus around here, and we’ve got one of the largest collections of Windows-related how-to articles anywhere. Here are the fifty best Windows articles that we wrote in 2010. Read the article

    Desktop Fun: Happy New Year Wallpaper Collection [Bonus Edition]
    As this year draws to a close, it is a time to reflect back on what we have done this year and to look forward to the new one. To help commemorate the event we have put together a bonus size edition of Happy New Year wallpapers for your desktops. Read the article

    LCD? LED? Plasma? The How-To Geek Guide to HDTV Technology
    With image technology progressing faster than ever, High-Def has become the standard, giving TV buyers more options at cheaper prices. But what’s different in all these confusing TVs, and what should you know before buying one? Read the article

    HTG Explains: Which Linux File System Should You Choose?
    File systems are one of the layers beneath your operating system that you don’t think about—unless you’re faced with the plethora of options in Linux. Here’s how to make an educated decision on which file system to use. Read the article

    Desktop Fun: Merry Christmas Fonts
    Christmas will soon be here and there are lots of cards, invitations, gift tags, photos, and more to prepare beforehand. To help you get ready we have gathered together a great collection of fun holiday fonts to help turn those ordinary looking holiday items into extraordinary looking ones. Read the article

    Microsoft Security Essentials 2.0 Kills Viruses Dead. Download It Now.
    Microsoft’s Security Essentials has been our favorite anti-malware application for a while—it’s free, unobtrusive, and it doesn’t slow your PC down, but now it’s even better with the new 2.0 release, which adds network filtering, heuristic protection, and more. Read the article

    20 OS X Keyboard Shortcuts You Might Not Know
    Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known OS X shortcuts to help you become a keyboard ninja. Read the article

    20 Windows Keyboard Shortcuts You Might Not Know
    Mastering the keyboard will not only increase your navigation speed but it can also help with wrist fatigue. Here are some lesser known Windows shortcuts to help you become a keyboard ninja. Read the article

    The 50 Best Registry Hacks that Make Windows Better
    We’re big fans of hacking the Windows Registry around here, and we’ve got one of the biggest collections of registry hacks you’ll find. Don’t believe us? Here’s a list of the top 50 registry hacks that we’ve covered. Read the article

    The Complete List of iPad Tips, Tricks, and Tutorials
    The Apple iPad is an amazing tablet, and to help you get the most out of it, we’ve put together a comprehensive list of every tip, trick, and tutorial for you. Read on for more.
    Read the article

    Read the article

  • SQLAuthority News – MS Access Database is the Way to Go – April 1st Humor

    - by pinaldave
    First of all, today is April 1, April Fool’s Day, so I have written this post for some light entertainment. My friend has just sent me an email about why a person should go for Access Database. For a short background, I used to be an MS Access user once (I will not call myself an MS Access DBA), and I must say I had a good time with the database at that time. As time passed by, I moved from MS Access to SQL Server. Well, as for my friend’s email, his reasons for considering MS Access really made me laugh. MS Access may have a few points where it totally makes sense to use it. However, in the email that I received, there was not a single reason which was valid. In fact, I thought it was an April 1st joke, just delivered a little early. Let us see some of the reasons from that email. Thanks to Mahesh Bhesania for sending this email to me.

    MS Access comes with lots of free stuff, e.g. MS Excel
    MS Access is the most preferred desktop database system
    MS Access can import data from MS Excel and SQL Server
    MS Access provides a real time database
    MS Access has a free IDE-to-VB Script
    MS Access fits well in your hard drive

    I actually think that the above points are either incorrect beliefs of some users, or someone just wrote them to get some laughs out of such inaccurate data. And, for the same reason, I decided to browse the Internet and do some research on the MS Access database to verify my thoughts. While searching on this subject, I found the following two interesting statements from the site Microsoft Access Database, Why Choose It?

    Other software manufacturers are more likely to provide interfaces to MS Access than any other desktop database system
    Microsoft Access consulting rates are typically lower for Access consultants compared to Oracle or SQL Server consultants

    The second one may be the worst reason for you to switch to MS Access if you are already an SQL Server consultant. With this cartoon, have you ever felt like you were one of these chickens at some point in time? I guess that the moment might have just happened before the minute we say “I guess we were on the same page?” Does this mean we are IN the same table, or ON the same table?! (I accept it is a bad joke!) It is All Fools’ Day after all, so just laugh! If you have something funny but non-offensive to share, just leave your comment here. Reference: Pinal Dave (http://blog.SQLAuthority.com), Cartoon source unknown. Filed under: Software Development, SQL, SQL Authority, SQL Humor, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: MS ACCESS

    Read the article

  • IIS 7’s Sneaky Secret to Get COM-InterOp to Run

    - by David Hoerster
    Originally posted on: http://geekswithblogs.net/DavidHoerster/archive/2013/06/17/iis-7rsquos-sneaky-secret-to-get-com-interop-to-run.aspx

    If you’re like me, you don’t really do a lot with COM components these days. For me, I’ve been ‘lucky’ to stay in the managed world for the past 6 or 7 years. Until last week. I’m running a project to upgrade a web interface to an older COM-based application. The old web interface is all classic ASP and lots of tables, in-line styles and a bunch of other late 90’s and early 2000’s goodies. So in addition to updating the UI to be more modern looking and responsive, I decided to give the server side an update, too. So I built some COM-InterOp DLLs (easily through VS2012’s Add Reference feature…nothing new here) and built a test console app to make sure the COM DLLs were actually built according to the COM spec. There’s a document management system that I’m thinking of whose COM DLLs were not proper COM DLLs and crashed and burned every time .NET tried to call them through a COM-InterOp layer. Anyway, my test app worked like a champ and I felt confident that I could build a nice façade around the COM DLLs, wrap some functionality internally and only expose to my users/clients what they really needed. So I did this, built some tests and also built a test web app to make sure everything worked great. It did. It ran fine in IIS Express via Visual Studio 2012, and the timings were very close to the pure classic ASP calls, so there wasn’t much overhead involved in going through the COM-InterOp layer. You know where this is going, don’t you? So I deployed my test app to a DEV server running IIS 7.5. When I went to my first test page that called the COM-InterOp layer, I got this pretty message: Retrieving the COM class factory for component with CLSID {81C08CAE-1453-11D4-BEBC-00500457076D} failed due to the following error: 80040154 Class not registered (Exception from HRESULT: 0x80040154 (REGDB_E_CLASSNOTREG)). It worked as a console app and while running under IIS Express, so it must be permissions, right? I gave every account I could think of all sorts of COM+ rights and nothing, nada, zilch! Then I came across a question on Experts Exchange, and at the bottom of the page, someone mentioned that the app pool should be set to allow 32-bit apps to run. Oh yeah, my machine is 64-bit; these COM DLLs I’m using are old and are definitely 32-bit. I didn’t check for that and didn’t even think about it. But I went ahead and looked at the app pool that my web site was running under and what did I see? Yep, select your app pool in IIS 7.x, click on Advanced Settings and check for “Enable 32-bit Applications”. I went ahead and set it to True and my test application suddenly worked. Hope this helps somebody out there from pulling out your hair.
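    If you prefer to script that setting instead of clicking through the IIS Manager, a minimal sketch using the built-in appcmd tool looks like this (assuming the site runs under "DefaultAppPool"; substitute your own pool name):

        rem Enable 32-bit worker processes for the application pool
        %windir%\system32\inetsrv\appcmd.exe set apppool "DefaultAppPool" /enable32BitAppOnWin64:true

        rem Verify the setting took
        %windir%\system32\inetsrv\appcmd.exe list apppool "DefaultAppPool" /text:enable32BitAppOnWin64

    This is handy when you have to repeat the fix across several DEV/QA servers.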

    Read the article

  • SQL SERVER – Difference Between DATETIME and DATETIME2

    - by pinaldave
    Yesterday I wrote a very quick blog post on SQL SERVER – Difference Between GETDATE and SYSDATETIME and I got a tremendous response to it. I suggest you read that blog post before continuing with this one. I had asked people to honestly take part and share their views about the above two system functions. There were a few emails as well as a few comments on the blog post asking how I came to know about the difference between the two. The answer is real world issues. I was called in for a performance tuning consultancy where a developer asked me a very strange question. Here is the situation he was facing. The system had a single table with two different datetime columns. One column was datelastmodified and the second column was datefirstmodified. One of the columns was DATETIME and the other was DATETIME2. The developer was populating both of them with SYSDATETIME(). He always thought that the values inserted in the table would be the same. This table was only accessed by INSERT statements and no updates were done on it in the application. One fine day he ran DISTINCT on both of these columns and was in for a surprise. He had always thought that both columns would have the same data, but in fact they had very different data. He presented this scenario to me. I said this could not be possible, but when I looked at the resultset, I had to agree with him. Here is the simple script generated to demonstrate the problem he was facing. This is just a sample of the original table.

        DECLARE @Intveral INT
        SET @Intveral = 10000
        CREATE TABLE #TimeTable (FirstDate DATETIME, LastDate DATETIME2)
        WHILE (@Intveral > 0)
        BEGIN
        INSERT #TimeTable (FirstDate, LastDate)
        VALUES (SYSDATETIME(), SYSDATETIME())
        SET @Intveral = @Intveral - 1
        END
        GO
        SELECT COUNT(DISTINCT FirstDate) D_GETDATE,
        COUNT(DISTINCT LastDate) D_SYSGETDATE
        FROM #TimeTable
        GO
        SELECT DISTINCT a.FirstDate, b.LastDate
        FROM #TimeTable a
        INNER JOIN #TimeTable b ON a.FirstDate = b.LastDate
        GO
        SELECT * FROM #TimeTable
        GO
        DROP TABLE #TimeTable
        GO

    Let us see the resultset. You can clearly see from the result that SYSDATETIME() does not populate the same value in both of the fields. In fact, the value is either rounded down or rounded up in the DATETIME column. Even though we are populating the same value, the values end up totally different in the two columns, causing the self join to fail and DISTINCT to display different values. The best policy is: if you are using DATETIME, use GETDATE(), and if you are using DATETIME2, use SYSDATETIME() to populate them with the current date and time, to accurately match the precision. As DATETIME2 was introduced in SQL Server 2008, the above script will only work with SQL Server 2008 and later versions. I hope I have answered a few of the questions asked yesterday. Reference: Pinal Dave (http://www.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL DateTime, SQL Optimization, SQL Performance, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology
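    If you want to see the rounding behavior in isolation, here is a minimal sketch (my own illustration, not part of the original script): DATETIME is accurate only to roughly 1/300 of a second (values end in .000, .003 or .007), while DATETIME2 keeps up to 100-nanosecond precision, so the cast below will usually change the value.

        DECLARE @d2 DATETIME2 = SYSDATETIME();
        SELECT @d2 AS value_as_datetime2,
               CAST(@d2 AS DATETIME) AS value_rounded_to_datetime;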

    Read the article

  • Localization in ASP.NET MVC 2 using ModelMetadata

    - by rajbk
    This post uses an MVC 2 RTM application inside VS 2010 that is targeting the .NET Framework 4. .NET 4 DataAnnotations comes with a new Display attribute that has several properties, including specifying the value that is used for display in the UI and a ResourceType. Unfortunately, this attribute is new and is not supported in MVC 2 RTM. The good news is it will be supported and is currently available in the MVC Futures release. The steps to get this working are shown below:

    Download the MVC Futures library.
    Add a reference to the Microsoft.Web.MVC.AspNet4 dll.
    Add a folder in your MVC project where you will store the resx files.
    Open the resx file and change “Access Modifier” to “Public”. This allows the resources to be accessible from other assemblies. Internally, it changes the “Custom Tool” used to generate the code behind from ResXFileCodeGenerator to “PublicResXFileCodeGenerator”.
    Add your localized strings in the resx.
    Register the new ModelMetadataProvider:

        protected void Application_Start() {
            AreaRegistration.RegisterAllAreas();
            RegisterRoutes(RouteTable.Routes);
            //Add this
            ModelMetadataProviders.Current = new DataAnnotations4ModelMetadataProvider();
            DataAnnotations4ModelValidatorProvider.RegisterProvider();
        }

    Use the Display attribute in your Model:

        public class Employee {
            [Display(Name="ID")]
            public int ID { get; set; }

            [Display(ResourceType = typeof(Common), Name="Name")]
            public string Name { get; set; }
        }

    Use the new HTML UI Helpers in your strongly typed view:

        <%: Html.EditorForModel() %>
        <%: Html.EditorFor(m => m) %>
        <%: Html.LabelFor(m => m.Name) %>

    ...and you are good to go. Adventure is out there!
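    One detail the walkthrough leaves implicit is how the UI culture gets selected at runtime, since the resx lookup keys off the current thread's UICulture. A minimal sketch (my own addition, assuming you set it per request in Global.asax; "fr-FR" is only an example value that you would normally derive from the request or a user profile):

        using System.Globalization;
        using System.Threading;

        protected void Application_BeginRequest() {
            // Example only: pick the culture from a cookie, user profile or Accept-Language header
            var culture = new CultureInfo("fr-FR");
            Thread.CurrentThread.CurrentUICulture = culture; // drives which resx is used
            Thread.CurrentThread.CurrentCulture = culture;   // drives date/number formatting
        }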

    Read the article

  • SQL SERVER – Developer Training Kit for SQL Server 2012

    - by pinaldave
    The Developer Training Kit is my favorite part of any product. The reason behind this is very simple: it is the single resource that gives a complete overview of the product in a nutshell. A developer can learn from many places – books, webcasts, tutorials, blogs, etc. However, I have found that developer training kits are the best starting point for any product. Start with them first; see what the new features are, as well as what new message the product is coming up with. Once that is learned, the very next step should be to identify the right learning material to explore the preferred topic. The SQL Server 2012 Developer Training Kit includes technical content including labs, demos and presentations designed to help you learn how to develop SQL Server 2012 database and BI solutions. New and updated content will be released periodically and can be downloaded on-demand using the Web Installer. Download SQL Server 2012 Developer Training Kit Web Installer. This training kit was available earlier this year, but it is never too late to explore it if you have not referred to it earlier. Additionally, if you do not want to download the complete kit altogether, I suggest you refer to the Wiki here. This wiki contains all the same presentations and demo notes that the web installer contains. Refer to the SQL Server 2012 Developer Training Kit Wiki. The wiki contains the following modules and details about the Hands-On Labs:

    Module 1: Introduction to SQL Server 2012
    Module 2: Introduction to SQL Server 2012 AlwaysOn
    Module 3: Exploring and Managing SQL Server 2012 Database Engine Improvements
    Module 4: SQL Server 2012 Database Server Programmability
    Module 5: SQL Server 2012 Application Development
    Module 6: SQL Server 2012 Enterprise Information Management
    Module 7: SQL Server 2012 Business Intelligence
    Hands-On Labs: SQL Server 2012 Database Engine
    Hands-On Labs: Visual Studio 2010 and .NET 4.0
    Hands-On Labs: SQL Server 2012 Enterprise Information Management
    Hands-On Labs: SQL Server 2012 Business Intelligence
    Hands-On Labs: Windows Azure and SQL Azure

    As I said, if you have not downloaded it so far, it is never too late to explore it. Trust me, you will learn at least one thing if you just explore the content. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Developer Training, PostADay, SQL, SQL Authority, SQL Documentation, SQL Download, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode".

    Potential advantages include:

    More source code and more documentation on the screen(s) at once
    Ability to edit documentation independently of source code (regardless of language?)
    Write documentation and source code in parallel without merge conflicts
    Real-time hyperlinked documentation with superior text formatting
    Quasi-real-time machine translation into different natural languages
    Every line of code can be clearly linked to a task, business requirement, etc.
    Documentation could automatically timestamp when each line of code was written (metrics)
    Dynamic inclusion of architecture diagrams, images to explain relations, etc.
    Single-source documentation (e.g., tag code snippets for user manual inclusion)

    Note:

    The documentation window can be collapsed
    Workflow for viewing or comparing source files would not be affected

    How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven. By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?
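    Purely as an illustration of the split-into-two-files idea (a hypothetical convention, not an existing tool; the markup, anchor syntax and REQ link are invented for the example), the paired files might look like:

    add.c (the code panel, free of explanatory prose):

        int add(int a, int b) { return a + b; }

    add.c.doc (the documentation panel, in a hypothetical markup):

        @anchor add
        Sums two operands; required by [REQ-42](https://wiki.example.com/REQ-42).

    The IDE would highlight the @anchor block whenever the cursor sits on the corresponding function.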

    Read the article

  • SQL SERVER – What is MDS? – Master Data Services in Microsoft SQL Server 2008 R2

    - by pinaldave
    What is MDS? Master Data Services helps enterprises standardize the data people rely on to make critical business decisions. With Master Data Services, IT organizations can centrally manage critical data assets company wide and across diverse systems, enable more people to securely manage master data directly, and ensure the integrity of information over time. (Source: Microsoft) Today I will be talking about the same subject at Microsoft TechEd India. If you want to learn how to standardize your data and apply business rules to validate data, you must attend my session. MDS is a very interesting concept; I will cover it in ten super short but very interesting slides. I will make sure that in the very first 20 minutes you will understand the following topics:

    Introduction to Master Data Management
    What is Master Data and Challenges
    MDM Challenges and Advantage
    Microsoft Master Data Services
    Benefits and Key Features
    Uses of MDS
    Capabilities
    Key Features of MDS

    This slide deck will be followed by an around 30 minute demo which will tell the story of entities, hierarchies, versions, security, consolidation and collections. I will tell this story keeping business rules at the center. We will take one business rule, a simple validation rule, and make it much more complex and yet very useful to the product. I will also demonstrate a few real life scenarios where I will be talking about MDS and its usage. Do not miss this session. At the end of the session a book will be awarded to the best participant. My session details:

    Session: Master Data Services in Microsoft SQL Server 2008 R2
    Date: April 12, 2010
    Time: 2:30pm-3:30pm

    SQL Server Master Data Services will ship with SQL Server 2008 R2 and will improve Microsoft’s platform appeal. This session provides an in depth demonstration of MDS features and highlights important usage scenarios. Master Data Services enables consistent decision making by allowing you to create, manage and propagate changes from a single master view of your business entities. Also, the Master Data hub, a vital component of MDS, helps ensure reporting consistency across systems and deliver faster, more accurate results across the enterprise. We will talk about establishing the basis for a centralized approach to defining, deploying, and managing master data in the enterprise. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Data Warehousing, MVP, Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority Author Visit, T SQL, Technology Tagged: TechEd, TechEdIn

    Read the article

  • This Week in Geek History: Gmail Goes Public, Deep Blue Wins at Chess, and the Birth of Thomas Edison

    - by Jason Fitzpatrick
    Every week we bring you a snapshot of the week in Geek History. This week we’re taking a peek at the public release of Gmail, the first time a computer won against a chess champion, and the birth of prolific inventor Thomas Edison.

    Gmail Goes Public
    It’s hard to believe that Gmail has only been around for seven years and that for the first three years of its life it was invite only. In 2007 Gmail dropped the invite-only requirement (although it would hold onto the “beta” tag for another two years) and opened its doors for anyone to grab a username @gmail. For what seemed like an entire epoch in internet history, Gmail had the slickest web-based email around, with constant innovations and features rolling out from Gmail Labs. Only in the last year or so have major overhauls at competitors like Hotmail and Yahoo! Mail brought other services up to speed. Can’t stand reading a Week in Geek History entry without a random fact? Here you go: gmail.com was originally owned by the Garfield franchise and ran a service that delivered Garfield comics to your email inbox. No, we’re not kidding.

    Deep Blue Proves Itself a Chess Master
    Deep Blue was a supercomputer constructed by IBM with the sole purpose of winning chess matches. In 2011, with the all-seeing eye of Google and the amazing computational abilities of engines like Wolfram Alpha, we simply take powerful computers immersed in our daily lives for granted. The 1996 match against reigning world chess champion Garry Kasparov, wherein Deep Blue held its own but ultimately lost the match 4-2, shook a lot of people up. What did it mean if something that was considered such an elegant and quintessentially human endeavor as chess was so easy for a machine? A series of upgrades helped Deep Blue outright win a match against Kasparov in 1997 (seen in the photo above). After the win Deep Blue was retired and disassembled. Parts of Deep Blue are housed in the National Museum of American History and the Computer History Museum.

    Birth of Thomas Edison
    Thomas Alva Edison was one of the most prolific inventors in history and holds an astounding 1,093 US patents. He is responsible for outright inventing or greatly refining major innovations in the history of world culture including the phonograph, the movie camera, the carbon microphone used in nearly every telephone well into the 1980s, batteries for electric cars (a notion we’d take over a century to take seriously), voting machines, and of course his enormous contribution to electric distribution systems. Despite the role of scientist and inventor being largely unglamorous, Thomas Edison and his tumultuous relationship with fellow inventor Nikola Tesla have been fodder for everything from books, to comics, to movies, and video games.

    Other Notable Moments from This Week in Geek History
    Although we only shine the spotlight on three interesting facts a week in our Geek History column, that doesn’t mean we don’t have space to highlight a few more in passing. This week in Geek History:

    1971 – Apollo 14 returns to Earth after the third Lunar landing mission.
    1974 – Birth of Robot Chicken creator Seth Green.
    1986 – Death of Dune creator Frank Herbert. Goodnight Dune.
    1997 – The Simpsons becomes the longest running animated show on television.

    Have an interesting bit of geek trivia to share? Shoot us an email to [email protected] with “history” in the subject line and we’ll be sure to add it to our list of trivia.

    Read the article

  • Ubuntu 12.04 patched b43 driver compilation error

    - by Zed
    I followed the “How do I install this patched b43 driver?” guide to install the patched b43 driver on Ubuntu 12.04 with the 3.2.0-31-generic kernel, but I can't get past the compilation phase. Here is what I did:

    wget http://www.orbit-lab.org/kernel/compat-wireless-3-stable/v3.1/compat-wireless-3.1.1-1.tar.bz2
    cd compat-wireless-3.1.1-1/
    scripts/driver-select b43
    make

    make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
    make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
    CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
    In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.29.h:5:0,
    from /home/marco/compat-wireless-3.1.1-1/include/linux/compat-2.6.h:24,
    from <command-line>:0:
    include/linux/netdevice.h:1153:5: warning: "IS_ENABLED" is not defined [-Wundef]
    include/linux/netdevice.h:1153:15: error: missing binary operator before token "("
    include/linux/netdevice.h: In function ‘netdev_uses_dsa_tags’:
    include/linux/netdevice.h:1421:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h:1422:31: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h: In function ‘netdev_uses_trailer_tags’:
    include/linux/netdevice.h:1431:9: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    include/linux/netdevice.h:1432:35: error: ‘struct net_device’ has no member named ‘dsa_ptr’
    make[3]: *** [/home/marco/compat-wireless-3.1.1-1/compat/main.o] Error 1
    make[2]: *** [/home/marco/compat-wireless-3.1.1-1/compat] Error 2
    make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
    make: *** [modules] Error 2

    To fix that error I added #include <linux/kconfig.h> to /usr/src/linux-headers-3.2.0-31-generic/include/linux/netdevice.h, but now I'm getting something else:

    make
    make -C /lib/modules/3.2.0-31-generic/build M=/home/marco/compat-wireless-3.1.1-1 modules
    make[1]: Entering directory `/usr/src/linux-headers-3.2.0-31-generic'
    CC [M] /home/marco/compat-wireless-3.1.1-1/compat/main.o
    LD [M] /home/marco/compat-wireless-3.1.1-1/compat/compat.o
    CC [M] /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o
    In file included from /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:9:0,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h: In function ‘ssb_driver_register’:
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: error: ‘THIS_MODULE’ undeclared (first use in this function)
    /home/marco/compat-wireless-3.1.1-1/include/linux/ssb/ssb.h:236:36: note: each undeclared identifier is reported only once for each function it appears in
    In file included from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/bcma_private.h:8:0,
    from /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:8:
    /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h: In function ‘bcma_driver_register’:
    /home/marco/compat-wireless-3.1.1-1/include/linux/bcma/bcma.h:170:37: error: ‘THIS_MODULE’ undeclared (first use in this function)
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c: At top level:
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:12:20: error: expected declaration specifiers or ‘...’ before string constant
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:13:16: error: expected declaration specifiers or ‘...’ before string constant
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: data definition has no type or storage class [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:182:1: warning: parameter names (without types) in function declaration [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: data definition has no type or storage class [enabled by default]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: type defaults to ‘int’ in declaration of ‘EXPORT_SYMBOL_GPL’ [-Wimplicit-int]
    /home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.c:188:1: warning: parameter names (without types) in function declaration [enabled by default]
    make[3]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma/main.o] Error 1
    make[2]: *** [/home/marco/compat-wireless-3.1.1-1/drivers/bcma] Error 2
    make[1]: *** [_module_/home/marco/compat-wireless-3.1.1-1] Error 2
    make[1]: Leaving directory `/usr/src/linux-headers-3.2.0-31-generic'
    make: *** [modules] Error 2

    Any suggestion what to try next?

    Read the article

  • MySQL – How to Find mysqld.exe with Command Prompt – Fix: ‘mysql’ is not recognized as an internal or external command, operable program or batch file

    - by Pinal Dave
    One of the most popular questions I get after people watch my MySQL courses on Pluralsight is from beginning users who are not able to find where they have installed MySQL Server. The error they receive is as follows when they type the mysqld command on their default command line:

    ‘mysql‘ is not recognized as an internal or external command, operable program or batch file.

    This error comes up if the user tries to execute the mysqld command at the default command prompt. The command should be executed in the directory where mysqld.exe exists. If you are using Windows Explorer you can easily search your drive for mysqld.exe, find the location of the file and execute the command there. However, if you want to find the location of mysqld.exe with the command prompt, you can follow the directions here.

    Step 1: Open a command prompt
    Open a command prompt from Start >> Run >> cmd >> Enter.

    Step 2: Change directory
    You need to change the default directory to the root directory, hence type the cd\ command at the prompt to change the default directory to C:\. Here we are assuming that you have installed MySQL on your C: drive. If you have installed it on any other drive, change to that drive letter.

    Step 3: Search the drive
    Type the command dir mysqld.exe /s /p at the command prompt. It will search your directories and list the directory where mysqld.exe is located.

    Step 4: Change directory
    Now once again change your command prompt location to the folder where mysqld.exe is located. In my case it is located in the folder C:\Program Files\MySQL\MySQL Server 5.6\bin, hence I will run the following command: cd C:\Program Files\MySQL\MySQL Server 5.6\bin

    Step 5: Execute mysqld.exe
    Now you can once again run mysqld.exe at your command prompt. You can use this method to search for pretty much any file with the help of the command prompt. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: MySQL, PostADay, SQL, SQL Authority, SQL Query, SQL Tips and Tricks, T SQL
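    As a side note, on Windows Vista and later the built-in where command can do the recursive search from Step 3 in one line (a small sketch; adjust the drive letter to wherever MySQL might be installed):

        where /R C:\ mysqld.exe

    This prints the full path of every mysqld.exe found under C:\, which you can then change into as in Step 4.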

    Read the article

  • The Importance of a Security Assessment - by Michael Terra, Oracle

    - by Darin Pendergraft
    Today's blog was written by Michael Terra, who was the Subject Matter Expert for the recently announced Oracle Online Security Assessment. You can take the Online Assessment here: Take the Online Assessment.

    Over the past decade, IT Security has become a recognized and respected business discipline. Several factors have contributed to IT Security becoming a core business and organizational enabler including, but not limited to, increased external threats and increased regulatory pressure. Security is also viewed as a key enabler for strategic corporate activities such as mergers and acquisitions. Now, the challenge for senior security professionals is to develop an ongoing dialogue within their organizations about the importance of information security and how it can impact their organization's strategic objectives/mission. Conducting regular “Security Assessments” across the IT and physical infrastructure has become increasingly important. Security standards and frameworks, such as the international standard ISO 27001, are increasingly being adopted by organizations and their business partners as proof of their security posture, and “Security Assessments” are a great way to ensure continued alignment to these frameworks. Oracle offers a number of different security assessments covering a broad range of technologies. Some of these are short engagements conducted for free with our strategic customers and partners. Others are longer term paid engagements delivered by Oracle Consulting Services or one of our partners. The goal of a security assessment (also known as a security audit or security review) is to ensure that necessary security controls are integrated into the design and implementation of a project, application or technology. A properly completed security assessment should provide documentation outlining any security gaps that exist in an infrastructure and the associated risks for those gaps. With that knowledge, an organization can choose to either mitigate, transfer, avoid or accept the risk. One example of an Oracle offering is a Security Readiness Assessment. The Oracle Security Readiness Assessment is a practical security architecture review focused on aligning an organization’s enterprise security architecture to its business principles and strategic objectives. The service will establish a multi-phase security architecture roadmap focused on supporting new and existing business initiatives.

    Offering Overview
    The Security Readiness Assessment will:

    Define an organization’s current security posture and provide a roadmap to a desired future state architecture by mapping security solutions to business goals
    Incorporate commonly accepted security architecture concepts to streamline an organization’s security vision from strategy to implementation
    Define the people, process and technology implications of the desired future state architecture

    The objective is to deliver cohesive, best practice security architectures spanning multiple domains that are unique and specific to the context of your organization.

    Offering Details
    The Oracle Security Readiness Assessment is a multi-stage process with a dedicated Oracle Security team supporting your organization. During the course of this free engagement, the team will focus on the following:

    Review your current business operating model and supporting IT security structures and processes
    Partner with your organization to establish a future state security architecture leveraging Oracle’s reference architectures, capability maps, and best practices
    Provide guidance and recommendations on governance practices for the rollout and adoption of your future state security architecture
    Create an initial business case for the adoption of the future state security architecture

    If you are interested in finding out more, ask your Sales Consultant or Account Manager for details.

    Read the article

  • NTFS Corruption: Files created in Linux corrupted when Windows Boots

    - by Logan Mayfield
    I'm getting some file loss and corruption on my Win7/Ubuntu 12.04 dual boot setup. I have a large shared NTFS partition. I have my Windows Docs/Music/etc. directories on that partition and have the comparable directories in Linux set up as symlinks. I'm using ntfs-3g on the Linux side of things to manage the NTFS partition. The shared partition is on a logical partition along with my Linux /home, / and swap partitions. The NTFS partition is mounted at boot time via fstab with the following options:

    ntfs-3g users,nls=utf8,locale=en_US.UTF-8,exec,rw

    The problem seems to be confined to newly created and recently edited files. I have not seen data loss or corruption when creating/editing files in Windows and then moving over to Ubuntu. I've been using the sync command aggressively in Ubuntu to try to ensure everything is getting written to the HDD. I do not use hibernate in Windows, so I know it's not the usual missing-files-due-to-hibernation problem. I'm not seeing any mount related issues in dmesg. Most recently I had a set of files related to a LaTeX document go bad. Some of them show up in Ubuntu but I am unable to delete them. In the GUI file browser they are given thumbnails associated with files I created on my last boot of Windows. To be more specific: I created a few png files in Windows. The files corrupted by that Windows boot are associated with running PdfLatex on a file and are not image files. However, two of the corrupted files show up with the thumbnail image of one of the previously mentioned png files. The png files are not in the same directory as the LaTeX files, but they are both within the Documents folder tree. I've had success with using NTFS for shared data in the past and am hoping there's some quirk here I'm missing and it's not just bad luck. On one hand this appears to be some kind of Windows problem, as data loss occurs when I boot to Windows after having worked in Ubuntu for a while. However, I'm assuming it's more on the Ubuntu end as it requires the special NTFS drivers. Edit for more info: This is a Lenovo Thinkpad L430, purchased new in the last month, so it's a fairly fresh install. Many of the files on the shared partition were copied over from a previous NTFS formatted shared partition on another HDD. As requested, here's a sample chkdsk log. Some of the files it mentions were deleted off the partition while in Ubuntu. Others were created/edited but not deleted.

    Checking file system on D:
    Volume dismounted. All opened handles to this volume are now invalid.
    Volume label is Files.
    CHKDSK is verifying files (stage 1 of 3)...
    Attribute record of type 0x80 and instance tag 0x2 is cross linked starting at 0x789f47 for possibly 0x21 clusters.
    Some clusters occupied by attribute of type 0x80 and instance tag 0x2 in file 0x42 is already in use.
    Deleting corrupt attribute record (128, "") from file record segment 66.
    86496 file records processed.
    File verification completed.
    385 large file records processed.
    0 bad file records processed.
    0 EA records processed.
    0 reparse records processed.
    CHKDSK is verifying indexes (stage 2 of 3)...
    Deleted invalid filename Screenshot from 2012-09-09 09:51:27.png (72) in directory 46.
    The NTFS file name attribute in file 0x48 is incorrect.
    53 00 63 00 72 00 65 00 65 00 6e 00 73 00 68 00 S.c.r.e.e.n.s.h.
    6f 00 74 00 20 00 66 00 72 00 6f 00 6d 00 20 00 o.t. .f.r.o.m. .
    32 00 30 00 31 00 32 00 2d 00 30 00 39 00 2d 00 2.0.1.2.-.0.9.-.
    30 00 39 00 20 00 30 00 39 00 3a 00 35 00 31 00 0.9. .0.9.:.5.1.
    3a 00 32 00 37 00 2e 00 70 00 6e 00 67 00 0d 00 :.2.7...p.n.g...
    00 00 00 00 00 00 90 94 49 1f 5e 00 00 80 d4 00 ......I.^....
    File 72 has been orphaned since all its filenames were invalid
    Windows will recover the file in the orphan recovery phase.
    Correcting minor file name errors in file 72.
    Index entry found.000 of index $I30 in file 0x5 points to unused file 0x11.
    Deleting index entry found.000 in index $I30 of file 5.
    Index entry found.001 of index $I30 in file 0x5 points to unused file 0x16.
    Deleting index entry found.001 in index $I30 of file 5.
    Index entry found.002 of index $I30 in file 0x5 points to unused file 0x15.
    Deleting index entry found.002 in index $I30 of file 5.
    Index entry DOWNLO~1 of index $I30 in file 0x28 points to unused file 0x2b6.
    Deleting index entry DOWNLO~1 in index $I30 of file 40.
    Unable to locate the file name attribute of index entry Screenshot from 2012-09-09 09:51:27.png of index $I30 with parent 0x2e in file 0x48.
    Deleting index entry Screenshot from 2012-09-09 09:51:27.png in index $I30 of file 46.
    An index entry of index $I30 in file 0x32 points to file 0x151e8 which is beyond the MFT.
    Deleting index entry latexsheet.tex in index $I30 of file 50.
    An index entry of index $I30 in file 0x58bc points to file 0x151eb which is beyond the MFT.
    Deleting index entry D8CZ82PK in index $I30 of file 22716.
    An index entry of index $I30 in file 0x58bc points to file 0x151f7 which is beyond the MFT.
    Deleting index entry EGA4QEAX in index $I30 of file 22716.
    An index entry of index $I30 in file 0x58bc points to file 0x151e9 which is beyond the MFT.
    Deleting index entry NGTB469M in index $I30 of file 22716.
    An index entry of index $I30 in file 0x58bc points to file 0x151fb which is beyond the MFT.
    Deleting index entry WU5RKXAB in index $I30 of file 22716.
    Index entry comp220-lab3.synctex.gz of index $I30 in file 0xda69 points to unused file 0xd098.
    Deleting index entry comp220-lab3.synctex.gz in index $I30 of file 55913.
    Unable to locate the file name attribute of index entry comp220-numberGrammars.aux of index $I30 with parent 0xda69 in file 0xa276.
    Deleting index entry comp220-numberGrammars.aux in index $I30 of file 55913.
    The file reference 0x500000000cd43 of index entry comp220-numberGrammars.out of index $I30 with parent 0xda69 is not the same as 0x600000000cd43.
    Deleting index entry comp220-numberGrammars.out in index $I30 of file 55913.
    The file reference 0x500000000cd45 of index entry comp220-numberGrammars.pdf of index $I30 with parent 0xda69 is not the same as 0xc00000000cd45.
    Deleting index entry comp220-numberGrammars.pdf in index $I30 of file 55913.
    An index entry of index $I30 in file 0xda69 points to file 0x15290 which is beyond the MFT.
    Deleting index entry gram.aux in index $I30 of file 55913.
    An index entry of index $I30 in file 0xda69 points to file 0x15291 which is beyond the MFT.
    Deleting index entry gram.out in index $I30 of file 55913.
    An index entry of index $I30 in file 0xda69 points to file 0x15292 which is beyond the MFT.
    Deleting index entry gram.pdf in index $I30 of file 55913.
    Unable to locate the file name attribute of index entry comp230-quiz1.synctex.gz of index $I30 with parent 0xda6f in file 0xd183.
    Deleting index entry comp230-quiz1.synctex.gz in index $I30 of file 55919.
    An index entry of index $I30 in file 0xf3cc points to file 0x15283 which is beyond the MFT.
    Deleting index entry require-transform.rkt in index $I30 of file 62412.
    An index entry of index $I30 in file 0xf3cc points to file 0x15284 which is beyond the MFT.
    Deleting index entry set.rkt in index $I30 of file 62412.
    An index entry of index $I30 in file 0xf497 points to file 0x15280 which is beyond the MFT.
    Deleting index entry logger.rkt in index $I30 of file 62615.
    An index entry of index $I30 in file 0xf497 points to file 0x15281 which is beyond the MFT.
    Deleting index entry misc.rkt in index $I30 of file 62615.
    An index entry of index $I30 in file 0xf497 points to file 0x15282 which is beyond the MFT.
    Deleting index entry more-scheme.rkt in index $I30 of file 62615.
    An index entry of index $I30 in file 0xf5bf points to file 0x15285 which is beyond the MFT.
    Deleting index entry core-layout.rkt in index $I30 of file 62911.
    An index entry of index $I30 in file 0xf5e0 points to file 0x15286 which is beyond the MFT.
    Deleting index entry ref.scrbl in index $I30 of file 62944.
    An index entry of index $I30 in file 0xf6f0 points to file 0x15287 which is beyond the MFT.
    Deleting index entry base-render.rkt in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf6f0 points to file 0x15288 which is beyond the MFT.
    Deleting index entry html-properties.rkt in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf6f0 points to file 0x15289 which is beyond the MFT.
    Deleting index entry html-render.rkt in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf6f0 points to file 0x1528b which is beyond the MFT.
    Deleting index entry latex-prefix.rkt in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf6f0 points to file 0x1528c which is beyond the MFT.
    Deleting index entry latex-render.rkt in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf6f0 points to file 0x1528e which is beyond the MFT.
    Deleting index entry scribble.tex in index $I30 of file 63216.
    An index entry of index $I30 in file 0xf717 points to file 0x1528a which is beyond the MFT.
    Deleting index entry lang.rkt in index $I30 of file 63255.
    An index entry of index $I30 in file 0xf721 points to file 0x1528d which is beyond the MFT.
    Deleting index entry lang.rkt in index $I30 of file 63265.
    An index entry of index $I30 in file 0xf764 points to file 0x1528f which is beyond the MFT.
    Deleting index entry lang.rkt in index $I30 of file 63332.
    An index entry of index $I30 in file 0x14261 points to file 0x15270 which is beyond the MFT.
    Deleting index entry fddff3ae9ae2221207f144821d475c08ec3d05 in index $I30 of file 82529.
    An index entry of index $I30 in file 0x14621 points to file 0x15268 which is beyond the MFT.
    Deleting index entry FETCH_HEAD in index $I30 of file 83489.
    An index entry of index $I30 in file 0x14650 points to file 0x15272 which is beyond the MFT.
    Deleting index entry 86 in index $I30 of file 83536.
    An index entry of index $I30 in file 0x14651 points to file 0x15266 which is beyond the MFT.
    Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.idx in index $I30 of file 83537.
    An index entry of index $I30 in file 0x14651 points to file 0x15265 which is beyond the MFT.
    Deleting index entry pack-7f54ce9f8218d2cd8d6815b8c07461b50584027f.pack in index $I30 of file 83537.
    An index entry of index $I30 in file 0x146f1 points to file 0x15275 which is beyond the MFT.
    Deleting index entry master in index $I30 of file 83697.
    An index entry of index $I30 in file 0x146f6 points to file 0x15276 which is beyond the MFT.
    Deleting index entry remotes in index $I30 of file 83702.
    An index entry of index $I30 in file 0x1477d points to file 0x15278 which is beyond the MFT.
    Deleting index entry pad.rkt in index $I30 of file 83837.
    An index entry of index $I30 in file 0x14797 points to file 0x1527c which is beyond the MFT.
    Deleting index entry pad1.rkt in index $I30 of file 83863.
    An index entry of index $I30 in file 0x14810 points to file 0x1527d which is beyond the MFT.
    Deleting index entry cm.rkt in index $I30 of file 83984.
    An index entry of index $I30 in file 0x14926 points to file 0x1527e which is beyond the MFT.
    Deleting index entry multi-file-search.rkt in index $I30 of file 84262.
    An index entry of index $I30 in file 0x149ef points to file 0x1527f which is beyond the MFT.
    Deleting index entry com.rkt in index $I30 of file 84463.
    An index entry of index $I30 in file 0x14b47 points to file 0x15202 which is beyond the MFT.
    Deleting index entry COMMIT_EDITMSG in index $I30 of file 84807.
    An index entry of index $I30 in file 0x14b47 points to file 0x15279 which is beyond the MFT.
    Deleting index entry index in index $I30 of file 84807.
    An index entry of index $I30 in file 0x14b4c points to file 0x15274 which is beyond the MFT.
    Deleting index entry master in index $I30 of file 84812.
    An index entry of index $I30 in file 0x14b61 points to file 0x1520b which is beyond the MFT.
    Deleting index entry 02 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x1525a which is beyond the MFT.
    Deleting index entry 28 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15208 which is beyond the MFT.
    Deleting index entry 29 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x1521f which is beyond the MFT.
    Deleting index entry 2c in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15261 which is beyond the MFT.
    Deleting index entry 2e in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x151f0 which is beyond the MFT.
    Deleting index entry 45 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x1523e which is beyond the MFT.
    Deleting index entry 47 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x151e5 which is beyond the MFT.
    Deleting index entry 49 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15214 which is beyond the MFT.
    Deleting index entry 58 in index $I30 of file 84833.
    Index entry 6e of index $I30 in file 0x14b61 points to unused file 0xd182.
    Deleting index entry 6e in index $I30 of file 84833.
    Unable to locate the file name attribute of index entry a0 of index $I30 with parent 0x14b61 in file 0xd29c.
    Deleting index entry a0 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x1521b which is beyond the MFT.
    Deleting index entry cd in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15249 which is beyond the MFT.
    Deleting index entry d6 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15242 which is beyond the MFT.
    Deleting index entry df in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x15227 which is beyond the MFT.
    Deleting index entry ea in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x1522e which is beyond the MFT.
    Deleting index entry f3 in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b61 points to file 0x151f2 which is beyond the MFT.
    Deleting index entry ff in index $I30 of file 84833.
    An index entry of index $I30 in file 0x14b62 points to file 0x15254 which is beyond the MFT.
    Deleting index entry 1ed39b36ad4bd48c91d22cbafd7390f1ea38da in index $I30 of file 84834.
    An index entry of index $I30 in file 0x14b75 points to file 0x15224 which is beyond the MFT.
    Deleting index entry 96260247010fe9811fea773c08c5f3a314df3f in index $I30 of file 84853.
    An index entry of index $I30 in file 0x14b79 points to file 0x15219 which is beyond the MFT.
    Deleting index entry 8f689724ca23528dd4f4ab8b475ace6edcb8f5 in index $I30 of file 84857.
    An index entry of index $I30 in file 0x14b7c points to file 0x15223 which is beyond the MFT.
    Deleting index entry 1df17cf850656be42c947cba6295d29c248d94 in index $I30 of file 84860.
    An index entry of index $I30 in file 0x14b7c points to file 0x15217 which is beyond the MFT.
    Deleting index entry 31db8a3c72a3e44769bbd8db58d36f8298242c in index $I30 of file 84860.
    An index entry of index $I30 in file 0x14b7c points to file 0x15267 which is beyond the MFT.
    Deleting index entry 8e1254d755ff1882d61c07011272bac3612f57 in index $I30 of file 84860.
    An index entry of index $I30 in file 0x14b82 points to file 0x15246 which is beyond the MFT.
    Deleting index entry f959bfaf9643c1b9e78d5ecf8f669133efdbf3 in index $I30 of file 84866.
    An index entry of index $I30 in file 0x14b88 points to file 0x151fe which is beyond the MFT.
    Deleting index entry 7e9aa15b1196b2c60116afa4ffa613397f2185 in index $I30 of file 84872.
    An index entry of index $I30 in file 0x14b8a points to file 0x151ea which is beyond the MFT.
    Deleting index entry 73cb0cd248e494bb508f41b55d862e84cdd6e0 in index $I30 of file 84874.
    An index entry of index $I30 in file 0x14b8e points to file 0x15264 which is beyond the MFT.
    Deleting index entry bd555d9f0383cc14c317120149e9376a8094c4 in index $I30 of file 84878.
    An index entry of index $I30 in file 0x14b96 points to file 0x15212 which is beyond the MFT.
    Deleting index entry 630dba40562d991bc6cbb6fed4ba638542e9c5 in index $I30 of file 84886.
    An index entry of index $I30 in file 0x14b99 points to file 0x151ec which is beyond the MFT.
    Deleting index entry 478be31ca8e538769246e22bba3330d81dc3c8 in index $I30 of file 84889.
    An index entry of index $I30 in file 0x14b99 points to file 0x15258 which is beyond the MFT.
    Deleting index entry 66c60c0a0f3253bc9a5112697e4cbb0dfc0c78 in index $I30 of file 84889.
    An index entry of index $I30 in file 0x14b9c points to file 0x15238 which is beyond the MFT.
    Deleting index entry 1c7ceeddc2953496f9ffbfc0b6fb28846e3fe3 in index $I30 of file 84892.
    An index entry of index $I30 in file 0x14b9c points to file 0x15247 which is beyond the MFT.
    Deleting index entry ae6e32ffc49d897d8f8aeced970a90d3653533 in index $I30 of file 84892.
    An index entry of index $I30 in file 0x14ba0 points to file 0x15233 which is beyond the MFT.
    Deleting index entry f71c7d874e45179a32e138b49bf007e5bbf514 in index $I30 of file 84896.
    Index entry 2e04fefbd794f050d45e7a717d009e39204431 of index $I30 in file 0x14ba7 points to unused file 0xd097.
    Deleting index entry 2e04fefbd794f050d45e7a717d009e39204431 in index $I30 of file 84903.
    An index entry of index $I30 in file 0x14baa points to file 0x15241 which is beyond the MFT.
    Deleting index entry 0dda7dec1c635cd646dfef308e403c2843d5dc in index $I30 of file 84906.
    An index entry of index $I30 in file 0x14baa points to file 0x151fc which is beyond the MFT.
    Deleting index entry 98151e654dd546edcfdec630bc82d90619ac8e in index $I30 of file 84906.
    An index entry of index $I30 in file 0x14bb1 points to file 0x151e9 which is beyond the MFT.
    Deleting index entry 1997c5be62ffeebc99253cced7608415e38e4e in index $I30 of file 84913.
    An index entry of index $I30 in file 0x14bb1 points to file 0x1521d which is beyond the MFT.
    Deleting index entry 6bf3aedefd3ac62d9c49cad72d05e8c0ad242c in index $I30 of file 84913.
    An index entry of index $I30 in file 0x14bb1 points to file 0x151f4 which is beyond the MFT.
    Deleting index entry 907b755afdca14c00be0010962d0861af29264 in index $I30 of file 84913.
    An index entry of index $I30 in file 0x14bb3 points to file 0x15218 which is beyond the MFT.
    Deleting index entry

    Read the article

  • SQL SERVER – When are Statistics Updated – What triggers Statistics to Update

    - by pinaldave
    If you are an SQL Server consultant/trainer involved with performance tuning and query optimization, I am sure you have faced the following questions many times: When are statistics updated? What is the interval of statistics updates? What is the algorithm behind updating statistics? These are puzzling questions, and there are more. I searched the Internet as well as many official MS documents in order to find answers. All of them provided almost similar algorithms. However, in many places I have seen a bit of variation in the algorithm as well. I have finally compiled the list of various algorithms and decided to share what was the most common “factor” in all of them. I would like to ask for your suggestions as to whether the following details about when statistics are updated are accurate or not. I will update this blog post with accurate information after receiving your ideas. The answer I have found here is when statistics are expired and not when they are automatically updated. I need your help here to answer when they are updated.

    Permanent table
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.

    Temporary table
    If the table has no rows, statistics are updated when there is a single change in the table.
    If the number of rows in the table is less than 6, statistics are updated for every 6 changes in the table.
    If the number of rows in the table is less than 500, statistics are updated for every 500 changes in the table.
    If the number of rows in the table is more than 500, statistics are updated for every 500 + 20% of rows changes in the table.

    Table variable
    There are no statistics for table variables.

    If you want to read further about statistics, I suggest that you read the white paper Statistics Used by the Query Optimizer in Microsoft SQL Server 2008. Let me know your opinions about statistics, as well as if there is any update in the above algorithm. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, Readers Question, SQL, SQL Authority, SQL Optimization, SQL Performance, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: SQL Statistics
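    A related check you can run yourself: the built-in STATS_DATE function together with sys.stats reports when each statistics object on a table was last updated, which is useful when testing the thresholds above. A small sketch (dbo.YourTable is a placeholder; substitute your own table name):

        SELECT s.name AS stats_name,
               STATS_DATE(s.[object_id], s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.[object_id] = OBJECT_ID('dbo.YourTable');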

    Read the article

  • Open the LOV of af:inputListOfValues with a double click

    - by frank.nimphius
    To open the LOV popup of an af:inputListOfValues component in ADF Faces, you either click the magnifier icon to the right of the input field or tab onto the icon and press the Enter key. If you want to open the same dialog in response to a user double click in the LOV input field, JavaScript is a friend. For this solution, I assume you created an editable table or input form that is based on a View Object containing at least one attribute that has a model-driven list of values defined. The Default List Type should be set to Input Text with List of Values so that when the form or table gets created, the attribute is rendered by the af:inputListOfValues component. To implement the use case, drag a Client Listener component from the Operations accordion in the Component Palette and drop it onto the af:inputListOfValues component in the page. In the opened Insert Client Listener dialog, define the Method as handleLovOnDblclick and choose dblClick in the select list for the Type attribute. Add the following code snippet to the page source directly below the af:document tag. <af:document id="d1">      <af:resource type="javascript">     function handleLovOnDblclick(evt){             var lovComp = evt.getSource();             if (lovComp instanceof AdfRichInputListOfValues &&          lovComp.getReadOnly()==false){           AdfLaunchPopupEvent.queue(lovComp,true);        }     }      </af:resource> The JavaScript function is called whenever the user double-clicks in the LOV field. It gets the source component reference from the event object that is passed into the function and verifies that the LOV component is not read-only. It then queues the launch event for the LOV popup to open. The page source for the LOV component is shown below: <af:inputListOfValues id="departmentIdId" … >   <f:validator binding="…"/>   …  <af:clientListener method="handleLovOnDblclick" type="dblClick"/> </af:inputListOfValues>

    Read the article

  • SQL SERVER – Stored Procedure and Transactions

    - by pinaldave
    I just overheard the following statement – “I do not use Transactions in SQL as I use Stored Procedures“. I just realized that there are many misconceptions about this subject. Transactions have nothing to do with Stored Procedures. Let me demonstrate that with a simple example. USE tempdb GO -- Create 3 Test Tables CREATE TABLE TABLE1 (ID INT); CREATE TABLE TABLE2 (ID INT); CREATE TABLE TABLE3 (ID INT); GO -- Create SP CREATE PROCEDURE TestSP AS INSERT INTO TABLE1 (ID) VALUES (1) INSERT INTO TABLE2 (ID) VALUES ('a') INSERT INTO TABLE3 (ID) VALUES (3) GO -- Execute SP -- SP will error out EXEC TestSP GO -- Check the Values in Table SELECT * FROM TABLE1; SELECT * FROM TABLE2; SELECT * FROM TABLE3; GO Now, the main point is: if a Stored Procedure were transactional, it would roll back the complete transaction when it encounters any error. That does not happen in this case, which proves that a Stored Procedure does not, by itself, add any transactional behavior to a batch of T-SQL. Let’s see the result very quickly. It is very clear that there are entries in TABLE1 that do not appear in the subsequent tables. If the SP were transactional in terms of T-SQL query batches, there would be no entries in any of the tables. If you want to use Transactions with a Stored Procedure, wrap the code with BEGIN TRAN and COMMIT TRAN. The example is as follows. CREATE PROCEDURE TestSPTran AS BEGIN TRAN INSERT INTO TABLE1 (ID) VALUES (11) INSERT INTO TABLE2 (ID) VALUES ('b') INSERT INTO TABLE3 (ID) VALUES (33) COMMIT GO -- Execute SP EXEC TestSPTran GO -- Check the Values in Tables SELECT * FROM TABLE1; SELECT * FROM TABLE2; SELECT * FROM TABLE3; GO -- Clean up DROP TABLE Table1 DROP TABLE Table2 DROP TABLE Table3 GO In this case, there will be no entries in any of the tables. What is your opinion about this blog post? Please leave your comments about it here. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Stored Procedure, SQL Tips and Tricks, T SQL, Technology
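
    As a side note, here is a more defensive sketch of the same idea: wrapping the body in TRY/CATCH makes the rollback explicit instead of depending on how a particular error aborts the batch. The procedure name TestSPTranSafe is hypothetical, not from the original post:

    -- Sketch: explicit rollback via TRY/CATCH around the same inserts.
    CREATE PROCEDURE TestSPTranSafe
    AS
    BEGIN TRY
        BEGIN TRAN
        INSERT INTO TABLE1 (ID) VALUES (21)
        INSERT INTO TABLE2 (ID) VALUES ('c') -- fails with a conversion error
        INSERT INTO TABLE3 (ID) VALUES (23)
        COMMIT TRAN
    END TRY
    BEGIN CATCH
        IF @@TRANCOUNT > 0 ROLLBACK TRAN
        -- Re-raise here if the caller should still see the error
        -- (RAISERROR on pre-2012 versions; THROW from SQL Server 2012 on).
    END CATCH
    GO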

    Read the article

  • SQLAuthority News – Milestone of 1300th Post and A Few Updates

    - by pinaldave
    Today marks my 1300th blog post, and I realize what a long journey this blog has run. I have been writing on this tech blog for a long time, and today I would like to go back and briefly recall the posts that were part of my blog’s history. Read the list of all my blog posts here. This blog started as nothing more than a list of personal bookmarks. I used to just write down scripts on the blog for my own use, and I wrote many scripts here for the servers I was maintaining, to keep them polished. My first blog posts included many links and were just a collection of bookmarks on my very own blog; I had no intention of publishing any content besides the scripts at all. Gradually, I realized that people read my blog and follow the advice that was supposedly meant only for me. I then tried to write code and scripts that are generic in nature, so anyone can use them right away. Nothing is perfect. While writing the last 1299 posts (with 14 million+ views), I have made a few mistakes and accepted a few tweaks. These are corrections that were pointed out by many kind souls and readers like you, which have helped me develop a wonderful blogging experience. I am very glad that I have this blog in which I can express myself. After all, I would not have reached where I am today if I had kept myself from expressing my knowledge and understanding of SQL Server. I am happy that many of you appreciated my efforts and supported me all the way, which also helped me get to where I am now. I promise to learn more about this fascinating subject and, of course, continue to share whatever I learn with my dear readers. Again, I really thank YOU for reading this blog and supporting the SQL community. Reference: Pinal Dave (http://blog.SQLAuthority.com), Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology Tagged: SQL Milestone

    Read the article

  • How to configure Visual Studio 2010 code coverage for ASP.NET MVC unit tests

    - by DigiMortal
    I just got Visual Studio 2010 code coverage to work with ASP.NET MVC application unit tests. Everything is simple after you have spent some time with forums, blogs, and Google. To save your valuable time, I wrote this posting to guide you through the process of making code coverage work with ASP.NET MVC application unit tests. After some fighting with Visual Studio, I got everything to work as expected. I am still not very sure why users must deal with this mess, but okay – I survived it. Before you start configuring Visual Studio, I expect your solution to meet the following needs: there is at least one library that will be tested, there is at least one library that contains the tests to be run, there are some classes and some tests for them, and, of course, you are using a version of Visual Studio 2010 that supports tests (I have Visual Studio 2010 Ultimate). Now open the following screenshot in a separate window and follow the steps given below. Visual Studio 2010 Test Settings window. Click on image to see it at original size.  Double-click Local.testsettings under Solution Items. The test settings window will be opened. Select “Data and Diagnostics” from the left pane. Mark the checkboxes “ASP.NET Profiler” and “Code Coverage”. Move the cursor to the “Code Coverage” line and press the Configure button, or double-click the line. The assemblies selection window will be opened. Mark the checkboxes next to the assemblies you want code coverage reports for and apply the settings. Save your project and close Visual Studio. Run Visual Studio as Administrator and run the tests. NB! Select Test => Run => Tests in Current Context from the menu. When the tests have run, you can open the code coverage results by selecting Test => Windows => Code Coverage Results from the menu. Here you can see my example test results. Visual Studio 2010 Test Results window. All my tests passed this time. :) Click on image to see it at original size.  And here are the code coverage results. Visual Studio 2010 Code Coverage Results. I need a lot more tests for sure. Click on image to see it at original size.  As you can see, everything was pretty simple. But it took me some time to figure out how to get everything to work as expected. Problems? You may face some problems when making code coverage work. Here is my short list of possible problems. Make sure you have all assemblies available for code coverage; in some cases more libraries need to be referenced than you currently have. For example, I had to add some more Enterprise Library assemblies to my project. You can use Event Viewer to discover errors that were logged during testing. Make sure you selected all testable assemblies in the Code Coverage settings, as shown above; otherwise you may get empty results. Tests with code coverage are slower because they require the ASP.NET profiler. If your machine slows down, try to free more resources.

    Read the article

  • SQLAuthority News – Monthly Roundup of Best SQL Posts

    - by pinaldave
    After receiving lots of requests from different readers for a long time, I have decided to write my first monthly round-up. If you all like it, I will continue writing one every month. In fact, I really like the idea, as it let me go back and read all of my posts written this month. The month started with answering one of the most common questions asked of me: What is AdventureWorks? Many of you know the answer, but to my surprise a larger number of readers did not. There were a few extra blog posts along the same line, as follows. SQL SERVER – The Difference between Dual Core vs. Core 2 Duo SQLAuthority News – Wireless Router Security and Attached Devices – Complex Password SQL SERVER – DATE and TIME in SQL Server 2008 DMVs are also among the handiest tools available in SQL Server; I have written the following blog posts where DMVs are used in scripts. SQL SERVER – Get Latest SQL Query for Sessions – DMV SQL SERVER – Find Most Expensive Queries Using DMV SQL SERVER – List All the DMV and DMF on Server I was able to write two follow-ups to my earlier series on finding the size of indexes using different SQL scripts, and in fact one of the articles uses PowerShell as well; this was my very first attempt at using PowerShell. SQL SERVER – Size of Index Table for Each Index – Solution 2 SQL SERVER – Size of Index Table for Each Index – Solution 3 – Powershell SQL SERVER – Four Posts on Removing the Bookmark Lookup – Key Lookup Without realizing it, I wrote a series of blog posts on disabled indexes; here is the complete list. I plan to write one more follow-up on the same. SQL SERVER – Disable Clustered Index and Data Insert SQL SERVER – Understanding ALTER INDEX ALL REBUILD with Disabled Clustered Index SQL SERVER – Disabled Index and Update Statistics Two special posts which I found very interesting to write are as follows. SQL SERVER – SHRINKFILE and TRUNCATE Log File in SQL Server 2008 SQL SERVER – Simple Example of Snapshot Isolation – Reduce the Blocking Transactions On the personal front, I won the Community Impact Award for last year from Microsoft. Please leave your comment about how I can improve this round-up or what more details I should include. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQLAuthority News, T SQL, Technology

    Read the article

  • RegexClean Transformation

    Use the power of regular expressions to cleanse your data right there inside the Data Flow. This transformation includes a full user interface for simple configuration, as well as advanced features such as error output configuration. Two regular expressions are used: a match expression and a replace expression. The transformation is designed around named capture groups (match groups), and even supports multiple expressions. This allows rich and complex expressions to be built, all through an easy-to-reuse transformation where a bespoke Script Component was previously the only alternative. Some simple properties are available for each column selected. Behaviour: the two behaviour modes offer similar functionality, with one difference – Replace replaces matched tokens within the input, while Emit overwrites the whole string. Cascade: cascade allows you to define multiple expressions, each on a new line. The match expression will be processed into one operation per line, and these are processed in order at run-time. Multiple replace expressions can also be specified, again each on a new line; if there is no corresponding replace expression for a match expression line, the last replace expression will be used instead. It is common to have multiple match expressions but only a single replace expression. Match Expression: the expression used to define the named capture groups. This is where you analyse the data and tag or name elements within it, as found by the match expression. Replace Expression: the replace expression determines the final output. It references the named groups from the match expression and assembles them into the final output. If you want to use regular expressions to validate data, try the Regular Expression Transformation instead. Quick Start Guide: Select a column. A new output column is created for each selected column; there is no option for in-place replacement of column values. One input column can be used to populate multiple output columns; just select the column again in the lower grid, using the Input Columns drop-down selector. Amend the output column name and size as required; they default to the same as the input column selected. Amend the behaviour as required; the default is Replace. Amend the cascade option as required; the default is true. Finally, enter your match and replace regular expressions. Quick Sample #1: Parse an email address and extract the user and domain portions, then format the result as a web address, passing the user portion as a URL parameter. This uses two match groups, user and host, which correspond to the text before the @ and after it respectively. Behaviour is Emit, and cascade is false, as we only have a single match expression. Match Expression ^(?<user>[^@]+)@(?<host>.+)$ Replace Expression - http://www.${host}?user=${user} Results – Sample Input: zheng0@adventure-works.com, Sample Output: http://www.adventure-works.com?user=zheng0. The component is provided as an MSI file; however, to complete the installation, you will have to add the transformation to the Visual Studio toolbox manually. Right-click the toolbox, and select Choose Items.... Select the SSIS Data Flow Items tab, and then check the RegexClean Transformation in the list. Downloads: The RegexClean Transformation is available for both SQL Server 2005 and SQL Server 2008. Please choose the version that matches your SQL Server version, or install both versions and use them side by side if you have both SQL Server 2005 and SQL Server 2008 installed. 
RegexClean Transformation for SQL Server 2005 RegexClean Transformation for SQL Server 2008 Version History SQL Server 2005 Version 1.0.0.105 - Public Release (28 Jan 2008)

    Read the article

  • SQL SERVER – Fix Error: Microsoft OLE DB Provider for SQL Server error ’80040e07' or Microsoft SQL Native Client error ’80040e07'

    - by pinaldave
    I quite often receive questions where users are looking for a solution to the following error: Microsoft OLE DB Provider for SQL Server error ’80040e07′ Syntax error converting datetime from character string. OR Microsoft SQL Native Client error ’80040e07′ Syntax error converting datetime from character string. If you have ever faced the above error, I have a very simple solution for you: check the date that is being inserted into the datetime column. This error often comes up when an application or user attempts to enter an incorrect date into a datetime field. Here is one example – one of the readers was using a classic ASP application with the OLE DB provider for SQL Server. When he tried to run the following script, he faced the above-mentioned error. INSERT INTO TestTable (ID, MyDate) VALUES (1, '01-Septeber-2013') The reason for the error was simple: he had misspelled the word September. Upon correcting the word, he was able to insert the value successfully and the error went away. Incorrect values or typos are not the only reason for this error; there can be issues with CAST or CONVERT as well. If you attempt the following code using SQL Native Client or in your application, you will also get a similar error. SELECT CONVERT (datetime, '01-Septeber-2013', 112) The reason here is very simple: a conversion attempt, or any other kind of operation, on an incorrect date/time string can lead to the above error. If you are not using embedded dynamic code in your application language but attempt a similar operation on an incorrect datetime string directly in T-SQL, you will get the following error. Msg 241, Level 16, State 1, Line 2 Conversion failed when converting date and/or time from character string. Remember: check your string values when you are attempting to convert them to datetime – they may contain incorrect values or be incorrectly formatted. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL DateTime, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
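
    A small defensive sketch along the same lines: validate the string before converting it. ISDATE() is available on SQL Server 2005 and later; TRY_CONVERT() (SQL Server 2012 onwards) returns NULL instead of raising an error:

    -- Guard the conversion with ISDATE(); yields NULL for bad strings
    -- instead of a conversion error.
    SELECT CASE WHEN ISDATE('01-Septeber-2013') = 1
                THEN CONVERT(datetime, '01-Septeber-2013')
                ELSE NULL END AS SafeDate;

    -- On SQL Server 2012 and later the same guard is a one-liner:
    -- SELECT TRY_CONVERT(datetime, '01-Septeber-2013'); -- NULL, no error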

    Read the article

  • SQL SERVER – WRITELOG – Wait Type – Day 17 of 28

    - by pinaldave
    WRITELOG is one of the most interesting wait types. So far we have seen a lot of different wait types, but this one is associated with the log file, which makes it interesting to deal with. From Book On-Line: WRITELOG occurs while waiting for a log flush to complete. Common operations that cause log flushes are checkpoints and transaction commits. WRITELOG explanation: this wait type is usually seen in heavily transactional databases. When data is modified, it is written both to the log cache and the buffer cache. This wait type occurs while data in the log cache is being flushed to disk; during this time, the session has to wait due to WRITELOG. I recently saw this wait type persist at a client’s site, where a long-running transaction was stopped by the user, causing it to roll back. In the future, I will see if I can re-create this situation on my machine to validate the relationship. Reducing WRITELOG waits: there are several suggestions to reduce this wait stat: Move the transaction log to a separate disk from the mdf and other files. Avoid cursor-like coding methodology and frequent committing of statements. Find the most active file based on IO stall time using the script written over here. You can also use fn_virtualfilestats to find IO-related issues using the script mentioned over here. Check the IO-related counters (PhysicalDisk: Avg. Disk Queue Length, PhysicalDisk: Disk Read Bytes/sec and PhysicalDisk: Disk Write Bytes/sec) for additional details. Read about them over here. There are two excellent resources by Paul Randal; I suggest you learn the subject from those videos. The links to the videos are here and here. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussion of Wait Stats in this blog is generic and varies from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology
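
    A quick way to see how prominent this wait is on a given server is to query the wait stats DMV directly. A minimal sketch (the numbers are cumulative since the last restart or stats clear):

    -- How much total and signal wait time has WRITELOG accumulated?
    SELECT wait_type, waiting_tasks_count, wait_time_ms, signal_wait_time_ms
    FROM sys.dm_os_wait_stats
    WHERE wait_type = 'WRITELOG';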

    Read the article

  • SQL SERVER – FT_IFTS_SCHEDULER_IDLE_WAIT – Full Text – Wait Type – Day 13 of 28

    - by pinaldave
    In the last few days of this series, I have received many questions about this wait type. It would be great if you read my original wait stats query in the first post, because I have filtered this type out in the WHERE clause. However, I still get questions about it being one of the most frequent wait types people encounter. The truth is, this is background task processing; it really does not matter and it should be filtered out. There are many new wait types related to Full Text Search that were introduced in SQL Server 2008. If you run the following query, you will be able to find them in the list. Currently there is not enough information available about all of them on BOL or anywhere else, but don’t worry; I will write an in-depth article when I learn more about them. SELECT * FROM sys.dm_os_wait_stats WHERE wait_type LIKE 'FT_%' The result set will contain the following rows. FT_RESTART_CRAWL FT_METADATA_MUTEX FT_IFTSHC_MUTEX FT_IFTSISM_MUTEX FT_IFTS_RWLOCK FT_COMPROWSET_RWLOCK FT_MASTER_MERGE FT_IFTS_SCHEDULER_IDLE_WAIT We have understood so far that there is not much information available. But the problem is: when you encounter this wait type, what should you do? The answer is to filter it out for the moment (i.e., do not pay attention to it) and focus on other pressing issues in wait stats or performance tuning. Here are two of my informal suggestions, which are totally independent of wait stats: Turn off the Full Text Search service on your system if you are not actually using it on your server. Learn proper Full Text Search methodology; you can get Michael Coles’ book: Pro Full-Text Search in SQL Server 2008. Now I invite you to share your suggestions or any input regarding Full Text-related best practices and this wait stats issue. Please leave a comment. Note: The information presented here is from my experience and there is no way that I claim it to be accurate. I suggest reading Book OnLine for further clarification. All the discussions of Wait Stats in this blog are generic and vary from system to system. It is recommended that you test this on a development server before implementing it on a production server. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQL Wait Stats, SQL Wait Types, T SQL, Technology

    Read the article

  • An increase to 3 Gig of RAM slows down Ubuntu 10.04 LTS

    - by williepabon
    I have Ubuntu 10.04 running from an external hard drive (installed in an enclosure) connected via a USB port. About a month ago, I increased the RAM on my PC from 2 GB to 3 GB. This resulted in extremely long boot times and slow application loads. While trying to understand the nature of my problem, I posted various threads on this forum (questions #188417 and #188801), where I was advised to gather speed tests and other info on my machine. It was also suggested that I might have problems with the installed RAM. Initially, I did not consider that possibility because: 1) I ran a memory test with a diagnostic program from Dell (my PC is from Dell); 2) my PC works fine with Windows XP (the default OS), with no memory problems; 3) my PC works fine when booting Ubuntu 10.10 from a memory stick, with no speed problems; 4) my PC works fine when booting Ubuntu 11.10 from a memory stick, with no speed problems. Anyway, I performed the memory tests suggested. But before doing so, and to rule out any hardware issues with the hard drive, I did the following: (1) purchased a new hard drive enclosure and moved my hard drive to it, (2) purchased a new USB cable and used it to connect my hard drive/enclosure setup to a different USB port on my PC. Then I performed speed tests with 1 GB, 2 GB, and 3 GB of RAM under Ubuntu 10.04. Ubuntu 10.04 worked well when booted with 1 GB or 2 GB of RAM; when I increased to 3 GB, it slowed down to a crawl. I can't understand the relationship between an increase of 1 GB and the effect it has on Ubuntu 10.04. This doesn't happen with Ubuntu 10.10 or 11.10. Unfortunately for me, Ubuntu 10.04 is my principal work operating system, so I need a solution for this. Hardware and system information: DELL Precision 670, 2 internal SATA hard drives, Audigy 2 ZS audio system, factory OS: Windows XP Professional SP3, NVidia 8400 GTS video card. More info: williepabon@WP-WrkStation:~$ uname -a Linux WP-WrkStation 2.6.32-38-generic #83-Ubuntu SMP Wed Jan 4 11:13:04 UTC 2012 i686 GNU/Linux williepabon@WP-WrkStation:~$ lsb_release -a No LSB modules are available. Distributor ID: Ubuntu Description: Ubuntu 10.04.4 LTS Release: 10.04 Codename: lucid Speed test with the 3 GB of RAM installed: williepabon@WP-WrkStation:~$ sudo hdparm -tT /dev/sdc [sudo] password for williepabon: /dev/sdc: Timing cached reads: 84 MB in 2.00 seconds = 41.96 MB/sec Timing buffered disk reads: 4 MB in 3.81 seconds = 1.05 MB/sec This is a very slow transfer rate from a hard drive. I would really appreciate a solution or a workaround for this problem. I know there are users running Ubuntu 10.04 with 3 GB or more of RAM who don't have this problem. The same question was asked on Launchpad for reference.

    Read the article

  • SQL SERVER – Introduction to Adaptive ETL Tool – How adaptive is your ETL?

    - by pinaldave
    I am often reminded of the fact that BI/data warehousing infrastructure is very brittle and not very adaptive to change. There are lots of basic use cases where data needs to be frequently loaded into SQL Server or another database. What I have found is that as long as the sources and targets stay the same, SSIS (or any other ETL tool, for that matter) does a pretty good job handling these types of scenarios. But what happens when you are faced with more challenging scenarios, where the data formats and possibly the data types of the source data change from customer to customer? Let’s examine a real-life situation where a health management company receives claims data from its customers in various source formats. Even though this company supplied all its customers with the same claims forms, it ended up building one-off ETL applications to process the claims for each customer. Why, you ask? Well, it turned out that the claims data from the various regional hospitals they needed to process had slightly different data formats, e.g. “integer” versus “string” data field definitions. Moreover, the data itself was represented with slight nuances, e.g. “0001124” or “1124” or “0000001124” to represent a particular account number, which forced them, as I alluded to above, to build new ETL processes for each customer in order to overcome the inconsistencies in the various claims forms. As a result, they experienced a lot of redundancy in these ETL processes and quickly recognized that their system would become more difficult to maintain over time. So imagine for a moment that you could use an ETL tool that helps you abstract the data formats so that your ETL transformation process becomes more reusable. Imagine that one claims form represents a data item as a string – acc_no(varchar) – while a second claims form represents the same data item as an integer – account_no(integer). This would break your traditional ETL process, as the data mappings are hard-wired. But in a world of abstracted definitions, all you need to do is create parallel data mappings to a common data representation used within your ETL application; that is, map both external data fields to a common attribute whose name and type remain unchanged within the application. acc_no(varchar) is mapped to account_number(integer) [expressor Studio: first claim form schema mapping] account_no(integer) is also mapped to account_number(integer) [expressor Studio: second claim form schema mapping] All the data processing logic that follows manipulates the data as an integer value named account_number. Well, these are the kinds of problems that the expressor data integration solution automates for you. I’ve been following them since last year and encourage you to check them out by downloading their free expressor Studio ETL software. Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Business Intelligence, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL, SSIS
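
    To make the idea of a canonical attribute concrete, here is a tiny T-SQL illustration. This is not expressor’s actual mechanism, just a sketch of the mapping concept: both source shapes collapse onto one integer attribute, so downstream logic sees a single name and type.

    -- '0001124' (varchar) and 1124 (integer) normalize to the same
    -- canonical integer attribute, account_number.
    SELECT CAST('0001124' AS int) AS account_number -- leading zeros drop away
    UNION ALL
    SELECT 1124;                                    -- already an integer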

    Read the article

< Previous Page | 463 464 465 466 467 468 469 470 471 472 473 474  | Next Page >