Search Results

Search found 31902 results on 1277 pages for 'sql backup'.


  • Simple script to get referenced tables and their column names

    - by Peter Larsson
        -- Setup user supplied parameters
        DECLARE @WantedTable SYSNAME

        SET     @WantedTable = 'Sales.factSalesDetail'

        -- Wanted table is "parent table"
        SELECT      PARSENAME(@WantedTable, 2) AS ParentSchemaName,
                    PARSENAME(@WantedTable, 1) AS ParentTableName,
                    cp.Name AS ParentColumnName,
                    OBJECT_SCHEMA_NAME(parent_object_id) AS ChildSchemaName,
                    OBJECT_NAME(parent_object_id) AS ChildTableName,
                    cc.Name AS ChildColumnName
        FROM        sys.foreign_key_columns AS fkc
        INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.parent_column_id
                        AND cc.object_id = fkc.parent_object_id
        INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.referenced_column_id
                        AND cp.object_id = fkc.referenced_object_id
        WHERE       referenced_object_id = OBJECT_ID(@WantedTable)

        -- Wanted table is "child table"
        SELECT      OBJECT_SCHEMA_NAME(referenced_object_id) AS ParentSchemaName,
                    OBJECT_NAME(referenced_object_id) AS ParentTableName,
                    cc.Name AS ParentColumnName,
                    PARSENAME(@WantedTable, 2) AS ChildSchemaName,
                    PARSENAME(@WantedTable, 1) AS ChildTableName,
                    cp.Name AS ChildColumnName
        FROM        sys.foreign_key_columns AS fkc
        INNER JOIN  sys.columns AS cp ON cp.column_id = fkc.parent_column_id
                        AND cp.object_id = fkc.parent_object_id
        INNER JOIN  sys.columns AS cc ON cc.column_id = fkc.referenced_column_id
                        AND cc.object_id = fkc.referenced_object_id
        WHERE       parent_object_id = OBJECT_ID(@WantedTable)
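    To run the script against a different table, only the assignment at the top changes; the table name below is hypothetical and simply illustrates the two-part 'schema.table' format that PARSENAME expects:

        -- Hypothetical target table; substitute any schema-qualified table in your database
        SET @WantedTable = 'Sales.SalesOrderHeader'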

    Read the article

  • Where is a good place to start to learn about custom caching in .Net

    - by John
    I'm looking to make some performance enhancements to our site, but I'm not sure exactly where to begin. We have some custom object caching, but I think that we can do better.

    Our business: We aggregate news stories on a news type of web site. We get approximately 500-1000 new stories per week. We have index pages that show various lists of the items and details pages that show the individual stories.

    Our current use case, getting an individual story: The user makes a request. The Data Access Layer (DAL) checks to see if the item is in cache and if the item is fresh (15 minutes). If the item is not in cache or is not fresh, retrieve the item from SQL Server, save it to cache and return it to the user.

    Problems with this approach: The pull nature of caching means that users have to pay the waiting cost every time the cache is refreshed. Once a story is published, it changes infrequently, and I think that we should replace the pull model with something better.

    My initial thought is that stories should ALL be stored locally in some type of dictionary (Cache, or is there another, better way?). If the story is not found, then make a trip to the database, update the local dictionary and send the item back. Since there may be occasional updates to stories, this should be an entirely transparent process from the user's perspective. I watched a video by Brent Ozar, How StackOverflow Scales SQL Server, in which Brent states "the fastest database query is the one that you don't make".

    Where do I start? At this point, I don't know exactly what the solution is. Is it caching? Is there a better way of using local storage? Do I use a Dictionary, OrderedDictionary, or List? It seems daunting and I'm just looking for some good starting points to learn more about how to do this type of optimization.

    Read the article

  • Geek City: Clearing Plans for a Single Database

    - by Kalen Delaney
    I know Friday afternoon isn't the best time for blogging, as everyone is going home now, and by Monday morning this post will be old news. But I'm not shutting down just yet, and something came up this week that I just realized not everybody knew about, so I decided to blog it. Many (or most?) of you are aware that you can clear all cached plans using DBCC FREEPROCCACHE. In addition, there are certain configuration options for which changing their values will cause all plans in cache to be removed....(read more)
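    The excerpt is truncated before the per-database technique, so here is a hedged sketch of the commands involved (not necessarily the exact approach the full article takes): DBCC FREEPROCCACHE clears the whole plan cache, the undocumented DBCC FLUSHPROCINDB targets a single database, and SQL Server 2016 and later add a supported database-scoped alternative.

        -- Clear every cached plan on the instance (documented; affects all databases)
        DBCC FREEPROCCACHE;

        -- Clear plans for one database only (undocumented and unsupported; use with care)
        DECLARE @dbid INT = DB_ID('AdventureWorks');  -- hypothetical database name
        DBCC FLUSHPROCINDB(@dbid);

        -- On SQL Server 2016+ the supported, database-scoped equivalent is:
        -- ALTER DATABASE SCOPED CONFIGURATION CLEAR PROCEDURE_CACHE;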

    Read the article

  • Presenting at Roanoke Code Camp Saturday!

    - by andyleonard
    Introduction I am honored to once again be selected to present at Roanoke Code Camp! An Introductory Topic One of my presentations is titled "I See a Control Flow Tab. Now What?" It's a Level 100 talk for those wishing to learn how to build their very first SSIS package. This highly-interactive, demo-intense presentation is for beginners and developers just getting started with SSIS. Attend and learn how to build SSIS packages from the ground up. Designing an SSIS Framework I'm also presenting...(read more)

    Read the article

  • Could my 64-bit server be somehow identifying itself as a 32-bit server?

    - by Deane
    Has anyone ever heard of a 64-bit OS identifying itself as a 32-bit OS? We have a Windows Server 2008 R2 x64 development server. We've been trying to activate it with a product key from MSDN, but it keeps telling us the key is invalid. I've opened a ticket with MSDN for this. Then something odd happened -- I tried to install a 64-bit version of SQL Server 2005. After it extracted, we got this message: This version of hotfix.exe is not compatible with the version of Windows you're running. Check your computer's system information to see whether you need an x86 (32-bit) or x64 (64-bit) version of the program... Now, we're pretty sure this is a 64-bit OS. Computer Properties says: System Type: 64-bit Operating System. Also, we have both a "Program Files" and a "Program Files (x86)" directory. I don't know how the product key activator or the SQL install program attempts to divine the type of OS, but could it be...wrong?

    Read the article

  • How to Manage Technical Employees

    - by Ajarn Mark Caldwell
    In my current position as Software Engineering Manager I have been through a lot of ups and downs with staffing, ranging from laying-off everyone who was on my team as we went through the great economic downturn in 2007-2008, to numerous rounds of interviewing and hiring contractors, full-time employees, and converting some contractors to employee status.  I have not yet blogged much about my experiences, but I plan to do that more in the next few months.  But before I do that, let me point you to a great article that somebody else wrote on The Unspoken Truth About Managing Geeks that really hits the target.  If you are a non-technical person who manages technical employees, you definitely have to read that article.  And if you are a technical person who has been promoted into management, this article can really help you do your job and communicate up the line of command about your team.  When you move into management with all the new and different demands put on you, it is easy to forget how things work in the tech subculture, and to lose touch with your team.  This article will help you remember what’s going on behind the scenes and perhaps explain why people who used to get along great no longer are, or why things seem to have changed since your promotion. I have to give credit to Andy Leonard (blog | twitter) for helping me find that article.  I have been reading his series of ramble-rants on managing tech teams, and the above article is linked in the first rant in the series, entitled Goodwill, Negative and Positive.  I have read a handful of his entries in this series and so far I pretty much agree with everything he has said, so of course I would encourage you to read through that series, too.

    Read the article

  • Your Next IT Job

    - by BuckWoody
    Some data professionals have worked (and plan to work) in the same place for a long time. In organizations large and small, the turnover rate just isn’t that high. This has not been my experience. About every 3-5 years I’ve changed either roles or companies. That might be due to the IT environment or my personality (or a mix of the two), but the point is that I’ve had many roles and worked for many companies large and small throughout my 27+ years in IT. At one point this might have been a detriment – a prospective employer looks at the resume and says “it seems you’ve moved around quite a bit.” But I haven’t found that to be the case all the time – in fact, in some cases the variety of jobs I’ve held has been an asset because I’ve seen what works (and doesn’t) in other environments, which can save time and money. So if you’re in the first camp – great! Stay where you are, and continue doing the work you love. But if you’re in the second, then this post might be useful.

    If you are planning on making a change, or perhaps you’ve hit a wall at your current location, you might start looking around for a better paying job – and there’s nothing wrong with that. We all try to make our lives better, and for some that involves more money. Money, however, isn’t always the primary motivator. I’ve gone to another job that doesn’t have as many benefits or the same salary as my current job in order to gain more experience, get a better work/life balance and so on. It’s a mix of factors that only you know about. So I thought I would lay out a few advantages and disadvantages of the shops I’ve worked at. This post isn’t aimed at a single employer, but represents a mix of what I’ve experienced, and of course the opinions here are my own. You will most certainly have a different take – if so, please post a response! I also won’t mention a specific industry – I’ve worked everywhere from medical firms, legal offices, retail, billing centers, manufacturing, government, even to NASA. I’m focusing here mostly on size and composition. And I’m making some very broad generalizations here – I am fully aware that a small company might have great benefits and a large company might allow a lot of role flexibility. Your mileage may vary – and again, post those comments!

    Small Company: To me a “small company” means around 100 people or less – sometimes a lot less. These can be really fun, frustrating places to work. Advantages: a great deal of flexibility, a wide range of roles (often at the same time), a large degree of responsibility, immediate feedback, close relationships with co-workers, work directly with your customer. Disadvantages: Too much responsibility, little work/life balance, immature political structure, few (if any) benefits. If the business is family-owned, they can easily violate work/life boundaries.

    Medium Size Company: In my experience the next size company I would work for involves from a few hundred people to around five thousand. Advantages: Good mobility – fairly easy to get promoted, acceptable benefits, more defined responsibilities, better work/life balance, balanced load for expertise, but still an organizational structure that is fairly simple to understand. Disadvantages: Pay is not always highest, rapid changes in structure as the organization grows, transient workforce. You may not be given the opportunity to work with another technology if someone already “owns” it. Politics are painful at this level as people try to learn how to do it.

    Large Company: When you get into the tens of thousands of folks employed around the world, you’re in a large company. Advantages: Lots of room to move around – sometimes you can work (as I have) multiple jobs through the years and yet stay at the same company, building time for benefits, very defined roles, trained managers (yes, I know some of them are still awful – trust me – I DO know that), higher-end benefits, long careers possible, discounts at retailers and other “soft” benefits, prestige. For some, a higher level of politics (done professionally) is a good thing. Disadvantages: You could become another faceless name in the crowd, it might not allow a great deal of flexibility, and large organizational changes might take away any control you have of your career. I’ve also seen large layoffs happen, and good people get let go while “dead weight” is retained. For some, a higher level of politics is distasteful.

    So what are your experiences? Share with the group!

    Read the article

  • Oracle Index Skip Scan

    - by jchang
    There is a feature called index skip scan that has been in Oracle since version 9i. When I came across this, it seemed like a very clever trick, but not a critical capability. More recently, I have been advocating DW on SSD in appropriate situations, and I am thinking this is now a valuable feature in keeping the number of nonclustered indexes to a minimum. Briefly, suppose we have an index with key columns: Col1, Col2, in that order. Obviously, a query with a search argument (SARG) on Col1 can use...(read more)
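    The excerpt is cut off, but the scenario it sets up can be sketched in T-SQL (table and column names are hypothetical). With a composite index keyed on (Col1, Col2), a predicate on Col1 can seek in any engine, while a predicate on Col2 alone cannot seek on that index in SQL Server; you would normally scan or add a second index, which is exactly the case Oracle's skip scan addresses by probing the index once per distinct Col1 value:

        -- Hypothetical table and composite index, for illustration only
        CREATE TABLE dbo.SalesFact (Col1 INT NOT NULL, Col2 INT NOT NULL, Amount MONEY NOT NULL);
        CREATE INDEX IX_SalesFact_Col1_Col2 ON dbo.SalesFact (Col1, Col2);

        -- SARG on the leading key column: an index seek in any engine
        SELECT Col1, Col2 FROM dbo.SalesFact WHERE Col1 = 42;

        -- SARG on the second key column only: no seek on this index in SQL Server,
        -- but Oracle's skip scan can still use it, avoiding a separate index on (Col2)
        SELECT Col1, Col2 FROM dbo.SalesFact WHERE Col2 = 42;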

    Read the article

  • Less Useful Soft Skills

    - by andyleonard
    Introduction This post is the fifty-sixth part of a ramble-rant about the software business. The current posts in this series can be found on the series landing page . Over a career that spans decades, one encounters useful and “less useful” soft skills in the modern enterprise. I thought I would share a few of the less useful variety: Free Advice If someone asks another for advice, that’s a cool compliment. The person asking has seen something that compels them to seek information about how-another-does-or-sees-things....(read more)

    Read the article

  • Indexed view deadlocking

    - by Dave Ballantyne
    Deadlocks can be a really tricky thing to track down the root cause of.  There are lots of articles on the subject of tracking down deadlocks, but seldom do I find that in a production system the cause is as straightforward.  That being said, deadlocks are always caused by process A needing a resource that process B has locked while process B needs a resource that process A has locked.  There may be a longer chain of processes involved, but that is the basic premise. Here is one such (much simplified) scenario whose cause was at first non-obvious:

    The system has two tables, Product and Stock.  The Product table holds the description and price of a product whilst Stock records the current stock level.

        USE tempdb
        GO
        CREATE TABLE Product
        (
            ProductID INTEGER IDENTITY PRIMARY KEY,
            ProductName VARCHAR(255) NOT NULL,
            Price MONEY NOT NULL
        )
        GO
        CREATE TABLE Stock
        (
            ProductId INTEGER PRIMARY KEY,
            StockLevel INTEGER NOT NULL
        )
        GO
        INSERT INTO Product
        SELECT TOP(1000) CAST(NEWID() AS VARCHAR(255)),
               ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
        FROM sys.columns a CROSS JOIN sys.columns b
        GO
        INSERT INTO Stock
        SELECT ProductID, ABS(CAST(CAST(NEWID() AS VARBINARY(255)) AS INTEGER)) % 100
        FROM Product

    There is a single stored procedure, GetStock:

        CREATE PROCEDURE GetStock
        AS
        SELECT Product.ProductID, Product.ProductName
        FROM dbo.Product
        JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
        WHERE Stock.StockLevel <> 0

    Analysis of the system showed that this procedure was causing a performance overhead and, as reads of this data were many times more frequent than writes, an indexed view was created to lower the overhead.

        CREATE VIEW vwActiveStock
        WITH SCHEMABINDING
        AS
        SELECT Product.ProductID, Product.ProductName
        FROM dbo.Product
        JOIN dbo.Stock ON Stock.ProductId = Product.ProductID
        WHERE Stock.StockLevel <> 0
        GO
        CREATE UNIQUE CLUSTERED INDEX PKvwActiveStock ON vwActiveStock (ProductID)

    This worked perfectly: performance was improved, the team was cheered to the rafters, and beers all round.  Then, after a while, something else happened…

    The system updating the data changed.  The update pattern of both the Stock update and the Product update used to be:

        BEGIN TRAN
        UPDATE ...
        COMMIT
        BEGIN TRAN
        UPDATE ...
        COMMIT
        BEGIN TRAN
        UPDATE ...
        COMMIT

    It changed to:

        BEGIN TRAN
        UPDATE ...
        UPDATE ...
        UPDATE ...
        COMMIT

    Nothing that would raise an eyebrow in even the closest of code reviews.  But after this change we saw deadlocks occurring.

    You can reproduce this by opening two sessions. In session 1:

        BEGIN TRANSACTION
        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998

    Then in session 2:

        BEGIN TRANSACTION
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999
        UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998

    Hop back to session 1 and…

        UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999

    Looking at the deadlock graphs we could see the contention was between two processes, one updating Stock and the other updating Product, but we knew that all the processes do to the tables is update them.  Period.  There are separate processes that handle the update of Stock and of Product and never the twain shall meet; there is no reason why one should require data from the other.  Then it struck us: ah, the indexed view.  Naturally, when you make an update to any table involved in an indexed view, the view has to be updated.  When this happens, the data in all the tables has to be read, so that explains our deadlocks: the data from Stock is read when you update Product and vice versa.

    The fix, once you understand the problem fully, is pretty simple: the apps did not guarantee the order in which data was updated.  Luckily it was a relatively simple fix to order the updates, and the deadlocks went away.  Note that there is still a *slight* risk of a deadlock occurring if both a stock update and a product update occur at *exactly* the same time.
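    The post does not show the corrected application code, so here is a minimal sketch of the fix it describes: if every transaction touches rows in the same key order, the circular wait can no longer form (except in the *exactly* simultaneous case noted above).

        -- Both updating processes now touch ProductIDs in ascending order,
        -- so concurrent transactions acquire the indexed view's key locks in the
        -- same sequence and cannot each hold a key the other needs next.
        BEGIN TRAN
            UPDATE Stock SET StockLevel = 5 WHERE ProductID = 998
            UPDATE Stock SET StockLevel = 5 WHERE ProductID = 999
        COMMIT

        BEGIN TRAN
            UPDATE Product SET ProductName = 'Test' WHERE ProductID = 998
            UPDATE Product SET ProductName = 'Test' WHERE ProductID = 999
        COMMIT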

    Read the article

  • 24 Hours of PASS: 15 Powerful Dynamic Management Objects - Deck and Demos

    - by Adam Machanic
    Thank you to everyone who attended today's 24 Hours of PASS webcast on Dynamic Management Objects! I was shocked, awed, and somewhat scared when I saw the attendee number peak at over 800. I really appreciate your taking time out of your day to listen to me talk. It's always interesting presenting to people I can't see or hear, so I relied on Twitter for a form of nearly real-time feedback. I would like to especially thank everyone who left me tweets both during and after the presentation. Your feedback...(read more)

    Read the article

  • The updated Survey pattern for Power Pivot and Tabular #powerpivot #tabular #ssas #dax

    - by Marco Russo (SQLBI)
    One of the first models I created for the many-to-many revolution white paper was the Survey one. At the time, it was in Analysis Services Multidimensional, and then we implemented it in Analysis Services Tabular and in Power Pivot, using the DAX language. I recently reviewed the data model and published it in the Survey article on the DAX Patterns site. The Survey pattern is the foundation for others, such as the Basket Analysis, and it is widely used in many different business scenarios. I was particularly happy to learn it has been used to perform data analysis for cancer research! In this article I did some maintenance on the DAX formulas, checking that the proper error handling is part of the formulas, and highlighting some differences in slicer behavior between Excel 2010 and Excel 2013, which could be particularly important for the Survey scenario. As usual, we provide sample workbooks for both Excel 2010 and Excel 2013, and we use DAX Formatter to make the DAX code easier to read. Any feedback will be appreciated!

    Read the article

  • Possible SWITCH Optimization in DAX – #powerpivot #dax #tabular

    - by Marco Russo (SQLBI)
    In one of the Advanced DAX Workshops I taught this year, I had an interesting discussion about how to optimize a SWITCH statement (which is frequently used to check a slicer, as in the Parameter Table pattern). Let’s start with the problem. What happens when you have such a statement?

        Sales :=
        SWITCH (
            VALUES ( Period[Period] ),
            "Current", [Internet Total Sales],
            "MTD", [MTD Sales],
            "QTD", [QTD Sales],
            "YTD", [YTD Sales],
            BLANK ()
        )

    The SWITCH statement is in reality just syntactic sugar for a nested IF statement. When you place such a measure in a pivot table, for every cell of the pivot table the IF options are evaluated. In order to optimize performance, the DAX engine usually does not compute cell-by-cell, but tries to compute the values in bulk-mode. However, if a measure contains an IF statement, every cell might have a different execution path, so the current implementation might evaluate all the possible IF branches in bulk-mode, so that for every cell the result from one of the branches will already be available in a pre-calculated dataset. The price for that could be high. If you consider the previous Sales measure, the YTD Sales measure could be evaluated for all the cells where it’s not required, and also when YTD is not selected at all in a Pivot Table. The actual optimization made by the DAX engine could be different in every build, and I expect newer builds of Tabular and Power Pivot to be better than older ones. However, we still don’t live in an ideal world, so it could be better trying to help the engine find a better execution plan. One student (Niek de Wit) proposed this approach:

        Selection :=
        IF (
            HASONEVALUE ( Period[Period] ),
            VALUES ( Period[Period] )
        )

        Sales :=
        CALCULATE (
            [Internet Total Sales],
            FILTER (
                VALUES ( 'Internet Sales'[Order Quantity] ),
                'Internet Sales'[Order Quantity]
                    = IF ( [Selection] = "Current", 'Internet Sales'[Order Quantity], -1 )
            )
        )
            + CALCULATE (
                [MTD Sales],
                FILTER (
                    VALUES ( 'Internet Sales'[Order Quantity] ),
                    'Internet Sales'[Order Quantity]
                        = IF ( [Selection] = "MTD", 'Internet Sales'[Order Quantity], -1 )
                )
            )
            + CALCULATE (
                [QTD Sales],
                FILTER (
                    VALUES ( 'Internet Sales'[Order Quantity] ),
                    'Internet Sales'[Order Quantity]
                        = IF ( [Selection] = "QTD", 'Internet Sales'[Order Quantity], -1 )
                )
            )
            + CALCULATE (
                [YTD Sales],
                FILTER (
                    VALUES ( 'Internet Sales'[Order Quantity] ),
                    'Internet Sales'[Order Quantity]
                        = IF ( [Selection] = "YTD", 'Internet Sales'[Order Quantity], -1 )
                )
            )

    At first sight, you might think it’s impossible that this approach could be faster. However, if you examine with the profiler what happens, there is a different story. Every original IF execution branch is now a separate CALCULATE statement, which applies a filter that does not execute the required measure calculation if the result of the FILTER is empty.

    I used the ‘Internet Sales’[Order Quantity] column in this example just because in Adventure Works it has only one value (every row has 1): in the real world, you should use a column that has a very low number of distinct values, or a column that has the same value for every row (so it will be compressed very well!). Because the value –1 is never used in this column, the IF comparison in the filter discards all the values iterated in the filter if the selection does not match the desired value. I hope to have time in the future to write a longer article about this optimization technique, but in the meantime I’ve seen it be useful in many other implementations. Please write your feedback if you find scenarios (in both Power Pivot and Tabular) where you obtain performance improvements using this technique!

    Read the article

  • Passionate about Microsoft Technology - Help raise money for Cystic Fibrosis Foundation

    - by Testas
    I need your help! Please sign up to help our team raise $10,000 for the Cystic Fibrosis Foundation. Simply become a team member (a bit like a fan) and you will be helping our team earn points and advance in the race to raise the money for charity. If you can tick any of the boxes below then we need your help: Already Microsoft Certified? Hold a MCP/MCSA/MCSE/MCT/TS/MCITP? Want to help sufferers from the most common genetically inherited disease? Passionate about Microsoft Technology? Like to Blog, Tweet, email, connect! Enjoy the thrill of the race! Follow the Born To Learn Blog? Join our blue team and help us become the leader of the race. So please sign in with your Live ID which is associated with your MCP account and register with us - also take a look at the blue forums - we are building up some cool info! http://bit.ly/blueteam  or  http://borntolearn.mslearn.net/prix/p/index.aspx Please blog and let people know about this! Regards Chris

    Read the article

  • Hyper-V cluster VS regular cluster

    - by Sasha
    We need to choose between Hyper-V and regular cluster technologies. What are the advantages and disadvantages of these approaches? Update: We have two physical servers and want to build a reliable solution using a cluster approach. We need to cluster our application and DB (MS SQL). We know that we can use: Regular Windows Cluster Service - the application and DB will migrate from one node to the other. Hyper-V Failover Cluster - the virtual machine will migrate from one node to the other. A combined variant - DB mirroring for MS SQL and Hyper-V for our application. We need to make a choice between these approaches, so we need to know their advantages and disadvantages.
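    For the third option mentioned (database mirroring for MS SQL), a minimal sketch of the T-SQL involved, assuming two servers named SQLNODE1 and SQLNODE2 and a database called AppDB (the names and port are hypothetical); a full setup also requires the FULL recovery model, restoring the database WITH NORECOVERY on the mirror, and granting the service accounts access to the endpoints:

        -- Run on each server: create a database mirroring endpoint
        CREATE ENDPOINT Mirroring
            STATE = STARTED
            AS TCP (LISTENER_PORT = 5022)
            FOR DATABASE_MIRRORING (ROLE = PARTNER);

        -- On the mirror (SQLNODE2), point at the principal
        ALTER DATABASE AppDB SET PARTNER = 'TCP://SQLNODE1:5022';

        -- On the principal (SQLNODE1), point at the mirror
        ALTER DATABASE AppDB SET PARTNER = 'TCP://SQLNODE2:5022';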

    Read the article

  • Providing a public Web-Hosting service on MS Windows - advice and resources

    - by crgnz
    Are there any resources offering advice on how to setup a microsoft based web-hosting service? I currently offer LAMP hosting with cPanel, but there is some demand for IIS & SQL Server. As far as I can tell MS Windows Web Server 2008 R2 edition allows unlimited IIS connections. And a per-processor license for MS SQL Server Web Edition 2008 also permits unlimited connections. Where I am falling down is that I can't figure out how to get "unlimited" Active Directory users. I can't use 2008R2 Web Server edition for AD, so I will need the 2008R2 standard edition, I think. Does Microsoft have a provision for using AD in an ISP scenario? I am looking at using the cPanel Enkompas system to manage the Windows software, and Enkompas requires AD for user authentication. Any advice would be greatly appreciated!

    Read the article

  • Intel Server Strategy Shift with Sandy Bridge EN & EP

    - by jchang
    The arrival of the Sandy Bridge EN and EP processors, expected in early 2012, will mark the completion of a significant shift in Intel server strategy. For the longest time (1995-2009), the strategy had been to focus on producing a premium processor designed for 4-way systems that might also be used in 8-way systems and higher. The objective for 2-way systems was to use the desktop processor, which later had a separate brand and a different package & socket, to leverage the low cost structure in driving...(read more)

    Read the article

  • Dynamic Ranking with Excel and PowerPivot

    - by AlbertoFerrari
    Ranking is useful and, in our book, Marco and I provide a lot of information about how to perform ranking with PowerPivot. Nevertheless, there is an interesting scenario where ranking can be performed without complex DAX formulas, but with just some creative Excel usage. I would like to describe it here. Let us start with some words about the scenario: we want to rank products based on sales in a year (e.g. 2002) and see how the top 10 of these products performed in the following or preceding years....(read more)
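    The article's approach is pure Excel and PowerPivot (not reproduced here); purely as a plain T-SQL analogue of the same idea, with a hypothetical Sales(ProductID, OrderYear, Amount) table, ranking products by one year's sales and then following that top 10 across other years could look like this:

        -- Rank products by 2002 sales, then show how that top 10 performed in every year
        WITH Ranked AS
        (
            SELECT ProductID,
                   RANK() OVER (ORDER BY SUM(Amount) DESC) AS SalesRank
            FROM   dbo.Sales
            WHERE  OrderYear = 2002
            GROUP  BY ProductID
        )
        SELECT s.ProductID, s.OrderYear, SUM(s.Amount) AS YearlySales, r.SalesRank
        FROM   dbo.Sales AS s
        JOIN   Ranked    AS r ON r.ProductID = s.ProductID
        WHERE  r.SalesRank <= 10
        GROUP  BY s.ProductID, s.OrderYear, r.SalesRank
        ORDER  BY r.SalesRank, s.OrderYear;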

    Read the article

  • MicroTraining: Executing SSIS 2012 Packages 22 May 10:00 AM EDT (Free!)

    - by andyleonard
    I am pleased to announce the latest (free!) Linchpin People microtraining event will be held Tuesday 22 May 2012 at 10:00 AM EDT. The topic will be Executing SSIS 2012 Packages. In this presentation, I will be demonstrating several ways to execute SSIS 2012 packages. Register here! Interested in learning about more microtraining from Linchpin People – before anyone else? Sign up for our newsletter! :{>...(read more)

    Read the article

  • Presenting at PASS Summit 2011!

    - by andyleonard
    Introduction I am honored to be presenting at the PASS Summit 2011 11-14 Oct 2011 in Seattle! This year, I was selected to present a regular session and a pre-conference session. The pre-con is going to be fun. It’s a team effort with Tim Mitchell ( Blog | @Tim_Mitchell | SQLPeople ) and – even though he isn’t listed as a presenter – Matt Masson ( Blog | @mattmasson ). Like me, Tim’s been using SSIS since it was released; and Matt’s on the SSIS developer team at Microsoft – he helps build SSIS! Our...(read more)

    Read the article

  • Windows 8 SDK and Orca

    - by John Paul Cook
    The Windows 8 SDK has a new version of Orca for those of us who edit msi files. The download is for a small executable, sdksetup.exe, which causes the following dialog box to appear. If you only want Orca and you don’t want to install the SDK, override the default and download all of the files to the location of your choice. In this example, the files are downloaded to D:\Media\Windows8\SDK. Figure 1: Downloading the Windows 8 SDK to D:\Media\Windows8\SDK instead of installing it. Click the Download...(read more)

    Read the article

  • Oracle Linked Servers on Windows Server 2008 R2

    - by John Paul Cook
    Oracle hasn’t yet released versions of its client software for Windows Server 2008 R2. If you need to create an Oracle linked server, that’s a problem. You’ll see this installation block when attempting to install the Oracle client software for Windows Server 2008: It’s very simple to fix. Check the first checkbox to make the installer ignore the version check. Click Next and ignore the warning you’ll see. The installation should complete successfully. Windows does offer various strategies for mitigating...(read more)
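    Once the client does install, the linked server itself takes only a couple of system procedures; a minimal sketch, assuming the Oracle OLE DB provider (OraOLEDB.Oracle) and a TNS alias named ORADEV (the provider name, alias, and credentials are assumptions, not details from the article):

        -- Create the linked server against a TNS alias defined in tnsnames.ora
        EXEC sp_addlinkedserver
             @server     = N'ORADEV_LINK',   -- name used in four-part queries
             @srvproduct = N'Oracle',
             @provider   = N'OraOLEDB.Oracle',
             @datasrc    = N'ORADEV';        -- TNS alias (assumption)

        -- Map SQL Server logins to an Oracle account (credentials are placeholders)
        EXEC sp_addlinkedsrvlogin
             @rmtsrvname  = N'ORADEV_LINK',
             @useself     = 'FALSE',
             @rmtuser     = N'scott',
             @rmtpassword = N'tiger';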

    Read the article

  • Extracting GPS Data from JPG files

    - by Peter W. DeBetta
    I have been very remiss in posting lately. Unfortunately, much of what I do now involves client work that I cannot post. Fortunately, someone asked me how he could get a formatted list (e.g. tab-delimited) of files with GPS data from those files. He also added the constraint that this could not be a new piece of software (company security) and had to be scriptable. I did some searching around, and found some techniques for extracting GPS data, but was unable to find a complete solution. So, I did...(read more)

    Read the article

  • Virtual Lab part 2 – Templates, Patterns, Baselines

    - by Geoff N. Hiten
    Once you have a good virtualization platform chosen, whether it is a desktop, server or laptop environment, the temptation is to build “X”.  “X” may be a SharePoint lab, a Virtual Cluster, an AD test environment or some other cool project that you really need RIGHT NOW.  That would be doing it wrong.

    My grandfather taught woodworking and cabinetmaking for twenty-seven years at a trade school in Alabama.  He was the first instructor hired at that school and the only teacher for the first two years.  His students built tables, chairs, and workbenches so the school could start its HVAC courses.  Visiting as a child, I also noticed many extra “helper” stands, benches, holders, and gadgets all built from wood.  What does that have to do with a virtual lab, you ask?  Well, that is the same approach you should take.  Build stuff that you will use.  Not for solving a particular problem, but to let the Virtual Lab be part of your normal troubleshooting toolkit.

    Start with basic copies of various Operating Systems.  Load and patch server and desktop OS environments.  This also helps build your collection of ISO files, another essential element of a virtual lab.  Once you have these “baseline” images, you can use your virtualization software’s snapshot capability to freeze the image.  Clone the snapshot and you have a brand new fully patched machine in mere moments.  You may have to sysprep some of the Microsoft OS environments if you are going to create a domain environment or experiment with clustering.  That is still much faster than loading and patching from scratch.

    So once you have a stock of raw materials (baseline images in this case), where should you start?  Again, my grandfather’s workshop gives us the answer.  In the shop it was workbenches and tables to hold large workpieces that made the equipment more useful.  In a Windows environment the same role falls to the fundamental network services:  DHCP, DNS, Active Directory, Routing, File Services, and Storage services.  Plan your internal network setup.  Build out an AD controller with all the features listed.  Make the actual domain an isolated domain so it will not care about where you take it.  Add the Microsoft iSCSI target.  Once you have this single system, you can leverage it for almost any network environment beyond a simple stand-alone system.

    Having these templates and fundamental infrastructure elements ready to run means I can build a quick lab in minutes instead of hours.  My solutions are well-tested, my processes fully documented with screenshots, and my plans validated well before I have to make any changes to client systems.  The work I put in is easily returned in increased value and client satisfaction.

    Read the article

  • Upgrade SSIS 2005 Packages to SSIS 2008

    There are several enhancements in SSIS 2008 such as enhanced lookup transformation, the development environment for Script Task and Script Component changing from VSA to VSTA, etc. If you intend to upgrade your SSIS 2005 packages to SSIS 2008 ... [Read Full Article]

    Read the article
