Search Results

  • PowerShell PowerPack Download

    - by BuckWoody
    I read Jeffery Hicks’ article in this month’s Redmond Magazine on a new add-in for Windows PowerShell 2.0. It’s called the PowerShell Pack and it has some great new features that I plan to put into place on my production systems as soon as I’ve finished learning and testing them. You can download the pack here if you have PowerShell 2.0. I’m having a lot of fun with it, and I’ll blog about what I’m learning here in the near future, but you should check it out. The only issue I have with it right now is that you have to load a module and then use get-help to find out what it does, because I haven’t found a lot of other documentation so far. The most interesting modules for me are the ones that can run a command elevated (in PSUserTools), the task scheduling commands (in TaskScheduler) and the file system checks and tools (in FileSystem). There’s also a way to create simple Graphical User Interface panels (in WPK). I plan to string all these together to install a management set of tools on my SQL Server Express instances, giving the user “task buttons” to backup or restore a database, add or delete users and so on. Yes, I’ll be careful, and yes, I’ll make sure the user is allowed to do that. For now, I’m testing the download, but I thought I would share what I’m up to. If you have PowerShell 2.0 and you download the pack, let me know how you use it. Script Disclaimer, for people who need to be told this sort of thing: Never trust any script, including those that you find here, until you understand exactly what it does and how it will act on your systems. Always check the script on a test system or Virtual Machine, not a production system. Yes, there are always multiple ways to do things, and this script may not work in every situation, for everything. It’s just a script, people. All scripts on this site are performed by a professional stunt driver on a closed course. Your mileage may vary. Void where prohibited. Offer good for a limited time only. Keep out of reach of small children. Do not operate heavy machinery while using this script. If you experience blurry vision, indigestion or diarrhea during the operation of this script, see a physician immediately.

    Read the article

  • Have you really fixed that problem?

    - by DavidWimbush
    The day before yesterday I saw our main live server's CPU go up to a constant 100%, with just the occasional short drop to a lower level. The exact opposite of what you'd want to see. We're log shipping every 15 minutes and part of that involves calling WinRAR to compress the log backups before copying them over. (We're on SQL 2005 so there's no native compression, and we have bandwidth issues with the connection to our remote site.) I realised the log shipping jobs were taking about 10 minutes and that most of that was spent shipping a 'live' reporting database that is completely rebuilt every 20 minutes. (I'm just trying to keep this stuff alive until I can improve it.) We can rebuild this database in minutes if we have to fail over, so I disabled log shipping of that database. The log shipping went down to less than 2 minutes and I went off to the SQL Social evening in London feeling quite pleased with myself. It was a great evening - fun, educational and thought-provoking. Thanks to Simon Sabin & co for laying that on, and thanks too to the guests for making the effort when they must have been pretty worn out after doing DevWeek all day first. The next morning I came down to earth with a bump: CPU still at 100%. WTF? I looked in the activity monitor, but it was confusing because some sessions had been running for a long time, so it's not a good guide to what's using the CPU now. I tried the standard reports showing queries by CPU (average and total), but they only show the top 10, so they just showed my big overnight archiving and data cleaning stuff. But Profiler showed it was four queries used by our new website usage tracking system. Four simple indexes later, the CPU was back where it should be: about 20% with occasional short spikes. So the moral is: even when you're convinced you've found the cause and fixed the problem, you HAVE to go back and confirm that the problem has gone. And, yes, I have checked the CPU again today and it's still looking sweet.
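
    Incidentally, the top-10 limit of the standard reports is easy to work around by querying the plan cache directly. A minimal sketch, assuming SQL 2005 or later (adjust TOP and the ORDER BY to taste; the statement-offset arithmetic is the usual boilerplate):

    -- Top CPU consumers since their plans were cached (not limited to 10 rows)
    SELECT TOP (25)
        qs.total_worker_time / 1000 AS total_cpu_ms,
        qs.execution_count,
        qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
        SUBSTRING(st.text, (qs.statement_start_offset / 2) + 1,
            ((CASE qs.statement_end_offset
                  WHEN -1 THEN DATALENGTH(st.text)
                  ELSE qs.statement_end_offset END
              - qs.statement_start_offset) / 2) + 1) AS statement_text
    FROM sys.dm_exec_query_stats AS qs
    CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
    ORDER BY qs.total_worker_time DESC;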

    Read the article

  • July, the 31 Days of SQL Server DMO’s – Day 25 (sys.dm_db_missing_index_details)

    - by Tamarick Hill
    The sys.dm_db_missing_index_details Dynamic Management View is used to return information about missing indexes on your SQL Server instances. These are indexes the optimizer has identified as ones it would like to have used but did not have. You may also see these same indexes indicated in other tools such as query execution plans or the Database Tuning Advisor. Let’s execute this DMV so we can review the information it provides us. I do not have any missing index information for my AdventureWorks2012 database, but for the purposes of illustrating the result set of this DMV, I will present the results from my msdb database. SELECT * FROM sys.dm_db_missing_index_details The first column presented is the index_handle, which uniquely identifies a particular missing index. The next two columns represent the database_id and the object_id for the particular table in question. Next is the ‘equality_columns’ column, which gives you a comma-separated list of columns that would be beneficial to the optimizer for equality operations. By equality operation I mean any query that uses a filter or join condition such as WHERE A = B. The next column, ‘inequality_columns’, gives you a comma-separated list of columns that would be beneficial to the optimizer for inequality operations. An inequality operation is anything other than A = B. For example, “WHERE A != B”, “WHERE A > B”, “WHERE A < B”, and “WHERE A <> B” would all qualify as inequality. Next is the ‘included_columns’ column, which lists all columns that would be beneficial to the optimizer for purposes of providing a covering index and preventing key/bookmark lookups. Lastly is the ‘statement’ column, which lists the name of the table where the index is missing. This DMV can help you identify potential indexes that could be added to improve the performance of your system. However, I will advise you not to just take the output of this DMV and create an index for everything you see. Everything listed here should be analyzed and then tested on a Development or Test system before implementing into a Production environment. For more information on this DMV, please see the Books Online link below: http://msdn.microsoft.com/en-us/library/ms345434.aspx Follow me on Twitter @PrimeTimeDBA
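
    To turn the handles into something actionable, this DMV is normally joined to its two companion DMVs to rank the suggestions. A hedged sketch (the join columns are the documented ones; the ORDER BY expression is a common weighting heuristic, not an official score):

    SELECT mid.statement AS table_name,
        mid.equality_columns, mid.inequality_columns, mid.included_columns,
        migs.user_seeks, migs.avg_total_user_cost, migs.avg_user_impact
    FROM sys.dm_db_missing_index_details AS mid
    INNER JOIN sys.dm_db_missing_index_groups AS mig
        ON mig.index_handle = mid.index_handle
    INNER JOIN sys.dm_db_missing_index_group_stats AS migs
        ON migs.group_handle = mig.index_group_handle
    ORDER BY migs.user_seeks * migs.avg_total_user_cost * migs.avg_user_impact DESC;

    The same advice applies: treat the output as candidates to test, not indexes to create blindly.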

    Read the article

  • Application Performance: The Best of the Web

    - by Michaela Murray
    Wisdom: A deep understanding and realization […] resulting in the ability to apply perceptions, judgements and actions. It is also the comprehension of what is true coupled with optimum judgment as to action. - Wikipedia

    We’re writing a book for ASP.NET developers, and we want you to be a part of it. We know that there’s a huge amount of web developer wisdom that never gets shared, and we want to find those golden nuggets of knowledge and experience, and make sure everyone can learn from them. Right now, we want to find out about your top tips, hard-won lessons, and sage advice for avoiding, finding, and fixing application performance problems. If you work with .NET and SQL, even better – a lot of application performance relies on the interaction with the database, so we want to hear from you! “How Do You Want Me To Be Involved?” Right! Details! We want you, our most excellent readers, to email us with the Best Advice you would give to other developers for getting the best performance out of their applications. It doesn’t matter if your advice is for newbies or veterans, .NET or SQL – so long as it’s about application performance, we want to hear from you. (And if you think that there’s developer wisdom out there that “everyone knows”, a) I’m willing to bet you could find someone who doesn’t know about it, and b) it probably bears repeating anyway!) “I’m Interested. What Can You Do For Me?” Excellent question. For starters, there’s a chance to win a Microsoft Surface (the tablet, not the table-top). Once all the ASP.NET Wisdom has been collected, tallied, and labelled, it will then be weighed and measured by a team of expert judges (whose identities are still a closely-guarded secret). The top tip in both SQL & .NET categories will each win their author their very own MS Surface. But that’s not all! We can also give you… immortality! More details? Ok. We’ll be collecting all of the tips sent in by our readers (and we can’t wait to learn from you all), and with the help of our Simple-Talk editors, we will publish and distribute your combined and documented knowledge as a free, community-created, professionally typeset eBook. You will naturally be credited by name / pseudonym / twitter handle / GitHub username / StackOverflow profile / Whatever, as the clearly ingenious author of hot performance tips. The Not-Very-Fine Print Here’s the breakdown: We want to bring together the best application performance knowledge from ASP.NET developers. Closing date for submissions will be 9am GMT, December 4th. Submissions should be made by email – [email protected] Submissions will be judged by a panel of expert judges (who will be revealed soon). The top submission in both the SQL & .NET categories will each win a Microsoft Surface. ALL the tips which make it through the judging process will be polished by Simple-Talk editors, and turned into a professionally typeset eBook, which will be freely available, and promoted alongside the ANTS Performance Profiler tool. Anyone whose entry makes it into the book will be clearly and profusely credited in the method of their choice (or can remain anonymous). The really REALLY short version: Share what you know about ASP.NET application performance for a chance to win a Microsoft Surface, and then get your name credited in a slick eBook with top-notch production values. For more details, see above. We can’t wait to learn from you!

    Read the article

  • Docky have stopped working since update

    - by Fraekkert
    I'm running Ubuntu 10.10 64-bit. I'm using the Docky PPA, and since the latest update it won't start. If I run it from the terminal, this is what I get:

    [Info 09:21:19.005] Docky version: 2.1.0 bzr docky r1761 ppa
    [Info 09:21:19.024] Kernel version: 2.6.35.24
    [Info 09:21:19.026] CLR version: 2.0.50727.1433
    [Debug 09:21:19.493] [UserArgs] BufferTime = 0
    [Debug 09:21:19.494] [UserArgs] MaxSize = 2147483647
    [Debug 09:21:19.494] [UserArgs] NetbookMode = False
    [Debug 09:21:19.494] [UserArgs] NoPollCursor = False
    [Debug 09:21:19.528] [SystemService] Using org.freedesktop.UPower for battery information
    [Info 09:21:19.564] [ThemeService] Setting theme: Transparent
    [Debug 09:21:19.587] [DesktopItemService] Loading remap file '/usr/share/docky/remaps.ini'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'Picasa3.exe' to 'picasa'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'nbexec' to 'netbeans'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'deja-dup-preferences' to 'deja-dup'.
    [Debug 09:21:19.599] [DesktopItemService] Remapping 'VirtualBox' to 'virtualbox'.
    [Warn 09:21:19.600] [DesktopItemService] Could not find remap file '/home/lasse/.local/share/docky/remaps.ini'!
    [Debug 09:21:19.602] [DesktopItemService] Loading desktop item cache '/home/lasse/.cache/docky/docky.desktop.en_DK.utf8.cache'.
    [Info 09:21:20.101] [DockServices] Dock services initialized.
    [Debug 09:21:20.134] [DBusManager] DBus Registered: org.gnome.Docky
    [Debug 09:21:20.142] [DBusManager] DBus Registered: net.launchpad.DockManager
    Stacktrace:
    at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062>
    at (wrapper managed-to-native) System.IO.MonoIO.Read (intptr,byte[],int,int,System.IO.MonoIOError&) <IL 0x00012, 0x00062>
    at System.IO.FileStream.ReadData (intptr,byte[],int,int) <IL 0x00009, 0x00047>
    at System.IO.FileStream.RefillBuffer () <IL 0x0001c, 0x0002b>
    at System.IO.FileStream.ReadByte () <IL 0x00079, 0x000c7>
    at Mono.Addins.Serialization.BinaryXmlReader.ReadNext () <IL 0x0000b, 0x00031>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x0003c, 0x00053>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f>
    at Mono.Addins.Serialization.BinaryXmlReader.Skip () <IL 0x00047, 0x0005f>

    And this .Skip () continues infinitely, and very fast. I've tried cleaning the cache and reinstalling Docky, but without luck.

    Read the article

  • Failing report subscriptions

    - by DavidWimbush
    We had an interesting problem while I was on holiday. (Why doesn't this stuff ever happen when I'm there?) The sysadmin upgraded our Exchange server to Exchange 2010 and everyone's subscriptions stopped. My Subscriptions showed an error message saying that the email address of one of the recipients was invalid. When you create a subscription, Reporting puts your Windows user name into the To field, and most users have no permission to edit it. By default, Reporting leaves it up to Exchange to resolve that into an email address. This only works if Exchange is set up to translate aliases or 'short names' into email addresses. It turns out this leaves Exchange open to being used as a relay, so it is disabled out of the box. You now have three options:
    1. Open up Exchange. That would be bad.
    2. Give all Reporting users the ability to edit the To field in a subscription. a) They shouldn't have to - it should just work. b) They don't really have any business subscribing anyone but themselves.
    3. Fix the report server to add the domain. This looks like the right choice and it works for us. See below for details.
    Pre-requisites: a single email domain name, and a clear relationship between the Windows user name and the email address. E.g. if the user name is joebloggs, then joebloggs@domainname needs to be the email address or an alias of it.
    Warning: saving changes to the rsreportserver.config file will restart the Report Server service, which effectively takes Reporting down for around 30 seconds. Time your action accordingly.
    Edit the file rsreportserver.config (most probably in the folder ..\Program Files[ (x86)]\Microsoft SQL Server\MSRS10_50[.instancename]\Reporting Services\ReportServer). There's a setting called DefaultHostName which is empty by default. Enter your email domain name without the leading '@' and save the file. This domain name will be appended to any destination addresses that don't have a domain name of their own.
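
    For reference, the edited fragment of rsreportserver.config should look something like this (the element name is as described above; the domain value is a placeholder):

    <!-- Appended to any recipient address that has no domain of its own -->
    <DefaultHostName>yourdomain.com</DefaultHostName>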

    Read the article

  • SQL Server Date Comparison Functions

    - by HighAltitudeCoder
    A few months ago, I found myself working with a repetitive cursor that looped until the data had been manipulated enough times that it was finally correct. The cursor was heavily dependent upon dates, every time requiring the earlier of two (or several) dates in one stored procedure, while requiring the later of two dates in another stored procedure. In short, what I needed was a function that would allow me to perform the following evaluation:

    WHERE MAX(Date1, Date2) < @SomeDate

    The problem is, the MAX() function in SQL Server does not perform this functionality. So, I set out to put these functions together. They are titled EarlierOf() and LaterOf().

    /**********************************************************
                          EarlierOf.sql
    **********************************************************/
    /**********************************************************
    Return the earlier of two DATETIME variables.

    Parameter 1: DATETIME1
    Parameter 2: DATETIME2

    Works for a variety of DATETIME or NULL values.
    Even though comparisons with NULL are actually
    indeterminate, we know conceptually that NULL is not
    earlier or later than any other date provided.

    SYNTAX:
    SELECT dbo.EarlierOf('1/1/2000','12/1/2009')
    SELECT dbo.EarlierOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
    SELECT dbo.EarlierOf('11/15/2000',NULL)
    SELECT dbo.EarlierOf(NULL,'1/15/2004')
    SELECT dbo.EarlierOf(NULL,NULL)
    **********************************************************/
    USE AdventureWorks
    GO

    IF EXISTS
        (SELECT *
        FROM sysobjects
        WHERE name = 'EarlierOf'
        AND xtype = 'FN')
    BEGIN
        DROP FUNCTION EarlierOf
    END
    GO

    CREATE FUNCTION EarlierOf
    (
        @Date1 DATETIME,
        @Date2 DATETIME
    )
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @ReturnDate DATETIME

        IF (@Date1 IS NULL AND @Date2 IS NULL)
        BEGIN
            SET @ReturnDate = NULL
            GOTO EndOfFunction
        END
        ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
        BEGIN
            SET @ReturnDate = @Date2
            GOTO EndOfFunction
        END
        ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
        BEGIN
            SET @ReturnDate = @Date1
            GOTO EndOfFunction
        END
        ELSE
        BEGIN
            SET @ReturnDate = @Date1
            IF @Date2 < @Date1
                SET @ReturnDate = @Date2
            GOTO EndOfFunction
        END

        EndOfFunction:
        RETURN @ReturnDate

    END -- End Function
    GO

    ---- Set Permissions
    --GRANT SELECT ON EarlierOf TO UserRole1
    --GRANT SELECT ON EarlierOf TO UserRole2
    --GO

    The inverse of this function is only slightly different.

    /**********************************************************
                          LaterOf.sql
    **********************************************************/
    /**********************************************************
    Return the later of two DATETIME variables.

    Parameter 1: DATETIME1
    Parameter 2: DATETIME2

    Works for a variety of DATETIME or NULL values.
    Even though comparisons with NULL are actually
    indeterminate, we know conceptually that NULL is not
    earlier or later than any other date provided.

    SYNTAX:
    SELECT dbo.LaterOf('1/1/2000','12/1/2009')
    SELECT dbo.LaterOf('2009-12-01 00:00:00.000','2009-12-01 00:00:00.521')
    SELECT dbo.LaterOf('11/15/2000',NULL)
    SELECT dbo.LaterOf(NULL,'1/15/2004')
    SELECT dbo.LaterOf(NULL,NULL)
    **********************************************************/
    USE AdventureWorks
    GO

    IF EXISTS
        (SELECT *
        FROM sysobjects
        WHERE name = 'LaterOf'
        AND xtype = 'FN')
    BEGIN
        DROP FUNCTION LaterOf
    END
    GO

    CREATE FUNCTION LaterOf
    (
        @Date1 DATETIME,
        @Date2 DATETIME
    )
    RETURNS DATETIME
    AS
    BEGIN
        DECLARE @ReturnDate DATETIME

        IF (@Date1 IS NULL AND @Date2 IS NULL)
        BEGIN
            SET @ReturnDate = NULL
            GOTO EndOfFunction
        END
        ELSE IF (@Date1 IS NULL AND @Date2 IS NOT NULL)
        BEGIN
            SET @ReturnDate = @Date2
            GOTO EndOfFunction
        END
        ELSE IF (@Date1 IS NOT NULL AND @Date2 IS NULL)
        BEGIN
            SET @ReturnDate = @Date1
            GOTO EndOfFunction
        END
        ELSE
        BEGIN
            SET @ReturnDate = @Date1
            IF @Date2 > @Date1
                SET @ReturnDate = @Date2
            GOTO EndOfFunction
        END

        EndOfFunction:
        RETURN @ReturnDate

    END -- End Function
    GO

    ---- Set Permissions
    --GRANT SELECT ON LaterOf TO UserRole1
    --GRANT SELECT ON LaterOf TO UserRole2
    --GO

    The interesting thing about this function is its simplicity and the built-in NULL handling functionality. It's interesting because it seems like something should already exist in SQL Server that does this. From a different vantage point, if you create this functionality and it is easy to use (ideally, intuitively self-explanatory), you have made a successful contribution. Interesting is good. Self-explanatory, or intuitive, is FAR better. Happy coding! Graeme
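
    A quick usage sketch showing the original predicate rewritten with the new function (table, column and variable names are illustrative):

    DECLARE @SomeDate DATETIME
    SET @SomeDate = '2010-01-01'

    SELECT SomeKey
    FROM dbo.SomeTable            -- hypothetical table with two date columns
    WHERE dbo.LaterOf(Date1, Date2) < @SomeDate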

    Read the article

  • Good DBAs Do Baselines

    - by Louis Davidson
    One morning, you wake up and feel funny. You can’t quite put your finger on it, but something isn’t quite right. What now? Unless you happen to be a hypochondriac, you likely drag yourself out of bed, get on with the day and gather more “evidence”. You check your symptoms over the next few days; do you feel the same, better, worse? If better, then great, it was some temporal issue, perhaps caused by an allergic reaction to some suspiciously spicy chicken. If the same or worse then you go to the doctor for some health advice, but armed with some data to share, and having ruled out certain possible causes that are fixed with a bit of rest and perhaps an antacid. Whether you realize it or not, in comparing how you feel one day to the next, you have taken baseline measurements. In much the same way, a DBA uses baselines to gauge the health of their database servers. Of course, while SQL Server is very willing to share data regarding its health and activities, it has almost no idea of the difference between good and bad. Over time, experienced DBAs develop “mental” baselines with which they can gauge the health of their servers almost as easily as their own body. They accumulate knowledge of the daily, natural state of each part of their database system, and so know instinctively when one of their databases “feels funny”. Equally, they know when an “issue” is just a passing tremor. They see their SQL Server with all of its four CPU cores running close to 100% and don’t panic anymore. Why? It’s 5PM and every day the same thing occurs when the end-of-day reports, which are very CPU intensive, are running. Equally, they know when they need to respond in earnest when it is the first time they have heard about an issue, even if it has been happening every day. Nevertheless, no DBA can retain mental baselines for every characteristic of their systems, so we need to collect physical baselines too. In my experience, surprisingly few DBAs do this very well. Part of the problem is that SQL Server provides a lot of instrumentation. If you look, you will find an almost overwhelming amount of data regarding user activity on your SQL Server instances, and use and abuse of the available CPU, I/O and memory. It seems like a huge task even to work out which data you need to collect, let alone start collecting it on a regular basis, managing its storage over time, and performing detailed comparative analysis. Without baselines, though, it is very difficult to pinpoint what ails a server just by looking at a single snapshot of the data, or to spot retrospectively what caused the problem by examining aggregated data for the server, collected over many months. It isn’t as hard as you think to get started. You’ve probably already established some troubleshooting queries of the type SELECT Value FROM SomeSystemTableOrView. Capturing a set of baseline values for such a query can be as easy as changing it as follows: INSERT into BaseLine.SomeSystemTable (value, captureTime) SELECT Value, SYSDATETIME() FROM SomeSystemTableOrView; Of course, there are monitoring tools that will collect and manage this baseline data for you, automatically, and allow you to perform comparison of metrics over different periods. However, to get yourself started and to prove to yourself (or perhaps the person who writes the checks for tools) the value of baselines, stick something similar to the above query into an agent job, running every hour or so, and you are on your way with no excuses!
Then, the next time you investigate a slow server, and see x open transactions, y users logged in, and z rows added per hour in the Orders table, compare to your baselines and see immediately what, if anything, has changed!
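
    As a concrete starting point, here is a minimal sketch of that pattern using sys.dm_os_wait_stats as the example source (any system view you already query will do; the schema and table names are illustrative):

    CREATE SCHEMA BaseLine;
    GO
    CREATE TABLE BaseLine.WaitStats
    (
        wait_type           NVARCHAR(60),
        waiting_tasks_count BIGINT,
        wait_time_ms        BIGINT,
        captureTime         DATETIME2 NOT NULL
    );
    GO
    -- Put this in an agent job running every hour or so:
    INSERT INTO BaseLine.WaitStats (wait_type, waiting_tasks_count, wait_time_ms, captureTime)
    SELECT wait_type, waiting_tasks_count, wait_time_ms, SYSDATETIME()
    FROM sys.dm_os_wait_stats;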

    Read the article

  • How to update off screen bitmap in a surfaceview thread

    - by DKDiveDude
    I have a SurfaceView thread and an off-screen texture bitmap. Each frame, the first row (line) is regenerated (changed) and then everything is copied one position (line) down on the regular SurfaceView bitmap to make a scrolling effect, and I then continue to draw other things on top of that. Well, that is what I really want, but I can't get it to work even though I am creating a separate canvas for the off-screen bitmap. It is just not scrolling at all. In other words, I have a memory bitmap, the same size as the SurfaceView canvas, which I need to scroll (shift) down one line every frame, then replace the top line with a new random texture, and then draw that on the regular SurfaceView canvas. Here is what I thought would work. My surfaceCreated method, where I specify the bitmap and canvases and start the thread:

    @Override
    public void surfaceCreated(SurfaceHolder holder) {
        intSurfaceWidth = mSurfaceView.getWidth();
        intSurfaceHeight = mSurfaceView.getHeight();
        memBitmap = Bitmap.createBitmap(intSurfaceWidth, intSurfaceHeight, Bitmap.Config.ARGB_8888);
        memCanvas = new Canvas(memCanvas);
        myThread = new MyThread(holder, this);
        myThread.setRunning(true);
        blnPause = false;
        myThread.start();
    }

    My thread, only showing the essential middle running part:

    @Override
    public void run() {
        while (running) {
            c = null;
            try {
                // Lock canvas for drawing
                c = myHolder.lockCanvas(null);
                synchronized (mSurfaceHolder) {
                    // First draw off screen bitmap to off screen canvas one line down
                    memCanvas.drawBitmap(memBitmap, 0, 1, null);
                    // Create random one line(row) texture bitmap
                    memTexture = Bitmap.createBitmap(imgTexture, 0, rnd.nextInt(intTextureImageHeight), intSurfaceWidth, 1);
                    // Now add this texture bitmap to top of off screen canvas and hopefully bitmap
                    memCanvas.drawBitmap(textureBitmap, intSurfaceWidth, 0, null);
                    // Draw above updated off screen bitmap to regular canvas, at least I thought it would
                    // update (save changes) shifting down and add the texture line to off screen bitmap
                    // the off screen canvas was pointing to.
                    c.drawBitmap(memBitmap, 0, 0, null);
                    // Other drawing to canvas comes here
                }
            } finally {
                // do this in a finally so that if an exception is thrown
                // during the above, we don't leave the Surface in an
                // inconsistent state
                if (c != null) {
                    myHolder.unlockCanvasAndPost(c);
                }
            }
        }
    }

    This is for my game Tunnel Run. Right now I have a working solution where I instead have an array of bitmaps, the size of the surface height, that I populate with my random texture and then shift down in a loop for each frame. I get 50 frames per second, but I think I can do better by instead scrolling the bitmap.

    Read the article

  • Pay in the future should make you think in the present

    - by BuckWoody
    Distributed computing - and more importantly “-as-a-Service” models of computing - have a different cost model. This sounds obvious on the surface, but it’s often forgotten during the design and coding phase of a project. In on-premises computing, we’re used to purchasing a server and all of the hardware infrastructure and software licenses needed not only for one project, but several. This is an up-front or “sunk” cost that we consume by running code the organization needs to perform its function. Using a direct connection over wires you’ve already paid for, we don’t often have to think about bandwidth, hits on the data store or the amount of compute we use - we just know more is better. In a pay-as-you-go model, however, each of these architecture decisions has a potential cost impact. The amount of data you store, the number of times you access it, and the amount you send back all come with a charge. The offset is that you don’t buy anything at all up-front, so that sunk cost is freed up. And financial professionals know that money now is worth more than money later. Saving that up-front cost allows you to invest it in other things. It’s not just that you’re now using things that cost money - it’s that the design itself in distributed computing has a cost impact. That can be a really good thing, such as when you dynamically add capacity for paying customers. If you can tie the cost of a series of clicks back to what a user will pay to perform them, you can set a profit margin that is easy to track. Here’s a case in point: assume you are using a large instance in Windows Azure to compute some data that you retrieve from a SQL Azure database. If you don’t monitor the path of the application, you may not know what you are really using. Since you’re paying by the size of the instance, it’s best to make full use of it all the time. Recently I evaluated just this situation, and found that downsizing the instance and adding another one where needed, adding a caching function to the application, and moving part of the data into Windows Azure tables not only increased the speed of the application, but reduced the cost and more closely tied the cost to the profit. The key is this: from the very outset - the design - make sure you include metrics to measure the cost/performance (sometimes these are the same) of your application. Windows Azure opens up awesome new ways of doing things, so make sure you study distributed systems architecture before you try to force the application design you have on-premises into your new application structure.

    Read the article

  • Java EE 8 update

    - by delabassee
    Planning for Java EE 8 is now well underway. As you know, a few weeks ago we conducted a three-part Java EE 8 Community Survey (you can find the final summary here). The data gathered have been very influential for the next steps. Over the coming weeks and months you can expect to see updates on the various specifications that compose the Java EE platform. Some Specification Leads are busy gathering additional feedback regarding what they should focus their efforts on (e.g. the CDI 2 survey). Other Specification Leads have already publicly exposed what they think should be one of the focuses for the evolution of the specification they lead. For example, adding Server-Sent Events (SSE) support in JAX-RS is being discussed here, and adding MVC support is being discussed here. Please remember that the fact that we are now discussing any feature does not ensure that it will be included in the proposal, nor in any particular update to Java EE. We can expect additional enhancements, changes and evolutions as we get closer to the finalisation of the different specifications... and there is still a long way to go with these specification proposals! Linda DeMichiel, Java EE Co-Specification Lead, has recently posted a draft proposal for the Java EE 8 Platform specification. Linda's goal is to recruit people and companies supporting this proposal before submitting it to the JCP. This draft proposal is very interesting reading as it contains relevant information on the plans for Java EE 8, such as:
    The themes:
    - Support for the latest web standards (e.g. HTTP 2.0)
    - Continue to work on ease of development
    - Improve the infrastructure for cloud support
    - Alignment with Java SE 8
    New JSRs to be added to the platform:
    - J-Cache
    - Java API for JSON Binding
    - Java Configuration
    Plans for the Web Profile, plans on technologies to prune in Java EE 8, and more. So if you haven't done it yet, I really encourage you to read the Java EE 8 draft proposal! Our goal for the Java EE 8 specification is for it to be finalized in the second half of 2016. It is important to note that we are in the early days of Java EE 8 and at this stage everything (themes, content, timing, etc.) is preliminary. Everything still needs to be discussed, challenged and agreed within the different Java Community Process (JCP) Expert Groups (EGs) - some of which still need to be formed! It could also mean that the roadmap will have to be adjusted to follow the progress being made in the different EGs. This is also a good occasion to remind you that participation within those upcoming JCP Expert Groups is encouraged. Contributing in an EG is an effective lever to influence what Java EE 8 will become! Finally, as things get more concrete, we will share details on how to engage in the different Java EE 8 related Adopt-a-JSR initiatives, another way to contribute. You can also read other posts related to Java EE 8 here at The Aquarium blog - just look for articles with the 'javaee8' tag.

    Read the article

  • Advanced Record-Level Business Intelligence with Inner Queries

    - by gt0084e1
    While business intelligence is generally applied at an aggregate level to large data sets, it's often useful to provide a more streamlined insight into individual records, or to be able to sort and rank them. For instance, a salesperson looking at a specific customer could benefit from basic stats on that account. A marketer trying to define an ideal customer could pull the top entries and look for insights or patterns. Inner queries let you do sophisticated analysis without the overhead of traditional BI or OLAP technologies like Analysis Services.
    Example - Order History Constancy
    Let's assume that management has realized that the best thing for our business is to have customers ordering every month. We'll need to identify and rank customers based on how consistently they buy and when their last purchase was, so sales & marketing can respond accordingly. Our current application may not be able to provide this, and adding an OLAP server like SSAS may be overkill for our needs. Luckily, SQL Server provides the ability to do relatively sophisticated analytics via inner queries. Here's the kind of output we'd like to see.
    Creating the Queries
    Before you create a view, you need to create the SQL query that does the calculations. Here we are calculating the total number of orders as well as the number of months since the last order. These fields might be very useful to sort by but may not be available in the app. This approach provides a very streamlined and high-performance method of delivering actionable information without radically changing the application. It also works very well with self-service reporting tools like Izenda.

    SELECT CustomerID, CompanyName,
        (SELECT COUNT(OrderID) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
        DATEDIFF(mm,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID),
            GETDATE()) AS MonthsSinceLastOrder
    FROM Customers

    Creating Views
    To turn this or any query into a view, just put CREATE VIEW AS before it. If you want to change it, use ALTER VIEW AS.
    Creating Computed Columns
    If you'd prefer not to create a view, inner queries can also be applied by using computed columns. Place your SQL in the (Formula) field of the Computed Column Specification, or check out this article here.
    Advanced Scoring and Ranking
    One of the best uses for this approach is to score leads based on multiple fields. For instance, you may be in a business where customers that don't order every month require more persistent follow-up. You could devise a simple formula that shows the continuity of an account. If they ordered every month since their first order, they would be at 100, indicating that they have been ordering 100% of the time. Here's the query that would calculate that. It uses a few SQL tricks to make this happen: we extract the count of unique months and then divide by the months since the initial order. This query will give you the following information, which can be used to help sales and marketing know where to focus. You could sort by this percentage to know where to start calling, or to find patterns describing your best customers.
    - Number of orders
    - First order date
    - Last order date
    - Percentage of months in which an order was placed since the first order
    SELECT CustomerID,
        (SELECT COUNT(OrderID) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
        (SELECT MAX(OrderDate) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS LastOrder,
        (SELECT MIN(OrderDate) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS FirstOrder,
        DATEDIFF(mm,
            (SELECT MIN(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID),
            GETDATE()) AS MonthsSinceFirstOrder,
        100 * (SELECT COUNT(DISTINCT 100 * DATEPART(yy, OrderDate) + DATEPART(mm, OrderDate))
               FROM Orders
               WHERE Orders.CustomerID = Customers.CustomerID)
            / DATEDIFF(mm,
                (SELECT MIN(OrderDate) FROM Orders
                 WHERE Orders.CustomerID = Customers.CustomerID),
                GETDATE()) AS OrderPercent
    FROM Customers
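
    To make that scoring query reusable from reporting tools, wrap it in a view exactly as described above. A short sketch (the view name is illustrative):

    CREATE VIEW dbo.CustomerOrderStats AS
    SELECT CustomerID, CompanyName,
        (SELECT COUNT(OrderID) FROM Orders
         WHERE Orders.CustomerID = Customers.CustomerID) AS Orders,
        DATEDIFF(mm,
            (SELECT MAX(OrderDate) FROM Orders
             WHERE Orders.CustomerID = Customers.CustomerID),
            GETDATE()) AS MonthsSinceLastOrder
    FROM Customers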

    Read the article

  • When a problem is resolved

    - by Rob Farley
    This month’s T-SQL Tuesday is hosted by Jen McCown, and she’s picked the topic of Resolutions. It’s a new year, and she’s thinking about what people have resolved to do this year. Unfortunately, I’ve never really done resolutions like that. I see too many people resolve to quit smoking, or lose weight, or whatever, and fail miserably. I’m not saying I don’t set goals, but it’s not a thing for New Year. The obvious joke is “1920x1080” as a resolution, but I’m not going there. I think Resolving is a strange word. It makes it sound like I’m having to solve a problem a second time, when actually, it’s more along the lines of solving a problem well enough for it to count as finished. If something has been resolved, a solution has been provided. There is a resolution, through the provision of a solution. It’s a strangeness of English. When I look up the word resolution at dictionary.com, it has 12 options, including “settling of a problem”. There’s a finality about resolution. If you resolve to do something, you’re saying “Yes. This is a done thing. I’m resolving to do it, which means that it may as well be complete already.” I like to think I resolve problems, rather than just solving them. I want my solving to be final and complete. If I tune a query, I don’t want to find that I’m back in there, re-tuning it at some point. Strangely, if I re-solve a problem, that implies that I didn’t resolve it in the first place. I only solved it. Temporarily. We “data-folk” live in a world where the most common answer is “It depends.” Frustratingly, the thing an answer depends on may still be changing in the system in question. That probably means that any solution that is put in place may need reinvestigating at some point later. So do I resolve things? Yes. Am I Chuck Norris, and solve things so well the world would break first? No. Do these two claims happily sit beside each other? No, unfortunately not. But I happily take responsibility for things, and let my clients depend on me to see it through. As far as they are concerned, it is resolved. And so I resolve to keep resolving, right through 2011.

    Read the article

  • Sampling SQL server batch activity

    - by extended_events
    Recently I was troubleshooting a performance issue on an internal tracking workload and needed to collect some very low-level events over a period of 3-4 hours. During analysis of the data I found that a common pattern I was using was to find a batch with a duration longer than average and follow all the events it produced. This pattern got me thinking that I was discarding a substantial amount of the event data that had been collected, and that it would be great to reduce the collection overhead on the server if I could still get all the activity from some batches. In the past I've used a sampling technique based on the counter predicate to build a baseline of overall activity (see Mike's post here). This isn't exactly what I want, though, as there would certainly be events from a particular batch that wouldn't pass the predicate. What I need is a way to identify streams of work and select, say, one in ten of them to watch, and SQL Server provides just such a mechanism: session_id. Session_id is a server-assigned integer that is bound to a connection at login and lasts until logout. So by combining the session_id predicate source and the divides_by_uint64 predicate comparator we can limit collection and still get all the events in batches for investigation.

    CREATE EVENT SESSION session_10_percent ON SERVER
    ADD EVENT sqlserver.sql_statement_starting(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlos.wait_info (
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlos.wait_info_external (
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10))),
    ADD EVENT sqlserver.sql_statement_completed(
        WHERE (package0.divides_by_uint64(sqlserver.session_id,10)))
    ADD TARGET ring_buffer
    WITH (MAX_DISPATCH_LATENCY=30 SECONDS,TRACK_CAUSALITY=ON)
    GO

    There we go; event collection is reduced while still providing enough information to find the root of the problem. By the way, the performance issue turned out to be an IO issue, and the session definition above was more than enough to show long waits on PAGEIOLATCH*.
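
    Once the session is created it still has to be started, and the ring buffer contents can then be pulled back as XML through the DMVs. A brief sketch (the XML is left unshredded here for brevity):

    ALTER EVENT SESSION session_10_percent ON SERVER STATE = START;

    SELECT CAST(t.target_data AS XML) AS ring_buffer_contents
    FROM sys.dm_xe_sessions AS s
    INNER JOIN sys.dm_xe_session_targets AS t
        ON t.event_session_address = s.[address]
    WHERE s.name = 'session_10_percent'
      AND t.target_name = 'ring_buffer';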

    Read the article

  • password incorrect 3 times + suspected failed update

    - by Cheese
    I have been lurking your site for the past few hours, and have found myself in a bit of a pickle. Visiting my parents, I discovered that neither the computer nor the laptop worked. Long story short, I've got the laptop working, but have completely fudged up the computer. I am a n00b, but I was at least willing to give it a go. The computer originally had Ubuntu 11.10 installed, later updated to 12.04. We have CDs for both. I do not understand what the initial problem was for my parents, but somehow when I turned on the computer, it worked for me. Soon after, I was nagged to install the latest updates. So, I spent the next half an hour wondering why the updates kept asking for 11.04 CD-ROMs, until I realised that you could turn off the CD-ROM requirement. After doing this via the console, I installed some of the smaller updates, before being told to do a partial update. This failed a few times, and ended up freezing whilst reinstalling drivers. After a hard restart I continued to type whatever I could find on the forum into the console. At some point, the console started saying that I had 3 incorrect password inputs, and sudo commands stopped working altogether. I found another thread discussing this, but people kept suggesting changing passwords (which I did, to no avail) or other things that made use of sudo (which I am locked out of, although I am technically the admin). I found myself somehow on the Ctrl+Alt+F1 console, and after being utterly confused (and Ctrl+Alt+F5 failing for me), another hard reset occurred. Somewhere along the way I created a USB startup disk for 14.04 (but this does not seem to work). Now I am left with an admin (and guest) account that logs in but has a blank screen (with only the desktop background showing), and I can't do anything in the console because I'm locked out. Interestingly, the console now says that I am running 14.04, although all updates said they had failed. Aside from the obvious lessons I have learnt (don't fiddle about in the console when you have no idea what you're doing - "Dog wearing safety glasses 'I have no idea what I am doing' GIF would be inserted here"), is there any way I can redeem this almighty muck-up? A million thanks for any help!

    Read the article

  • Multitenancy in SQL Azure

    - by cibrax
    If you are building a SaaS application in Windows Azure that relies on SQL Azure, it's probable that you will need to support multiple tenants at the database level. This is a short overview of the different approaches you can use to support that scenario.

    A different database per tenant. A new database is created and assigned when a tenant is provisioned.
    Pros:
    - Complete isolation between tenants. All the data for a tenant lives in a database only he can access.
    Cons:
    - It's not cost effective. SQL Azure databases are not cheap, and the minimum size for a database is 1GB. You might be paying for storage that you don't really use.
    - A different connection pool is required per database.
    - Updates must be replicated across all the databases.
    - You need multiple backup strategies across all the databases.

    Multiple schemas in a database shared by all the tenants. A single database is shared among all the tenants, but every tenant is assigned to a different schema and database user.
    Pros:
    - You only pay for a single database.
    - Data is isolated at the database level. If the credentials for one tenant are compromised, the rest of the data for the other tenants is not.
    Cons:
    - You need to replicate all the database objects in every schema, so the number of objects can increase indefinitely.
    - Updates must be replicated across all the schemas.
    - The connection pool for the database must maintain a different connection per tenant (or set of credentials).
    - A different user is required per tenant, which is stored at server level. You have to back up that user independently.

    Centralizing the database access with stored procedures in a database shared by all the tenants. A single database is shared among all the tenants, but nobody can read the data directly from the tables. All the data operations are performed through stored procedures that centralize the access to the tenant data. The stored procedures contain some logic to map the database user to a specific tenant.
    Pros:
    - You only pay for a single database.
    - You only have one set of objects to maintain and back up.
    Cons:
    - There is no real isolation. All the data for the different tenants is shared in the same tables.
    - You cannot use a traditional ORM like EF Code First for consuming the data.
    - A different user is required per tenant, which is stored at server level. You have to back up that user independently.

    SQL Federations. A single database is shared among all the tenants, but a different federation is used per tenant. A federation, in a few words, is a mechanism for horizontal scaling in SQL Azure, which basically uses the idea of logical partitions to distribute data based on certain criteria.
    Pros:
    - You only have a single database with multiple federations.
    - You can use filtering in the connections to pick the right federation, so any ORM could be used to consume the data.
    Cons:
    - There is no real isolation at the database level. The isolation is enforced programmatically with federations.
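
    For the multiple-schemas option, provisioning a new tenant looks roughly like this. A minimal sketch (all names are placeholders; the login is created in the master database first, per SQL Azure's model):

    -- In master:
    -- CREATE LOGIN Tenant42Login WITH PASSWORD = '...';

    -- In the shared database:
    CREATE USER Tenant42User FOR LOGIN Tenant42Login;
    GO
    CREATE SCHEMA Tenant42 AUTHORIZATION Tenant42User;
    GO
    -- A default schema makes unqualified table names resolve per tenant:
    ALTER USER Tenant42User WITH DEFAULT_SCHEMA = Tenant42;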

    Read the article

  • Testing and Validation – You Really Do Have The Time

    - by BuckWoody
    One of the great advantages in my role as a Technical Specialist here at Microsoft is that I get to work with so many great clients. I get to see their environments and how they use them, and the way they work with SQL Server. I’ve been a data professional myself for many years. Over that time I’ve worked with many database platforms, lots of client applications, and written a lot of code in many industries. For a while I was also a consultant, so I got to see how other shops did things as well. But because I now focus on a “set” base of clients (over 500 professionals in over 150 companies) I get to see them over a longer period of time. Many of them help me understand how they use the product in their projects, and I even attend some DBA regular meetings. I see the way the product succeeds, and I see when it fails. Something that has really impacted my way of thinking is the level of importance any given shop is able to place on testing and validation. I’ve always been a big proponent of setting up a test system and following a very disciplined regimen to make sure it will work in production for any new projects, and then taking the lessons learned into production as standards. I know, I know – there’s never enough time to do things right like this. Yet the shops I see that do it produce the same amount of work as the shops that don’t. They just make the time to do the testing and validation, and create a standard that they will follow in production. And what I’ve found (surprise, surprise) is that they have fewer production problems. OK, that might seem obvious – but I’ve actually tracked it, and those places that do the testing and follow best practices really do save stress, time and trouble from that effort. We all think that’s a good idea, but we just “don’t have time”. OK – but from what I’m seeing, you can gain time if you spend a little up front. You may find that you’re actually already spending the same amount of time that you would spend in doing the testing – you’re just doing it later, at night, under the gun. Food for thought.

    Read the article

  • Windows XP restarting repeatedly after a fresh restore

    - by DWilliams
    I don't normally deal with Windows, but someone wanted me to fix their computer with Windows XP on it. It was just restarting every time it tried to boot. It would not go into safe mode; the result was the same regardless of the selected mode. The computer is about 4 years old and has been running the same installation for that entire time, so I figured the easiest solution was just to back up their files and re-install. I loaded the computer up with a live CD and copied their files off to a USB drive, then proceeded to run HP's "factory restore" feature (which I'm not particularly fond of - I'd rather have a disk to install from than reload all the crapware HP gets paid to install for you). It restored, I put all their files back, installed their programs, and started the full Windows update process. Everything seemed great, so I left and told them what to do once it finished. A few hours pass, and my phone rings. Apparently it started doing the exact same thing as before once the updates finished. I don't have the computer sitting in front of me now, so I can't really provide any more information than that. What could be causing this and, more importantly, how do I fix it? The fact that the same problem resurfaced after the restore makes me think it's either a hardware problem or an update breaking the computer.

    Read the article

  • Failing Windows Updates with Error Code 800719e4

    - by Kev
    On a number of Vista machines I have now come across the same error: when installing updates everything works fine until after the reboot, when the installation rolls back during step 3. On all occasions (where a simple retry hasn't worked) the error code has been 800719e4. On my own laptop I have so far tried the following:
    - Installed the updates one by one manually. I started on the smallest and one by one have worked towards the largest, which has left me with "Security Update for Windows (KB2286198)" that refuses to install.
    - Renamed all the files in "C:\Windows\Logs\CBS" to "xxx.old" (where xxx was the original name) with Windows Update turned off - no change.
    - Renamed all the folders in "C:\Windows\SoftwareDistribution" in the same manner - no change.
    - Attempted to install it manually via "Windows6.0-KB2286198-x86.msu" - no change.
    - Tried to uninstall IE8 - doesn't work, rolls back at the end. (Installing the IE9 Beta when it launched was what alerted me to the issue on this laptop.)
    - Ran a "Fix It" tool from the Microsoft website - no help (can't find the link now).
    - Tried to recover from disk - but alas my laptop only has a recovery partition (and the recovery image is the original, pre-service-pack build).
    - Ran with nothing running on startup, and only MS services - again no change.
    Google is being useless, with a load of posts trying to get me to call a telephone number with letters in it (presumably an American number). The error code appears to mean "error log full", but no one has any idea which log! The WinUpdate log does indicate the following as the error point, though:

    2010-10-23 13:54:48:230 1240 738 Handler WARNING: Got extended error: "POQ Operation SetKeyValue OperationData \Registry\machine\Schema\wcm://Microsoft-Windows-shell32?version=6.0.6002.18287&language=neutral&processorArchitecture=x86&publicKeyToken=31bf3856ad364e35&versionScope=nonSxS&scope=allUsers\metadata\elements\HKEY_CLASSES_ROOT_lnkfile_shellex_DropHandler_defaultValue, @default, , ewAwADAAMAAyADEANAAwADEALQAwADAAMAAwAC0AMAAwADAAMAAtAEMAMAAwADAALQAwADAAMAAwADAAMAAwADAAMAAwADQANgB9AAAA"

    Has anyone any idea how to fix this once and for all? Reinstalling laptop after laptop from scratch is mildly annoying at work, where Office and Firefox are the only extras, but even more annoying at home - I don't fancy going through the palaver of reinstalling everything yet again.

    Read the article

  • Windows Feature List blank, Updates fail, system readiness tool says files are corrupt

    - by Chris T
    I tried to install IIS, and to my surprise the feature/components list was blank =[. I tried the System Update Readiness Tool and it creates the following log:

    =================================
    Checking System Update Readiness.
    Binary Version 6.1.7100.4104
    Package Version 5.0
    2009-09-30 23:38

    Checking Deployment Packages

    Checking Package Manifests and catalogs.
    (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
    (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_1_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat
    (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
    (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540_RTM~31bf3856ad364e35~amd64~~6.1.1.0.cat
    (f) CBS MUM Corrupt 0x800F0900 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.mum Line 1:
    (f) CBS Catalog Corrupt 0x800B0100 servicing\packages\Package_for_KB973540~31bf3856ad364e35~amd64~~6.1.1.0.cat

    Checking package watchlist.

    Checking component watchlist.

    Checking packages.

    Checking component store
    (f) CSI Catalog Corrupt 0x800B0003 winsxs\Catalogs\efdfd17ac9909b9d81e1455d9abf291319237877c23df8a67a3f5a1f2f9e034f.cat 5fbf0b9691b..6772f1b0a58_31bf3856ad364e35_6.1.7100.4127_c2160c1f90006ee6
    (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294.manifest amd64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_35ba254677b2a294
    (f) CSI Manifest All Zeros 0x00000000 WinSxS\Manifests\wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f.manifest wow64_microsoft-windows-mediaplayer-wmpdxm_31bf3856ad364e35_6.1.7100.4127_none_400ecf98ac13648f

    Summary:
    Seconds executed: 240
    Found 9 errors
    CSI Manifest All Zeros Total Count: 2
    CSI Catalog Corrupt Total Count: 1
    CBS MUM Corrupt Total Count: 3
    CBS Catalog Corrupt Total Count: 3

    How can I fix this?

    Read the article

  • How can I determine what is killing my network adapter win7 or vista?

    - by datatoo
    I thought this issue was the same as another person's here, and that downloading the NVIDIA chipset drivers was the solution. However, that is not all that is going on. This machine had Vista 64-bit and now has Win7; same issue with both. I have explicitly been denying network driver updates since getting things working again, yet when a Windows update occurs - even on seemingly benign Office updates - the adapter fails to work. Is the update process somehow protecting this machine by turning things off, then failing to recover connectivity after a restart? The only thing that ever seems to work is a system restore, which does work. Since there are 25 pending updates asking to do their thing, I hate to think this is a one-by-one update test to find the culprit. Any ideas? This has an integrated NIC, video, and I guess audio on the motherboard: an ES5200 Intel CPU on a Gateway 4800-05e. I am not quite sure how to determine the actual network adapter. This is a wired adapter. I suppose worst case I can try another adapter if this keeps happening.

    Read the article

  • VSDB to SSDT Part 2 : SQL Server 2008 Server Project … with SSDT

    - by Etienne Giust
    With Visual Studio 2012 and the use of SSDT technology, there is only one type of database project: the SQL Server Database Project. With Visual Studio 2010, we used to have the SQL Server 2008 Server Project, which we used to define server-level objects, mostly logins and linked servers. A convenient wizard allowed for the creation of this type of project. It does not exist anymore. Here is how to create an equivalent of the SQL Server 2008 Server Project with Visual Studio 2012:
    1. Create a new SQL Server Database Project: it will be created empty.
    2. Create a new SQL Schema Compare (SQL menu item > Schema Compare > New Schema Comparison).
    3. As a source, select any database on the SQL Server you want to mimic.
    4. Set the target to be your newly created Database Project.
    5. In the Schema Compare options (cog-like icon), Object Types pane, set the options to cover the server-level object types you need (the original post includes a screenshot). You might want to tweak those and select only the object types you want.
    6. Then run the comparison, review and select your changes, and apply them to the project.

    Read the article

  • How do I filter out NaN FLOAT values in Teradata SQL?

    - by Paul Hooper
    With the Teradata database, it is possible to load values of NaN, -Inf, and +Inf into FLOAT columns through Java. Unfortunately, once those values get into the tables, they make life difficult when writing SQL that needs to filter them out. There is no IsNaN() function, nor can you "CAST ('NaN' as FLOAT)" and use an equality comparison. What I would like to do is, SELECT SUM(VAL**2) FROM DTM WHERE NOT ABS(VAL) > 1e+21 AND NOT VAL = CAST ('NaN' AS FLOAT) but that fails with error 2620, "The format or data contains a bad character.", specifically on the CAST. I've tried simply "... AND NOT VAL = 'NaN'", which also fails for a similar reason (3535, "A character string failed conversion to a numeric value."). I cannot seem to figure out how to represent NaN within the SQL statement. Even if I could represent NaN successfully in an SQL statement, I would be concerned that the comparison would fail. According to the IEEE 754 spec, NaN = NaN should evaluate to false. What I really seem to need is an IsNaN() function. Yet that function does not seem to exist.

    Read the article

  • Using SQL Alchemy and pyodbc with IronPython 2.6.1

    - by beargle
    I'm using IronPython and the clr module to retrieve SQL Server information via SMO. I'd like to retrieve/store this data in a SQL Server database using SQL Alchemy, but am having some trouble loading the pyodbc module. Here's the setup:
    - IronPython 2.6.1 (installed at D:\Program Files\IronPython)
    - CPython 2.6.5 (installed at D:\Python26)
    - SQL Alchemy 0.6.1 (installed at D:\Python26\Lib\site-packages\sqlalchemy)
    - pyodbc 2.1.7 (installed at D:\Python26\Lib\site-packages)
    I have these entries in the IronPython site.py to import CPython standard and third-party libraries:

    # Add CPython standard libs and DLLs
    import sys
    sys.path.append(r"D:\Python26\Lib")
    sys.path.append(r"D:\Python26\DLLs")
    sys.path.append(r"D:\Python26\lib-tk")
    sys.path.append(r"D:\Python26")

    # Add CPython third-party libs
    sys.path.append(r"D:\Python26\Lib\site-packages")

    # sqlite3
    sys.path.append(r"D:\Python26\Lib\sqlite3")

    # Add SQL Server SMO
    sys.path.append(r"D:\Program Files\Microsoft SQL Server\100\SDK\Assemblies")
    import clr
    clr.AddReferenceToFile('Microsoft.SqlServer.Smo.dll')
    clr.AddReferenceToFile('Microsoft.SqlServer.SqlEnum.dll')
    clr.AddReferenceToFile('Microsoft.SqlServer.ConnectionInfo.dll')

    SQL Alchemy imports OK in IronPython, but I receive this error message when trying to connect to SQL Server:

    IronPython 2.6.1 (2.6.10920.0) on .NET 2.0.50727.3607
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sqlalchemy
    >>> e = sqlalchemy.MetaData("mssql://")
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1780, in __init__
      File "D:\Python26\Lib\site-packages\sqlalchemy\schema.py", line 1828, in _bind_to
      File "D:\Python26\Lib\site-packages\sqlalchemy\engine\__init__.py", line 241, in create_engine
      File "D:\Python26\Lib\site-packages\sqlalchemy\engine\strategies.py", line 60, in create
      File "D:\Python26\Lib\site-packages\sqlalchemy\connectors\pyodbc.py", line 29, in dbapi
    ImportError: No module named pyodbc

    This code works just fine in CPython, but it looks like the pyodbc module isn't accessible from IronPython. Any suggestions? I realize that this may not be the best way to approach the problem, so I'm open to tackling this a different way. Just wanted to get some experience with using SQL Alchemy and pyodbc.

    Read the article

  • pass username and password to get-credential or run sql query without using invoke-sqlcmd in Powershell

    - by Emo
    I am trying to connect to a remote SQL database and simply run the "select @@servername" query in PowerShell. I'm trying to do this without using integrated security. I've been struggling with "get-credential" and "invoke-sqlcmd", only to find (I think) that you can't pass the password from "get-credential" to other PowerShell cmdlets. Here's the code I'm using:

    add-pssnapin sqlserverprovidersnapin100
    add-pssnapin sqlservercmdletsnapin100

    # load assemblies
    [Reflection.Assembly]::Load("Microsoft.SqlServer.Smo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
    [Reflection.Assembly]::Load("Microsoft.SqlServer.SqlEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
    [Reflection.Assembly]::Load("Microsoft.SqlServer.SmoEnum, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")
    [Reflection.Assembly]::Load("Microsoft.SqlServer.ConnectionInfo, Version=9.0.242.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91")

    # connect to SQL Server
    $serverName = "HLSQLSRV03"
    $server = New-Object -typeName Microsoft.SqlServer.Management.Smo.Server -argumentList $serverName

    # login using SQL authentication
    $server.ConnectionContext.LoginSecure=$false;
    $credential = Get-Credential
    $userName = $credential.UserName -replace("\","")
    $pass = $credential.Password
    $server.ConnectionContext.set_Login($userName)
    $server.ConnectionContext.set_SecurePassword($credential.Password)
    $DB = "Master"
    invoke-sqlcmd -query "select @@Servername" -database $DB -serverinstance $servername -username $username -password $pass

    If I just hardcode the password in at the end of the "invoke-sqlcmd" line, it works. Is this because you can't use "get-credential" with "invoke-sqlcmd"? If so, what are my alternatives? Thanks so much for your help. Emo

    Read the article
