Search Results

Search found 38117 results on 1525 pages for 'sql tools'.


  • Oracle SQL Developer version 3.2.2 Released

    - by thatjeffsmith
    This is another maintenance release, but I don’t want to minimize the work done in either the 3.2.1 or the 3.2.2 editions. The two releases include more than 400 bug fixes. Version 3.2 should be rocking and rolling and good to go while we work on the next major release! You can find the downloads and bug fixes in the normal places: Download 3.2.2, Bug fixes. Connection Names: if you downloaded and used version 3.2.1 and noticed some of your connection names were no longer valid due to ‘special’ characters, we’ve loosened our restrictions a bit for 3.2.2. You can now go back to using spaces and hyphens in your connection names; periods, spaces, and hyphens should now all work. More Copy & Paste Stuff: while fixing a bug, the developer decided to also enhance the feature while he was in the code. I love seeing this happen organically. No one is sitting over his shoulder with the red magic marker. No, I’m too far away to do that except on very special days. So here’s a ‘trick’ – if you want to copy cells from your grids, just drag the selected cells to the worksheet/editor. You’ll get a comma-delimited list – very handy! Select cells, drag and drop up to the worksheet – voila! Comma-separated values.

    Read the article

  • SQL Developer Data Modeler: On Notes, Comments, and Comments in RDBMS

    - by thatjeffsmith
    Ah, the beautiful data model. They say a picture is worth a thousand words, and then we have our diagrams – how many words are they worth? (Pictured: our friends from the Human Resources sample schema.) So our models describe how the data ‘works’ – whether that be at a logical-business level or a technical-physical level. Developers like to say that their code is self-documenting. These would be very lazy or very bad (or both) developers. Models are the same way: you should document your models with comments and notes! I have 3 basic options: Comments, Comments in RDBMS, and Notes. So what’s the difference? Comments: you’re describing the entity/table or attribute/column. This information will NOT be published in the database. It will only be available in the model, and hence to folks with access to the model – table comments live in the design only! Comments in RDBMS: you’re doing the same thing as above, but your words will be stored IN the data dictionary of the database. Oracle allows you to store comments on the table and column definitions, so your awesome documentation is going to be viewable to anyone with access to the database. (RDBMS is an acronym for Relational Database Management System – of which Oracle is one of the first commercial examples.) If the DDL is produced and run against a database, these comments WILL be stored in the data dictionary. Notes: a place for you to add notes, maybe from a design meeting. Or maybe you’re using this as a to-do or requirements list. Basically it’s for anything that doesn’t literally describe the object at hand – that’s what the comments are for. These are free text fields and you can put whatever you want here. Just make sure you put stuff here that’s worth reading. And it will live on…forever.
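
    As a concrete (and hedged) illustration of the ‘Comments in RDBMS’ option – assuming the HR schema’s EMPLOYEES table, with comment texts made up for the example – the generated DDL boils down to Oracle’s COMMENT statement, and the text lands in dictionary views anyone with database access can query:

        COMMENT ON TABLE employees IS 'One row per employee, past and present';
        COMMENT ON COLUMN employees.hire_date IS 'Date the employee first started';

        -- anyone with access to the database can read the comments back
        SELECT comments FROM all_tab_comments WHERE table_name = 'EMPLOYEES';
        SELECT column_name, comments FROM all_col_comments WHERE table_name = 'EMPLOYEES';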

    Read the article

  • Can’t connect to SQL Server 2008 - looks like Shared Memory problem

    - by user38556
    I am unable to connect to my local instance of SQL Server 2008 Express using SQL Server Management Studio. I believe the problem is related to a change I made to the connection protocols. Before the error occurred, I had Shared Memory enabled and Named Pipes and TCP/IP disabled. I then enabled both Named Pipes and TCP/IP, and this is when I started experiencing the problem. When I try to connect to the server with SSMS (with either my SQL Server sysadmin login or with Windows authentication), I get the following error message: "A connection was successfully established with the server, but then an error occurred during the login process. (provider: Named Pipes Provider, error: 0 - No process is on the other end of the pipe.) (Microsoft SQL Server, Error: 233)". Why is it returning a Named Pipes error? Why would it not just use Shared Memory, as this has a higher priority in the list of connection protocols? It seems like it is not listening on Shared Memory for some reason. When I set Named Pipes to enabled and try to connect, I get the same error message. My Windows account does not have administrator privileges on my computer - perhaps this is making a difference in some way (as some of the discussions in this post about a "SuperSocketNetLib\Lpc" registry key seem to suggest). I have tried restarting the SQL Server service, by the way, and also tried to get someone to log onto the machine with an admin account to restart the SQL Server service. Still no luck.
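
    One hedged diagnostic step, assuming you can get any connection open at all (for example locally as an admin): ask SQL Server which protocol the current session actually negotiated, rather than inferring it from the error text:

        -- net_transport reports 'Shared memory', 'Named pipe', or 'TCP'
        SELECT session_id, net_transport, auth_scheme
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;

    You can also force a specific protocol from SSMS by prefixing the server name, e.g. lpc:MyServer\SQLEXPRESS for shared memory or tcp:MyServer\SQLEXPRESS for TCP/IP (server name here is illustrative), which helps isolate which protocol is actually failing.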

    Read the article

  • SQL University: Database testing and refactoring tools and examples

    - by Mladen Prajdic
    This is a post for a great idea called SQL University, started by Jorge Segarra, also famously known as SqlChicken on Twitter. It’s a collection of blog posts on different database-related topics contributed by several smart people all over the world. So this week is mine, and we’ll be talking about database testing and refactoring. In 3 posts we’ll cover: SQLU part 1 - What and why of database testing; SQLU part 2 - What and why of database refactoring; SQLU part 3 - Database testing and refactoring tools and examples. This is the third and last part of the series, and in it we’ll take a look at tools we can test and refactor with, plus an example of both. Tools of the trade: first, a few thoughts about how to go about testing a database. I’m firmly against any testing tools that go into the database itself or need an extra database. Unit tests for the database and for applications using the database should all be in one place, using the same technology. By using database-specific frameworks we fragment our tests into many places and increase test system complexity. Let’s take a look at some testing tools. 1. NUnit, xUnit, MbUnit: all three are .NET testing frameworks meant to unit test .NET applications, but we can test databases with them just fine. I use NUnit because I’ve always used it for work and personal projects. One day this might change, so the thing to remember is to be flexible if something better comes along. All three are quite similar and you should be able to switch between them without much trouble. 2. TSQLUnit: as much as this framework is helpful for the non-C#-savvy folks, I don’t like it for the reason I stated above: it lives in the database and thus fragments the testing infrastructure. Also it appears that it’s not being actively developed anymore. 3. DbFit: I haven’t had the pleasure of trying this tool just yet, but it’s on my to-do list. From what I’ve read and heard, Gojko Adzic (@gojkoadzic on Twitter) has done a remarkable job with it. 4. Red Gate SQL Refactor and Apex SQL Refactor: neither of these refactoring tools is free; however, if you have hardcore refactoring planned they are worth looking into. I’ve only used Red Gate’s Refactor and was quite impressed with it. 5. Reverting the database state: I’ve talked before about ways to revert a database to its pre-test state after unit testing. This still holds and I haven’t changed my mind. Also make sure to read the comments, as they are quite informative. I especially like the idea of setting up and tearing down the schema for each test group with NHibernate. Testing and refactoring example: we’ll take a look at a simple schema and data test for a view, and at refactoring the SELECT * in that view. We’ll use a single table PhoneNumbers with ID and Phone columns. Then we’ll refactor the Phone column into 3 columns: Prefix, Number and Suffix. Lastly we’ll remove the original Phone column. Then we’ll check how the view behaves with tests in NUnit. The comments in the code explain the problem, so be sure to read them. I’m assuming you know NUnit and C#.
    T-SQL code:

        USE tempdb
        GO
        CREATE TABLE PhoneNumbers
        (
            ID INT IDENTITY(1,1),
            Phone VARCHAR(20)
        )
        GO
        INSERT INTO PhoneNumbers(Phone)
        SELECT '111 222333 444' UNION ALL
        SELECT '555 666777 888'
        GO
        -- notice we don't have WITH SCHEMABINDING
        CREATE VIEW vPhoneNumbers
        AS
        SELECT * FROM PhoneNumbers
        GO
        -- Let's take a look at what the view returns.
        -- If we add new columns and rows, both tests will fail.
        SELECT * FROM vPhoneNumbers
        GO
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- refactor to split the Phone column into 3 parts
        ALTER TABLE PhoneNumbers ADD Prefix VARCHAR(3)
        ALTER TABLE PhoneNumbers ADD Number VARCHAR(6)
        ALTER TABLE PhoneNumbers ADD Suffix VARCHAR(3)
        GO
        -- update the new columns
        UPDATE PhoneNumbers
           SET Prefix = LEFT(Phone, 3),
               Number = SUBSTRING(Phone, 5, 6),
               Suffix = RIGHT(Phone, 3)
        GO
        -- remove the old column
        ALTER TABLE PhoneNumbers DROP COLUMN Phone
        GO
        -- This returns unexpected results!
        -- It returns 2 columns, ID and Phone, even though
        -- we don't have a Phone column anymore.
        -- Notice that the data is from the Prefix column.
        -- This is a danger of SELECT *.
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will FAIL

        -- for a fix we have to call sp_refreshview
        -- to refresh the view definition
        EXEC sp_refreshview 'vPhoneNumbers'
        -- After the refresh the view returns 4 columns.
        -- This breaks the input/output behavior of the database,
        -- which refactoring MUST NOT do.
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will FAIL
        -- DoesViewReturnCorrectData test will FAIL

        -- to fix the input/output behavior change problem we have
        -- to concat the 3 columns into one named Phone
        ALTER VIEW vPhoneNumbers
        AS
        SELECT ID, Prefix + ' ' + Number + ' ' + Suffix AS Phone
        FROM PhoneNumbers
        GO
        -- now it works as expected
        SELECT * FROM vPhoneNumbers
        -- DoesViewReturnCorrectColumns test will SUCCEED
        -- DoesViewReturnCorrectData test will SUCCEED

        -- clean up
        DROP VIEW vPhoneNumbers
        DROP TABLE PhoneNumbers

    C# test code:

        [Test]
        public void DoesViewReturnCorrectColumns()
        {
            // conn is a valid SqlConnection to the server's tempdb.
            // Note the SET FMTONLY ON, with which we return only schema and no data.
            using (SqlCommand cmd = new SqlCommand("SET FMTONLY ON; SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
                // test returned schema: number of columns, column names and data types
                Assert.AreEqual(dt.Columns.Count, 2);
                Assert.AreEqual(dt.Columns[0].Caption, "ID");
                Assert.AreEqual(dt.Columns[0].DataType, typeof(int));
                Assert.AreEqual(dt.Columns[1].Caption, "Phone");
                Assert.AreEqual(dt.Columns[1].DataType, typeof(string));
            }
        }

        [Test]
        public void DoesViewReturnCorrectData()
        {
            // conn is a valid SqlConnection to the server's tempdb
            using (SqlCommand cmd = new SqlCommand("SELECT * FROM vPhoneNumbers", conn))
            {
                DataTable dt = new DataTable();
                dt.Load(cmd.ExecuteReader(CommandBehavior.CloseConnection));
                // test returned data: number of rows and their values
                Assert.AreEqual(dt.Rows.Count, 2);
                Assert.AreEqual(dt.Rows[0]["ID"], 1);
                Assert.AreEqual(dt.Rows[0]["Phone"], "111 222333 444");
                Assert.AreEqual(dt.Rows[1]["ID"], 2);
                Assert.AreEqual(dt.Rows[1]["Phone"], "555 666777 888");
            }
        }

    With this simple example we’ve seen how a very simple schema can cause a lot of problems in the whole application/database system if it doesn’t have tests. Imagine what would happen if some outside process depended on that view.
It would get wrong data and propagate it silently throughout the system. And that is not good. So have tests at least for the crucial parts of your systems. And with that we conclude the Database Testing and Refactoring week at SQL University. Hope you learned something new and enjoy the learning weeks to come. Have fun!

    Read the article

  • SQL server queries are really slow only on first run

    - by JoelFan
    Somewhat strange problem... when I start my .NET app for the first time after rebooting my machine, the SQL Server queries are really slow... when I pause the debugger, I notice that it's hanging on getting the response from the query. This only happens when connecting to a remote SQL Server (2008)... if I connect to one on my local machine, it's fine. Also, if I restart the app, it works fast, even off the remote SQL Server, and subsequent runs are also fine. The only problem is when I connect to a remote SQL Server for the first time after rebooting my machine. What's more, I have even noticed this same exact behavior with a 3rd-party app (also .NET) that also connects to a remote SQL Server. Another piece of info... this has only started happening since I upgraded my machine from XP to Win7 (64-bit). Also, other developers on my team who upgraded to Win7 are seeing the same behavior (both with the app we're developing and the 3rd-party .NET app). (Copied from http://stackoverflow.com/questions/2014814/sql-server-queries-are-really-slow-only-on-first-run )

    Read the article

  • Do you own your tools?

    - by Mike Brown
    A colleague of mine wrote a post a while ago asking "Do you own your tools?" It raises an important question. Do you? I answered way down in the comments. As an independent, I do own my tools. Even when I wasn't independent, I had my own (fully licensed) tools that I used for personal development. I don't think buying your own tools is something to puff your chest up about (just because you can buy a $100 pair of basketball sneakers, they won't make you as good as Michael Jordan), but it IS an investment in yourself that shouldn't be taken lightly. What do you think, good people?

    Read the article

  • Advanced Search Stored procedure

    - by Ray Eatmon
    So I am working on an MVC ASP.NET web application which centers around lots of data and data manipulation. PROBLEM OVERVIEW: We have an advanced search with 25 different filter criteria. I am using a stored procedure for this search. The stored procedure takes in parameters, filters for specific objects, and calculates return data from those objects. It queries large tables - 14 million records in some tables - and filtering and temp tables helped alleviate some of the bottlenecks for those queries. ISSUE: The stored procedure used to take 1 min to run, which caused a timeout, returning 0 results to the browser. I rewrote the procedure and got it down to 21 secs, so the timeout does not occur. It is ONLY this slow the FIRST time the search is run; after that it takes about 5 secs. I am wondering whether I should take a different approach to this problem, and whether I should worry about this type of performance issue if it does not time out.
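
    For what it's worth, a common pattern for this kind of many-filter "catch-all" search is optional parameters plus OPTION (RECOMPILE), so each combination of filters gets its own plan instead of reusing the first compiled one. A minimal sketch with made-up names (the real procedure, tables and parameters aren't shown in the question):

        CREATE PROCEDURE dbo.usp_AdvancedSearch
            @Name   varchar(50) = NULL,
            @Status int         = NULL   -- ...plus the other filter parameters
        AS
        BEGIN
            SELECT o.Id, o.Name, o.Status
            FROM dbo.SearchObjects AS o
            WHERE (@Name   IS NULL OR o.Name   = @Name)
              AND (@Status IS NULL OR o.Status = @Status)
            OPTION (RECOMPILE); -- plan is built for the filters actually supplied
        END

    The first-run slowness described is often plan compilation plus a cold buffer cache rather than the query itself, which is worth separating out before more rewriting.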

    Read the article

  • Copying & Pasting Rows Between Grids in SQL Developer

    - by thatjeffsmith
    Apologies for slacking on the blogging front here lately. Still mentally hung over from OpenWorld, and lots of things going on behind the scenes here in Oracle-land. Whilst (love that word) blogging is part of my job, it’s not the ONLY part of my job. So, a super-quick and dirty ‘trick’ this morning: copying a query result record as a new row in a table. Copy and paste is something everyone ‘gets.’ I don’t know who we have to thank for that, whether it’s Microsoft or Xerox, but it’s been ingrained in our way of dealing with all things computers – almost to the detriment of some of our users. They’ll use copy and paste when perhaps our Export feature is superior, but I digress. Where it does work just fine is when you want to create a new row in your table that matches a row you have retrieved from an executed query. Just click in the gutter or row number to get the entire row selected. Once you have your data selected, do your thing, i.e. Ctrl+C or Command+C or whatever. Now open your view or table editor, go to the Data page, and ask for a new row – new record, no data. Paste in the data from the clipboard. It’s smart enough to paste the separate values out to the separate columns. The clipboard saves the day, again. If your column orders are different, just change the order in the grids. If you have extra information, don’t copy the entire row. I know, I know – Jeff, this is too simple, why are you wasting our time here? It seems intuitive, but how many of you actually tried this before reading it just now? I seem to get more positive feedback from the very basic user-interface-101 tips than from the esoteric click-click-click-ctrl-shift-click tricks I prefer to post. Lots of interesting stuff on tap, so stay tuned!

    Read the article

  • ETPM Environment Health Monitoring Tools

    - by Paula Speranza-Hadley
    This post provides some useful information about the tools typically used by Oracle ETPM implementations for performance tuning and analysis. This includes tools to monitor and gather performance information and statistics on the database, application server, and client (browser).

    Enterprise monitoring tools: Oracle Enterprise Manager - OEM Grid Control comes with a comprehensive set of performance and health metrics that allow monitoring of key components in your environment, such as applications, application servers and databases, as well as the back-end components on which they rely, such as hosts, operating systems and storage.

    Tools for the database (Oracle Diagnostics Pack): Automatic Workload Repository (AWR) - this tool gets statistics from memory about the Time Model (DB Time), wait events, Active Session History and high-load SQL queries. Automatic Database Diagnostic Monitor (ADDM) - this self-diagnostic software is built into the database. It examines and analyzes data captured in AWR to determine possible performance issues; it locates the root cause of each issue, provides recommendations for correcting it and quantifies the expected benefit. Oracle Database Tuning Pack: SQL Tuning Advisor - this enables you to submit one or more SQL statements as input and receive output in the form of specific advice or recommendations on how to tune the statements. The recommendations relate to collection of statistics on objects, creation of new indexes and restructuring of SQL statements. SQL Access Advisor - this enables you to optimize the data access paths of SQL queries by recommending a proper set of materialized views, indexes and partitions for a given SQL workload.

    Tools for the application server: WebLogic Console - a web-based user interface used to configure and control a set of WebLogic servers or clusters (i.e. a "domain"). In any logical group of WebLogic servers there must exist one admin server, which hosts the WebLogic Admin Console application and manages the associated configuration files. WebLogic administrators use the Administration Console for a number of tasks, including: starting and stopping WebLogic servers or entire clusters; configuring server parameters, security, database connections and deployed applications; and viewing server status, health and metrics. YourKit for profiling - helps analyze synchronization issues, including which threads were calling wait(), and for how long, and which threads were blocked attempting to acquire a monitor held by another thread (synchronized methods/blocks), and for how long.

    Tools for the client: Fiddler - allows you to inspect traffic logs, debug and set breakpoints. Firebug - allows you to inspect and edit HTML, monitor network activity and debug JavaScript.
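
    For example, a hedged sketch of driving AWR by hand during a tuning exercise (assuming a Diagnostics Pack license): take a manual snapshot before and after the workload, then generate the standard report between the two snapshot IDs:

        -- take a manual AWR snapshot (run once before and once after the test)
        EXEC DBMS_WORKLOAD_REPOSITORY.CREATE_SNAPSHOT;

        -- then, from SQL*Plus, generate the report interactively
        @?/rdbms/admin/awrrpt.sql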

    Read the article

  • SSMS Tools Pack 1.9.3 is out!

    - by Mladen Prajdic
    This release adds a great new feature and fixes a few bugs. The new feature, called Window Content History, saves the whole text in all opened SQL windows every N minutes, with the default being 30 minutes. This feature fixes the shortcoming of the Query Execution History, which is saved only when a query is run. If you’re working on a large script and never execute it, the existing Query Execution History wouldn’t save it. By contrast, the Window Content History saves everything in a .sql file, so you can even open it in SSMS. The Query Execution History and Window Content History files are correlated by the same directory and file name, so when you search through the Query Execution History you get to see the whole saved Window Content History for that query. Because Window Content History saves data in simple searchable .sql files, there isn’t a special search editor built in. It is turned ON by default, but despite the built-in optimizations for space minimization, be careful not to let it fill your disk. You can see how it looks in the pictures in the feature list. The fixed bugs are: SSMS 2008 R2 slowness reported by a few people; an Object Explorer context menu bug where it showed multiple SSMS Tools entries and showed wrong entries for a node; a datagrid bug in SQL snippets; an inability to read illegal XML characters from log files; the upper limit of saved history text, now fixed at 5 MB; a bug where searching through result sets prevented search; a bug with text formatting erroring out for certain scripts; a bug with finding servers where it would return null even though servers existed; and a bug in Run Custom Scripts where |SchemaName| didn’t display the correct table schema for columns. Also, |NodeName| and |ObjectName| values now show the same thing. You can download the new version 1.9.3 here. Enjoy it!

    Read the article

  • Current wisdom on SQL Server and Hyperthreading?

    - by BradC
    Lots of articles out there (see Slava Oks's original SQL 2000 article and Kevin Kline's SQL 2005 update) recommend disabling hyperthreading on SQL servers, or at least testing your specific workload before enabling it on your servers. This issue is gradually becoming less relevant as true multi-core processors replace hyperthreaded ones, but what's the current wisdom on this issue? Does this advice change at all with SQL 2005 64-bit, or SQL 2008, or Windows Server 2008? Ideally, this should be tested in advance in a staging environment, but what about for servers that have already made it into production with HT enabled? How can I tell if performance issues we're experiencing might be related to HT? Is there some specific combination of perfmon counters that might point me in that direction, as opposed to all the other things I normally pursue when working on improving SQL performance? Edit: This is especially attractive because of the potential for an across-the-board improvement for some of my high-CPU servers, but the client is going to want to see something concrete that helps me identify which servers really could benefit from disabling hyperthreading. Of course, conventional performance troubleshooting is ongoing, but sometimes any little bit helps.
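
    As one hedged starting point for the "specific combination of counters" question, from inside SQL Server itself (2005 and later) you can watch how evenly runnable work is spread across the logical processors the instance sees:

        SELECT scheduler_id, cpu_id, is_online,
               current_tasks_count, runnable_tasks_count
        FROM sys.dm_os_schedulers
        WHERE status = 'VISIBLE ONLINE';

    Persistently high runnable_tasks_count on only some schedulers, on a box with HT enabled, is the kind of concrete signal a client can be shown alongside the usual perfmon work.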

    Read the article

  • DB DOC Enhancements for Oracle SQL Developer v4

    - by thatjeffsmith
    One of our more popular features is ‘DB Doc.’ It’s like Javadoc for the database. Pick a connection, right-click, and go. It will generate an HTML documentation set for that schema. For version 4, we’ve introduced a few enhancements based on user requests. That’s right: you asked, and we listened. We’ve added support for package bodies, added a parallelization option for larger doc sets, and enhanced the HTML formatting a bit. On the object selection side, we’ve changed the default selection of object types to be included and added support for package bodies. There’s also an option to auto-open the documentation set after it’s been generated.

    Read the article

  • Organizing Connections with Folders in Oracle SQL Developer

    - by thatjeffsmith
    How many Oracle databases do you work with on a regular basis? I’m guessing the answer for most of you lies between 1 and 500. This post is really geared toward those of you who deal with more than just a handful (5) of database connections. Filters are nice when you need to work with a subset of table data, or even a list of tables. So why wouldn’t they be just as useful for organizing your connections? Here’s my complete list of databases. The folders aren’t there by default; you add them as you need them. Now this isn’t an overly large connection list, but when I need to fire up an impromptu demo for a customer, it’s very nice to be able to drill down into JUST those ‘safe’ environments. This actually saves me a few seconds every time I need to connect to one of my databases. So while it’s a very simple feature, it’s one of those things that I recommend EVERYONE take advantage of, as it will save them hours of time over the long haul. Easier to find means I get to work a few seconds faster. This also helps keep me from making mistakes in ‘production’ environments! How to add a connection folder: select a connection you want to organize, mouse-right-click, and choose ‘Add to folder.’ You can throw it into a new container or an existing one. Lather, rinse, and repeat as necessary. The only trick is remembering to right-click! Special thanks to @dresendi for today’s topic! He asked how to do this and I realized I hadn’t blogged the topic yet.

    Read the article

  • Very uneven CPU utilization with SQL Server 2012 on 2 processor computer with 16 cores / processor

    - by cooplarsh
    After installing SQL Server 2012 Enterprise with the Server + CAL license model on a computer with 2 processors, each with 16 cores (and no hyperthreading involved), and putting the server under extremely heavy load, the 16 cores on the first processor were very underutilized, the first 4 cores on the 2nd CPU were heavily utilized, and the last 12 cores were not used at all (because of the 20-core limit for this SQL Server edition). Total CPU utilization was displaying as around 25%. Unfortunately, the server suffered from extremely poor performance, even though if the tasks had been evenly distributed across the 20 cores it wouldn't have been anywhere near as bad. The Windows Server was running on a VMware virtual image under ESX Server, but all of the CPU was allocated to the Windows server. We tried changing affinity settings (e.g., allocating most cores to CPU and the others to I/O), but that didn't help solve the performance problems. Upgrading the product edition to SQL Server 2012 Enterprise Core not only allowed the SQL Server to utilize the 12 previously unused cores on the 2nd processor, but it also resulted in a much more even distribution of tasks across all of the processors. To get through the backlog of requests, CPU utilization jumped to around 90%, and then came down to around 33% once it was caught up; performance improved dramatically once we failed over to the newly updated version, and the performance issues went away. I was wondering if anyone knows what might cause SQL Server to unevenly distribute the load, relying almost exclusively on the first 4 cores of the 2nd processor while its other 12 cores sat idle, and allocating only a few tasks to each of the 16 cores on the first processor. Also, is there any way we could have more evenly distributed the load across the 20 cores that were being used without the product edition upgrade? The flip side of that question is: what did the product upgrade do that caused SQL Server to start evenly distributing the load across all of the cores that it recognized? Thanks for any insight and/or links that might help me better understand what was happening.
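
    For anyone hitting the same wall, a hedged way to see the licensing core limit in action is to ask SQL Server which schedulers it brought online; under the 20-core Server + CAL limit the remaining cores show up as VISIBLE OFFLINE:

        SELECT cpu_id, status, is_online
        FROM sys.dm_os_schedulers
        WHERE status IN ('VISIBLE ONLINE', 'VISIBLE OFFLINE')
        ORDER BY cpu_id;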

    Read the article

  • How to manage primary key while updating [migrated]

    - by Subin Jacob
    In the following table, primaryKeyColumn is the primary key. To maintain the data history I always query the rows with a WHERE condition (WHERE StatusColumn = 1), and I set StatusColumn to 0 if the data is edited (so that I can keep the previous data). But the problem is, if I update it to 0, I can't insert the same key into primaryKeyColumn, since the column is validated for primary keys. How can I manage this kind of validation? What mistake did I make in this design?

        primaryKeyColumn  ValueColumn  StatusColumn
        ----------------  -----------  ------------
        2                 Name1        1
        3                 Name2        1
        4                 Name3        0
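
    Assuming SQL Server 2008 or later (the question doesn't name the engine, so treat this as a sketch with made-up names): one common fix is a surrogate primary key plus a filtered unique index, so the business key may repeat across history rows but only one row per key can be current:

        CREATE TABLE dbo.NameHistory
        (
            RowId        int IDENTITY(1,1) PRIMARY KEY, -- surrogate key
            BusinessKey  int         NOT NULL,          -- the old primaryKeyColumn value
            ValueColumn  varchar(50) NOT NULL,
            StatusColumn bit         NOT NULL           -- 1 = current, 0 = history
        );

        -- at most one *current* row per business key; history rows are unconstrained
        CREATE UNIQUE NONCLUSTERED INDEX UX_NameHistory_Current
            ON dbo.NameHistory (BusinessKey)
            WHERE StatusColumn = 1;

    With this in place, editing a row means setting its StatusColumn to 0 and inserting the new version with the same BusinessKey, which the original design's primary key forbade.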

    Read the article

  • Java Tools Support Offering Now Available

    - by Duncan Mills
    Great news! Developers can now purchase a combined support offering covering all three of Oracle's Java IDE options (JDeveloper, Oracle Enterprise Pack for Eclipse and NetBeans) in one premier support package. So no matter what tool, or mix of tools you use, you're covered! See Oracle Development Tools Support Offering for Details. A similar bundle is available for Oracle Solaris Development tools support which is detailed on the same page.

    Read the article

  • Can I replicate data between mySQL and SQL Server/SQL Azure?

    - by Ernest Mueller
    I have a replicated mySQL setup running happily on Amazon AWS, making user data available locally in various regions. Now I'm faced with an app that needs to go up on Microsoft Azure and I need to replicate the data over to there as well. So that's annoying. I am faced with several options: Replicate from mySQL to SQL Azure/SQL Server seems like it would be lovely - is this possible? I'd consider using a third party tool and paying $$ if I had to. We're not using anything complicated in the db feature set, it's just data in tables. Get mySQL working on Microsoft Azure - which seems really dicey at best. All the HOWTOs I can find say "this is possible but you really shouldn't try this for production apps." Go non-realtime and do syncs from mySQL to SQL Azure, which may be somewhat expensive and slower. Rip out all my mySQL on Amazon and use SQL Server there, which would make Baby Jesus cry. Has anyone gotten mySQL to SQL Azure/SQL Server replication or syncing working? Or have any other approaches (a NoSQL solution that replicates and might meet our but-we-need-to-join-some-tables needs that can easily be run on Amazon and Azure)?

    Read the article

  • SQL Developer: BLOBs and the External Editor

    - by thatjeffsmith
    We already know how easy it is to view images and plain text with the BLOB editor, yes? But what if I have a bunch of PDFs stored in my column? I want to see that stuff without having to save the file, find it, and then open it. Why can’t I just automatically open it directly from the database? Well, it seems you can. Here’s how. External editors, step 1: make sure you have the file types and associated editors defined in the preferences – these external editors are available from the BLOB viewer. Based on what’s going on in your OS, you’ll have several of these already defined. If not, it’s pretty simple to add them manually. Now, assuming you’ve got some fun data loaded up, let’s try it out. A PDF: as you can see in the screenshot above, PDF is mapped to Adobe Reader. I just happen to have a PDF loaded into a BLOB; let’s send it to the external editor. Click on the hyperlinked text to load the PDF straight into Adobe. If it’s a big file, you will see a dialog while we download the data. Now if I were to edit said document and save it back to the database via the ‘Load’ mechanism, then we’ve come full circle.

    Read the article

  • InfoPath Cannot Start Microsoft Visual Studio Tools for Applications

    - by ybbest
    When I tried to access the developer tools under the Developer tab in InfoPath Designer 2010, I got this error: "InfoPath cannot start Microsoft Visual Studio Tools for Applications" (see the screenshot below). I got this error because I did not install VSTA when I installed Office 2010. To install VSTA, launch Office 2010 setup from your Office 2010 installation media, choose the "Add or Remove Features" radio button in the installer, then set the Visual Studio Tools for Applications option to "Run from My Computer" and continue through the setup wizard (see the screenshot below). Once this is done, you are ready to start VSTA for your InfoPath form.

    Read the article

  • SQL Server Agent refuses to start

    - by Geo Ego
    I'm having a problem with SQL Server 2005 where the SQL Server Agent suddenly refuses to start. If I attempt to start it through Services, I get the error "SQL Server Agent (MSSQLSERVER) service on Local Computer started and then stopped." In the Application log, I have the following entry:

        Event Type: Error
        Event Source: SQLSERVERAGENT
        Event Category: Service Control
        Event ID: 103
        Date: 5/20/2010
        Time: 11:07:07 AM
        User: N/A
        Computer: SHAREPOINT
        Description: SQLServerAgent could not be started (reason: Unable to connect to server 'SHAREPOINT'; SQLServerAgent cannot start).

    This database has been running fine for four months. It contains a SharePoint configuration database, which two days ago stopped working, throwing me a message that the configuration database cannot be reached. It was then that I realized the SQL Server Agent was not running, and I have been unable to restart it. I have tried running it with both the local system account and the network service account, with the same results. So far, I have tried: granting the administrators group, network service, and SharePoint SQL Server Agent account public and sysadmin roles on the database; and granting the administrators group, network service, and SharePoint SQL Server Agent account full permissions to the entire MSSQL directory and all files within. I still have no joy.

    Read the article

  • What are the Top Developer Productivity Tools/Plugins for Java in Eclipse?

    - by Ryan Hayes
    I personally use CodeRush in Visual Studio 2010 to do refactoring, write code faster with templates and generally navigate my code 10 times faster than stock VS. Recently, I've been working on another Android app and got to thinking...What are the top productivity plugins for Eclipse? Preferably free. I'm looking for plugins that help write in Java, not PHP or Rails or any of the other languages Eclipse supports.

    Read the article

  • Windows Phone 7 Series - Tools and Resources

    - by TechTwaddle
    Unless you’ve been living in the caves of Lascaux for the past couple of days, you probably know what’s happening in the world of Windows Phone. Microsoft unveiled the developer tools required to develop applications and games for Windows Phone 7 at MIX10 a couple of days back – Silverlight and XNA being the major frameworks, no big surprise there. And the best news of all is that all the development tools are free! So if you are planning to develop apps for Windows Phone 7, read on. The first place, or more appropriately hub, for you is the Windows Phone Developer Portal. It has most of the information you need to get you started. Now there is a ton of information available at other places too. In this post, I take time to put all the information that I found useful in one place, and I’ll keep updating this as and when I find new stuff.

    Setting up the development environment:
    1. Install Windows Phone Developer Tools CTP (Community Technology Preview). This will install Visual Studio 2010 Express, Silverlight, the XNA framework and the emulator for Windows Phone 7. It also installs a few support tools.
    2. Expression Blend 4 for Windows Phone: install Expression Blend 4 beta, the Expression Blend Add-in Preview for Windows Phone, and the Expression Blend SDK Preview for Windows Phone.
    Installing the above tools should set your machine up for development. I installed the tools on my Windows Vista SP1 machine and the process went smoothly without running into any major hitch. Note that the tools won’t install on Windows XP – read the release notes of the CTP.

    Resources and documentation:
    1. Microsoft Windows Phone 7 Series Developer Training Kit
    2. Programming Windows Phone 7 Series by Charles Petzold. Contains a few chapters only; gives a good preview.
    3. MSDN documentation for Windows Phone 7 development
    4. A sample chapter from Learning Windows Phone Programming [PDF] by Yochay Kiriaty and Jaime Rodriguez. The complete book will be available at a later time.
    5. Windows Phone 7 Developer Forum – where you can ask about questions and problems you run into, and the experts are there to help you.
    6. For Silverlight visit silverlight.net, and for XNA game development the XNA Creators Club is the place to go; also make sure you follow Michael Klutcher’s and Shawn Hargreaves’ blogs.
    7. And finally the MIX’10 website. Most of the sessions will be available for download later (some are already available). Click on the Windows Phone tag to get all the session details and downloads.

    If you are completely new to Silverlight and XNA (like me), and C# makes some sense to you, then I suggest you go through the Developer Training Kit. It gives a good start and ramps you up pretty quickly.

    Read the article

  • SQL Table stored as a Heap - the dangers within

    - by MikeD
    Nearly all of the time I create a table, I include a primary key, and often that PK is implemented as a clustered index. Those two don't always have to go together, but in my world they almost always do. On a recent project, I was working on a data warehouse and a set of SSIS packages to import data from an OLTP database into my data warehouse. The data I was importing from the business database into the warehouse was mostly new rows, sometimes updates to existing rows, and sometimes deletes. I decided to use the MERGE statement to implement the insert, update or delete in the data warehouse. I found it quite performant to have a stored procedure that extracted all the new, updated, and deleted rows from the source database and dumped them into a working table in my data warehouse, then to run a stored proc in the warehouse containing the MERGE statement that took the rows from the working table and updated the real fact table:

        USE Warehouse
        GO
        CREATE TABLE Integration.MergePolicy
        (
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date,
            Operation varchar(5)
        )

        CREATE TABLE fact.Policy
        (
            PolicyKey int identity primary key,
            PolicyId int,
            PolicyTypeKey int,
            Premium money,
            Deductible money,
            EffectiveDate date
        )
        GO
        -- the proc is named distinctly from the work table so both can coexist in the schema
        CREATE PROC Integration.usp_MergePolicy
        as
        begin
            begin tran

            merge fact.Policy as tgt
            using Integration.MergePolicy as src
               on (tgt.PolicyId = src.PolicyId)
            when not matched by target then
                insert (PolicyId, PolicyTypeKey, Premium, Deductible, EffectiveDate)
                values (src.PolicyId, src.PolicyTypeKey, src.Premium, src.Deductible, src.EffectiveDate)
            when matched and src.Operation = 'U' then
                update set PolicyTypeKey = src.PolicyTypeKey,
                           Premium = src.Premium,
                           Deductible = src.Deductible,
                           EffectiveDate = src.EffectiveDate
            when matched and src.Operation = 'D' then
                delete
            ;

            -- empty the work table for the next batch
            delete from Integration.MergePolicy

            commit
        end

    Notice that my work table (Integration.MergePolicy) doesn't have any primary key or clustered index. I didn't think this would be a problem, since it was a relatively small table and was empty after each time I ran the stored proc. For one of the work tables, during the initial loads of the warehouse, about 1.5 million rows were getting inserted, processed, then deleted. Also, because of a bug in the extraction process, the same 1.5 million rows (plus a few hundred more each time) were getting inserted, processed, and deleted. This was being done on a fairly hefty server that was otherwise unused, and no one was paying any attention to the time it was taking. This week I received a backup of this database and loaded it on my laptop to troubleshoot the problem, and of course it took a good ten minutes or more to run the process. However, what seemed strange to me was that after I fixed the problem and happened to run the merge sproc when the work table was completely empty, it still took almost ten minutes to complete. I immediately looked back at the MERGE statement to see if I had some sort of outer join that meant it would be scanning the target table (which had about 2 million rows in it), then turned on the execution plan output to see what was happening under the hood. Running the stored procedure again took a long time, and the plan output didn't show me much - 55% on the MERGE statement and 45% on the DELETE statement, with table scans on the work table in both places. I was surprised at the relative cost of the DELETE statement, because there were really 0 rows to delete, but I was expecting to see the table scans.
    (I was now beginning to suspect that my problem was because the work table was being stored as a heap.) Then I turned on STATISTICS IO and ran the sproc again. The output was quite interesting:

        Table 'Worktable'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'Policy'. Scan count 0, logical reads 0, physical reads 0, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.
        Table 'MergePolicy'. Scan count 1, logical reads 433276, physical reads 60, read-ahead reads 0, lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

    I've reproduced the above from memory, so the details aren't exact, but the essential bit was the very high number of logical reads on the table stored as a heap. Even just doing a SELECT COUNT(*) FROM Integration.MergePolicy incurred that sort of output, even though the result was always 0. I suppose I should research more on the allocation and deallocation of pages to tables stored as a heap, but I haven't, and my original assumption that a table stored as a heap with no rows would only need to read one page to answer any query was definitely proven wrong. It's likely that some sort of physical defragmentation of the table may have cleaned that up, but it seemed that the easiest answer was to put a clustered index on the table. After doing so, the execution plan showed a clustered index scan, and the IO stats showed only a single page read. (I aborted my first attempt at adding a clustered index on the table because it was taking too long - instead I ran TRUNCATE TABLE Integration.MergePolicy first and then added the clustered index, both of which took very little time.) I suspect I may not have noticed this if I had used TRUNCATE TABLE Integration.MergePolicy instead of DELETE FROM Integration.MergePolicy, since I'm guessing that the truncate operation does some rather quick releasing of pages allocated to the heap table. In the future, I will likely be much more careful to have a clustered index on every table I use, even the working tables. Mike
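
    For reference, the fix described above boils down to something like this (the index name is made up):

        -- empty the heap first so the index build is instant
        TRUNCATE TABLE Integration.MergePolicy;

        CREATE CLUSTERED INDEX IX_MergePolicy_PolicyId
            ON Integration.MergePolicy (PolicyId);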

    Read the article

  • Storing Attendance Data in database

    - by Ali Abbas
    So I have to store the daily attendance of my organisation's employees from my application. The part where I need some help is the most efficient way to store the attendance data. After some research and brainstorming I came up with some approaches. Could you point out which one is best, and any non-obvious ill effects of the mentioned approaches? The approaches are as follows: 1. Create a single table for the whole organisation and store empid, date, presentstatus as a row for every employee every day. 2. Create a single table for the whole organisation and store a single row for each day with a comma-delimited string of the empids that are absent (I would generate the string in my application). 3. Create a different table for each department and follow the first method. Please share your views and do mention any other good methods.
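
    A minimal sketch of the first approach, assuming SQL Server and made-up names: one row per employee per day, with a composite primary key so the same employee can't be recorded twice for one date:

        CREATE TABLE dbo.Attendance
        (
            EmpId          int  NOT NULL,
            AttendanceDate date NOT NULL,
            IsPresent      bit  NOT NULL,
            CONSTRAINT PK_Attendance PRIMARY KEY (EmpId, AttendanceDate)
        );

    This keeps queries trivial (counts, date ranges, per-employee history) and avoids the comma-delimited string of approach 2, which can't be indexed or joined without parsing.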

    Read the article
