Search Results

Search found 84007 results on 3361 pages for 'sql system table'.


  • Need help with setting up MS-SQL on EC2.

    - by Hareem Haque
    I have a large MS-SQL database that I need to move to the AWS cloud. The issue is how to persist my SQL data and how to set up an MS-SQL cluster using a Windows AMI. The real problem is that, for replication, I need to use the private IPs of the instances. However, these IPs are dynamic and change on every server launch. Any ideas on how I can get around this problem? I really appreciate your help. Best regards, Hareem Haque


  • How do I set the UNC permissions on a SQL Server 2008 Filestream UNC?

    - by Justin Dearing
    I have a SQL Server 2008 instance. I have configured filestream access properly, and use it from one column on one table in one of my databases. However, I cannot access the UNC share for the filestream data. I have tried browsing to it as well as trying to open specific files and I get errors both ways. I am running SQL Server 2008 enterprise on a Windows 7 workstation running on the domain. I've tried running the sql server service as a local user, then as network admin. The user I am logged in as is a local admin and a sysadmin in SQL server.
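
    A minimal diagnostic sketch for this situation, assuming a hypothetical table dbo.Documents with a FILESTREAM column FileData: confirm the instance-level access level allows file-system (Win32) access, and see what path SQL Server itself reports for a stored value. Note that the share SQL Server exposes for FILESTREAM data is not an ordinary file share: paths returned by PathName() can only be opened through the FILESTREAM API (OpenSqlFilestream) inside a transaction context, so browsing the UNC in Explorer is expected to fail.

      -- Diagnostic sketch; table and column names are hypothetical.
      -- Level 2 enables FILESTREAM for both Transact-SQL and Win32 (file share) streaming access.
      EXEC sp_configure 'filestream access level', 2;
      RECONFIGURE;

      -- PathName() returns the logical UNC-style path SQL Server exposes for a FILESTREAM value.
      SELECT FileData.PathName() AS FilestreamPath
      FROM dbo.Documents;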


  • Is it possible to clone system drive in Windows 7?

    - by Ladislav Mrnka
    My current problem is that my Windows 7 system drive is unstable. I would like to try to clone this drive to the same type of disk (OCZ Vertex 2 120GB to OCZ Vertex 2 120GB) and replace the system drive with the created clone. My installation doesn't have ProgramData and user profiles on the system drive. Later on (after warranty replacement of the problematic drive), I would like to copy ProgramData and the user profiles to a different disk (Samsung SpinPoint 750GB to OCZ Vertex 2 120GB) and use the new disk instead. Note: the data amounts to only a few GB, so there should not be any problem with disk size. Is it possible? What is the best way to do it? Or is it better to simply reinstall the system from scratch (which I would like to avoid)?


  • How may I retrieve data from an Excel table based on a variable number of criteria?

    - by Eshwar
    I have the following salary data, for example:

      Country  State     2012  2013 -> 2027
      =======  ========  ====  ====
      China    Other     1000  1100
      China    Shanghai  1310  1400
      China    Tianjin   1450  1500
      India    Orissa    1500  1600

    In another Excel sheet I would want an answer to one of the following questions:
      1. What is the salary in Shanghai for 2013? (Answer would be 1400)
      2. What is the salary in Hubei province for 2012? (Since it is not listed, use "Other" - 1000)
      3. What is the average salary in China for 2013? (Answer would be 1450)
      4. What is the highest salary in China for 2012? (Answer is Tianjin)

    In the above order of priority, I would like those numbers in another Excel sheet using some form of query. I considered PivotTables, but I was wondering if there is a better, more efficient way of doing this. I imagine SQL is suited for this, but I am not clued up on it; some Excel functionality would be preferred. Suggestions on an appropriate format of data for such queries would also be appreciated.
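
    Since the post floats SQL as an option, here is a rough sketch of what the lookups could look like if the sheet were loaded into a hypothetical Salaries table with Country, State and one column per year; the fallback-to-"Other" rule is handled with COALESCE. Whether "Other" rows belong in the average is a business decision, so the last query is only a starting point.

      -- Hypothetical table mirroring the sheet: Salaries(Country, State, [2012], [2013], ...)
      -- 1. Salary in Shanghai for 2013:
      SELECT [2013] FROM Salaries WHERE Country = 'China' AND State = 'Shanghai';

      -- 2. Salary for an unlisted province: fall back to the 'Other' row.
      SELECT COALESCE(
               (SELECT [2012] FROM Salaries WHERE Country = 'China' AND State = 'Hubei'),
               (SELECT [2012] FROM Salaries WHERE Country = 'China' AND State = 'Other')) AS Salary;

      -- 3./4. Average and highest salary in China for a given year:
      SELECT AVG([2013]) AS AvgSalary, MAX([2012]) AS MaxSalary
      FROM Salaries
      WHERE Country = 'China';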


  • Moving SQL Server databases from one drive to another?

    - by Michael Stum
    I have SQL Server 2008 R2 on my machine, which stores everything on the C: drive (C:\Program Files\Microsoft SQL Server\MSSQL10_50.MSSQLSERVER\MSSQL\DATA). I have now added a second hard drive and would like to move all the databases over. It's 26 databases, so I'd like to avoid manually detaching/reattaching them. Ideally I would just like to move the files from C: to D: and tell SQL Server to look there. Downtime is not an issue; I just don't want to do dozens of mouse clicks :)
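
    One hedged way to avoid the per-database clicking is to generate ALTER DATABASE ... MODIFY FILE statements from sys.master_files, as sketched below (the target folder D:\SQLData is an assumption). The statements only change metadata: each database still has to be set OFFLINE, its .mdf/.ldf files moved to the new folder, and then brought back ONLINE.

      -- Sketch: build MODIFY FILE statements for every user database; review before running.
      SELECT 'ALTER DATABASE [' + DB_NAME(database_id) + '] MODIFY FILE (NAME = [' + name +
             '], FILENAME = ''D:\SQLData\' +
             REVERSE(LEFT(REVERSE(physical_name), CHARINDEX('\', REVERSE(physical_name)) - 1)) +
             ''');'
      FROM sys.master_files
      WHERE database_id > 4;   -- skip master, tempdb, model, msdb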


  • What permissions are granted to a SQL Server database owner?

    - by Charles Hepner
    I have been trying to find out what permissions are granted to the owner of a database in SQL Server 2005 or higher. I have seen best practices questions like this one: What is the best practice for the database owner in SQL Server 2005? but I haven't been able to find anything specifically addressing what the purpose of having a database owner in SQL Server is and what permissions are granted as a result of making a given login a database owner. If the owner of the database is disabled, what would stop working?
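
    One hedged way to see the answer empirically: the database owner maps to the dbo user inside the database and effectively holds full control over it, and you can list the effective permissions by impersonating dbo (the database name below is a placeholder).

      -- Diagnostic sketch: list the effective database-level permissions of dbo.
      USE YourDatabase;                 -- placeholder name
      EXECUTE AS USER = 'dbo';
      SELECT permission_name
      FROM fn_my_permissions(NULL, 'DATABASE')
      ORDER BY permission_name;
      REVERT;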


  • Getting a Database into Source Control

    - by Grant Fritchey
    For any number of reasons, from simple auditing, to change tracking, to automated deployment, to integration with application development processes, you’re going to want to place your database into source control. Using Red Gate SQL Source Control this process is extremely simple. SQL Source Control works within your SQL Server Management Studio (SSMS) interface.  This means you can work with your databases in any way that you’re used to working with them. If you prefer scripts to using the GUI, not a problem. If you prefer using the GUI to having to learn T-SQL, again, that’s fine. After installing SQL Source Control, this is what you’ll see when you open SSMS:   SQL Source Control is now a direct piece of the SSMS environment. The key point initially is that I currently don’t have a database selected. You can even see that in the SQL Source Control window where it shows, in red, “No database selected – select a database in Object Explorer.” If I expand my Databases list in the Object Explorer, you’ll be able to immediately see which databases have been integrated with source control and which have not. There are visible differences between the databases as you can see here:   To add a database to source control, I first have to select it. For this example, I’m going to add the AdventureWorks2012 database to an instance of the SVN source control software (I’m using uberSVN). When I click on the AdventureWorks2012 database, the SQL Source Control screen changes:   I’m going to need to click on the “Link database to source control” text which will open up a window for connecting this database to the source control system of my choice.  You can pick from the default source control systems on the left, or define one of your own. I also have to provide the connection string for the location within the source control system where I’ll be storing my database code. I set these up in advance. You’ll need two. One for the main set of scripts and one for special scripts called Migrations that deal with different kinds of changes between versions of the code. Migrations help you solve problems like having to create or modify data in columns as part of a structural change. I’ll talk more about them another day. Finally, I have to determine if this is an isolated environment that I’m going to be the only one use, a dedicated database. Or, if I’m sharing the database in a shared environment with other developers, a shared database.  The main difference is, under a dedicated database, I will need to regularly get any changes that other developers have made from source control and integrate it into my database. While, under a shared database, all changes for all developers are made at the same time, which means you could commit other peoples work without proper testing. It all depends on the type of environment you work within. But, when it’s all set, it will look like this: SQL Source Control will compare the results between the empty folders in source control and the database, AdventureWorks2012. You’ll get a report showing exactly the list of differences and you can choose which ones will get checked into source control. Each of the database objects is scripted individually. You’ll be able to modify them later in the same way. Here’s the list of differences for my new database:   You can select/deselect all the objects or each object individually. You also get a report showing the differences between what’s in the database and what’s in source control. 
If there was already a database in source control, you’d only see changes to database objects rather than every single object. You can see that the database objects can be sorted by name, by type, or other choices. I’m going to add a comment such as “Initial creation of database in source control.” And then click on the Commit button which will put all the objects in my database into the source control system. That’s all it takes to get the objects into source control initially. Now is when things can get fun with breaking changes to code, automated deployments, unit testing and all the rest.


  • What is my HttpContext.GetLocalResourceObject Method Virtual Path?

    - by ARUNRAJ
    I have read http://msdn.microsoft.com/en-us/library/ms149953.aspx and need to verify what is my GetLocalResourceObject virtual path. My local resource files are located on my pc at: C:\inetpub\wwwroot\GlobalX\Input\App_LocalResources Within this folder are my resource files for all the languages that site handles (InputContactDetails.aspx.ro.resx, InputContactDetails.aspx.hi.resx, etc.), as well as the default resource file (InputContactDetails.aspx.resx). I am receiving an error when I attempt to implement the virtual path string. Below is my line of offending code: return '<%= HttpContext.GetLocalResourceObject("~/GlobalX/Input/App_LocalResources/InputContactDetails.aspx.resx", "ContactDetails.Text", new System.Globalization.CultureInfo("ro")) %>'; I have tried ~/GlobalX/Input/App_LocalResources as the virtual path, and several other permutations, but I get the same error. If someone could show what I am doing wrong, I would appreciate it greatly. Here is the error message I am getting: The resource class for this page was not found. Please check if the resource file exists and try again. Description: An unhandled exception occurred during the execution of the current web request. Please review the stack trace for more information about the error and where it originated in the code. Exception Details: System.InvalidOperationException: The resource class for this page was not found. Please check if the resource file exists and try again. Source Error: Line 410: function languageContactPromptPhone(var_lcs) { Line 411: if (var_lcs == "af") { Line 412: return '<%= HttpContext.GetLocalResourceObject("~/GlobalX/Input/App_LocalResources/InputContactDetails.aspx.resx", "ContactDetails.Text", new System.Globalization.CultureInfo("ro")) %'; Line 413: } Line 414: else if (var_lcs == "sq") { Source File: c:\inetpub\wwwroot\GlobalX\Input\InputContactDetails.aspx Line: 412 Stack Trace: [InvalidOperationException: The resource class for this page was not found. Please check if the resource file exists and try again.] 
System.Web.Compilation.LocalResXResourceProvider.CreateResourceManager() +2785818 System.Web.Compilation.BaseResXResourceProvider.EnsureResourceManager() +24 System.Web.Compilation.BaseResXResourceProvider.GetObject(String resourceKey, CultureInfo culture) +15 System.Web.Compilation.ResourceExpressionBuilder.GetResourceObject(IResourceProvider resourceProvider, String resourceKey, CultureInfo culture, Type objType, String propName) +23 System.Web.HttpContext.GetLocalResourceObject(String virtualPath, String resourceKey, CultureInfo culture) +38 ASP.input_inputcontactdetails_aspx.__RenderContentInputContactDetails(HtmlTextWriter __w, Control parameterContainer) in c:\inetpub\wwwroot\GlobalX\Input\InputContactDetails.aspx:412 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +109 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +8 System.Web.UI.Control.Render(HtmlTextWriter writer) +10 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +208 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +8 System.Web.UI.Control.Render(HtmlTextWriter writer) +10 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +208 System.Web.UI.UpdatePanel.RenderChildren(HtmlTextWriter writer) +256 System.Web.UI.UpdatePanel.Render(HtmlTextWriter writer) +37 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 ASP.masterpages_masterinput_master.__RenderformMasterInput(HtmlTextWriter __w, Control parameterContainer) in c:\inetpub\wwwroot\GlobalX\MasterPages\MasterInput.master:140 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +109 System.Web.UI.HtmlControls.HtmlForm.RenderChildren(HtmlTextWriter writer) +173 System.Web.UI.HtmlControls.HtmlContainerControl.Render(HtmlTextWriter writer) +31 System.Web.UI.HtmlControls.HtmlForm.Render(HtmlTextWriter output) +53 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.HtmlControls.HtmlForm.RenderControl(HtmlTextWriter writer) +40 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +208 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +8 System.Web.UI.Control.Render(HtmlTextWriter writer) +10 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Control.RenderChildrenInternal(HtmlTextWriter writer, ICollection children) +208 System.Web.UI.Control.RenderChildren(HtmlTextWriter writer) +8 
System.Web.UI.Page.Render(HtmlTextWriter writer) +29 System.Web.UI.Control.RenderControlInternal(HtmlTextWriter writer, ControlAdapter adapter) +27 System.Web.UI.Control.RenderControl(HtmlTextWriter writer, ControlAdapter adapter) +8991378 System.Web.UI.Control.RenderControl(HtmlTextWriter writer) +25 System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +3060


  • T-SQL Tuesday #21 - Crap!

    - by Most Valuable Yak (Rob Volk)
    Adam Machanic's (blog | twitter) ever popular T-SQL Tuesday series is being held on Wednesday this time, and the topic is… SHIT CRAP. No, not fecal material.  But crap code.  Crap SQL.  Crap ideas that you thought were good at the time, or were forced to do due (doo-doo?) to lack of time. The challenge for me is to look back on my SQL Server career and find something that WASN'T crap.  Well, there's a lot that wasn't, but for some reason I don't remember those that well.  So the additional challenge is to pick one particular turd that I really wish I hadn't squeezed out.  Let's see if this outline fits the bill: An ETL process on text files; That had to interface between SQL Server and an AS/400 system; That didn't use SSIS (should have) or BizTalk (ummm, no) but command-line scripting, using Unix utilities(!) via: xp_cmdshell; That had to email reports and financial data, some of it sensitive Yep, the stench smell is coming back to me now, as if it was yesterday… As to why SSIS and BizTalk were not options, basically I didn't know either of them well enough to get the job done (and I still don't).  I also had a strict deadline of 3 days, in addition to all the other responsibilities I had, so no time to learn them.  And seeing how screwed up the rest of the process was: Payment files from multiple vendors in multiple formats; Sent via FTP, PGP encrypted email, or some other wizardry; Manually opened/downloaded and saved to a particular set of folders (couldn't change this); Once processed, had to be placed BACK in the same folders with the original archived; x2 divisions that had to run separately; Plus an additional vendor file in another format on a completely different schedule; So that they could be MANUALLY uploaded into the AS/400 system (couldn't change this either, even if it was technically possible) I didn't feel so bad about the solution I came up with, which was naturally: Copy the payment files to the local SQL Server drives, using xp_cmdshell Run batch files (via xp_cmdshell) to parse the different formats using sed, a Unix utility (this was before Powershell) Use other Unix utilities (join, split, grep, wc) to process parsed files and generate metadata (size, date, checksum, line count) Run sqlcmd to execute a stored procedure that passed the parsed file names so it would bulk load the data to do a comparison bcp the compared data out to ANOTHER text file so that I could grep that data out of the original file Run another stored procedure to import the matched data into SQL Server so it could process the payments, including file metadata Process payment batches and log which division and vendor they belong to Email the payment details to the finance group (since it was too hard for them to run a web report with the same data…which they ran anyway to compare the emailed file against…which always matched, surprisingly) Email another report showing unmatched payments so they could manually void them…about 3 months afterward All in "Excel" format, using xp_sendmail (SQL 2000 system) Copy the unmatched data back to the original folder locations, making sure to match the file format exactly (if you've ever worked with ACH files, you'll understand why this sucked) If you're one of the 10 people who have read my blog before, you know that I love the DOS "for" command.  Like passionately.  Like fairy-tale love.  So my batch files were riddled with for loops, nested within other for loops, that called other batch files containing for loops.  
I think there was one section that had 4 or 5 nested for commands.  It was wrong, disturbed, and completely un-maintainable by anyone, even myself.  Months, even a year, after I left the company I got calls from someone who had to make a minor change to it, and they called me to talk them out of spraying the office with an AK-47 after looking at this code.  (for you Star Trek TOS fans) The funniest part of this, well, one of the funniest, is that I made the deadline…sort of, I was only a day late…and the DAMN THING WORKED practically unchanged for 3 years.  Most of the problems came from the manual parts of the overall process, like forgetting to decrypt the files, or missing/late files, or saved to the wrong folders.  I'm definitely not trying to toot my own horn here, because this was truly one of the dumbest, crappiest solutions I ever came up with.  Fortunately as far as I know it's no longer in use and someone has written a proper replacement.  Today I would knuckle down and do it in SSIS or Powershell, even if it took me weeks to get it right. The real lesson from this crap code is to make things MAINTAINABLE and UNDERSTANDABLE.  sed scripting regular expressions doesn't fit that criteria in any way.  If you ever find yourself under pressure to do something fast at all costs, DON'T DO IT.  Stop and consider long-term maintainability, not just for yourself but for others on your team.  If you can't explain the basic approach in under 5 minutes, it ultimately won't succeed.  And while you may love to leave all that crap behind, it may follow you anyway, and you'll step in it again.   P.S. - if you're wondering about all the manual stuff that couldn't be changed, it was because the entire process had gone through Six Sigma, and was deemed the best possible way.  Phew!  Talk about stink!
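
    For anyone who hasn't seen the pattern being confessed to above, a tiny hedged sketch of the xp_cmdshell-plus-bulk-load style it describes (the paths, batch file and table names are all made up) - shown to illustrate why it is hard to maintain, not as a recommendation.

      -- Illustration only: the shell-out/bulk-load pattern described in the post.
      -- Requires xp_cmdshell to be enabled, which is itself part of the problem.
      EXEC master..xp_cmdshell 'copy \\fileserver\payments\*.txt D:\ETL\inbox\';
      EXEC master..xp_cmdshell 'D:\ETL\bin\parse_payments.bat D:\ETL\inbox D:\ETL\parsed';

      BULK INSERT dbo.PaymentStaging
      FROM 'D:\ETL\parsed\payments.txt'
      WITH (FIELDTERMINATOR = '|', ROWTERMINATOR = '\n');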


  • Using openrowset to read an Excel file into a temp table; how do I reference that table?

    - by mattstuehler
    I'm trying to write a stored procedure that will read an Excel file into a temp table, then massage some of the data in that table, then insert selected rows from that table into a permanent table. So, it starts like this: SET @SQL = "select * into #mytemptable FROM OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database="+@file+";HDR=YES', 'SELECT * FROM [Sheet1$]')" EXEC (@SQL) That much seems to work. However, if I then try something like this: Select * from #mytemptable I get an error: Invalid object name '#mytemptable' Why isn't #mytemptable recognized? Is there a way to have #mytemptable accessible to the rest of the stored procedure? Many thanks in advance!
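
    The local temp table disappears because it is created inside the scope of the EXEC and is dropped when that scope ends. A hedged sketch of the usual workaround: create #mytemptable in the outer scope first (the column list must match the workbook, so it is hypothetical here) and have the dynamic SQL INSERT into it. A global temp table (##mytemptable) is the other common option, at the cost of being visible to every session.

      -- Sketch: @file and @SQL are the variables from the question; column list is hypothetical.
      CREATE TABLE #mytemptable (Col1 nvarchar(255), Col2 nvarchar(255));

      SET @SQL = N'INSERT INTO #mytemptable
                   SELECT * FROM OPENROWSET(''Microsoft.Jet.OLEDB.4.0'',
                       ''Excel 8.0;Database=' + @file + N';HDR=YES'',
                       ''SELECT * FROM [Sheet1$]'')';
      EXEC (@SQL);

      SELECT * FROM #mytemptable;   -- visible now: the table was created in this scope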


  • How to write code that communicates with an accelerator in the real address space (real mode)?

    - by ysap
    This is a preliminary question for an issue where I was asked to program a host-accelerator application on an embedded system we are building. The system is comprised of (among the standard peripherals) an ARM core and an accelerator processor. Both processors access the system bus via their bus interfaces, and share the same 32-bit global physical memory space. Both share access to the system's DRAM through the system bus. (The computer concept is similar to a BeagleBoard/Raspberry Pi, but with a specialized accelerator added.) The accelerator has its own internal memory (SRAM) which is exposed to the system and occupies a portion of the global address space (as opposed to how a graphics card would talk to the CPU via a "small" aperture in the system memory space). On the ARM core (the host) we plan on running Ubuntu 12.04. The mode of operation for communicating between the processors should be that the host issues memory transactions on the system bus targeted at the accelerator's internal memory. As far as my understanding goes, if I write a program for the host that simply writes to the physical address of the accelerator, chances are that the program will crash due to a segmentation violation. So, I assume that I need some way of communicating with the device in real mode. What is the easiest way to achieve this mode of operation?


  • Create a system image in Windows 8

    - by Greg Low
    One of the things that I've just come to accept is that the designers of Windows 8 and I think very differently.It'll take a long time to convince me that shutting down the computer is a "setting". Even after using Windows 8 for quite a while now, I still find that I struggle nearly every day, just trying to do things that I previously knew how to do. That's just not a good thing.Today I decided to create a system image as I hadn't made one lately. I started in Control Panel looking for Backup options. That yielded nothing except programs that wanted to "Save backup copies of my files with file history". I thought "oh well, let's just try the new search options". I hit the Windows key and typed "Backup". No, nothing came up there either.I searched again all over the Control Panel options to no avail.So it was time to hit Google again. Once again, clearly lots of people used to know how to do this and have been trying to work out where this option went.The first trick is that there are a bunch of Control Panel options that don't appear in the Control Panel. In the address bar at the top, if you click on Control Panel, you'll find there is an option that says "All Control Panel Options". That is curious given that's where I thought I was when I opened Control Panel. No hint is given on that screen that there are a bunch of hidden options. None the less, I then checked out "all" the options.The option that you need to create a system image in Windows 8 turns out to be the "Windows 7 File Recovery" option that appears in this extended list. Why does it say "Windows 7" when it's for "Windows 8" as well and I'm running "Windows 8"? Why do I have to choose an option that says "File Recovery" to create a system image backup?<sigh>But at least I've recorded it here for the next time I forget where to find it.


  • The null value cannot be assigned to a member with type System.Int64 which is a non-nullable value type

    - by BritishDeveloper
    I'm getting the following error in my MVC2 app using Linq to SQL (I am new to both). I am connected to an actual SQL server not weird mdf: System.InvalidOperationException The null value cannot be assigned to a member with type System.Int64 which is a non-nullable value type My SQL table has a column called MessageID. It is BigInt type and has a primary key, NOT NULL and an IDENTITY 1 1, no Default In my dbml designer it has the following declaration for this field: [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_MessageId", AutoSync=AutoSync.OnInsert, DbType="BigInt NOT NULL IDENTITY", IsPrimaryKey=true, IsDbGenerated=true)] public long MessageId { get { return this._MessageId; } set { if ((this._MessageId != value)) { this.OnMessageIdChanging(value); this.SendPropertyChanging(); this._MessageId = value; this.SendPropertyChanged("MessageId"); this.OnMessageIdChanged(); } } } It keeps telling me that null cannot be assigned - I'm not passing through null! It's a long - it can't even be null! Am I doing something stupid? I can't find a solution anywhere! I made this work by changing the type of this property to Nullable<long> but surely this can't be right? Update: I am using InsertOnSubmit. Simplified code: public ActionResult Create(Message message) { if (ModelState.IsValid) { var db = new MessagingDataContext(); db.Messages.InsertOnSubmit(message); db.SubmitChanges(); //line 93 (where it breaks) } } breaks on SubmitChanges() with the error at the top of this question. Update2: Stack trace: at Read_Object(ObjectMaterializer`1 ) at System.Data.Linq.SqlClient.ObjectReaderCompiler.ObjectReader`2.MoveNext() at System.Linq.Enumerable.FirstOrDefault[TSource](IEnumerable`1 source) at System.Data.Linq.ChangeDirector.StandardChangeDirector.DynamicInsert(TrackedObject item) at System.Data.Linq.ChangeDirector.StandardChangeDirector.Insert(TrackedObject item) at System.Data.Linq.ChangeProcessor.SubmitChanges(ConflictMode failureMode) at System.Data.Linq.DataContext.SubmitChanges(ConflictMode failureMode) at Qanda.Controllers.MessagingController.Ask(Message message) in C:\Qanda\Qanda\Controllers\MessagingController.cs:line 93 Update3: No one knows and I don't have enough clout to offer a bounty! So continued on my ASP.NET blog. Please help!
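
    One hedged thing worth ruling out, since the DBML marks MessageId as IsDbGenerated: confirm that the database the connection string actually points at really has the IDENTITY property on that column (a stale copy of the table without IDENTITY is a common way to end up with this kind of mismatch). The table name below is assumed.

      -- Diagnostic sketch only; 1 = the column is IDENTITY and SQL Server generates the value.
      SELECT COLUMNPROPERTY(OBJECT_ID('dbo.Messages'), 'MessageID', 'IsIdentity') AS IsIdentity;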


  • Configuration "diff" across Oracle WebCenter Sites instances

    - by Mark Fincham-Oracle
    Problem Statement With many Oracle WebCenter Sites environments - how do you know if the various configuration assets and settings are in sync across all of those environments? Background At Oracle we typically have a "W" shaped set of environments.  For the "Production" environments we typically have a disaster recovery clone as well and sometimes additional QA environments alongside the production management environment. In the case of www.java.com we have 10 different environments. All configuration assets/settings (CSElements, Templates, Start Menus etc..) start life on the Development Management environment and are then published downstream to other environments as part of the software development lifecycle. Ensuring that each of these 10 environments has the same set of Templates, CSElements, StartMenus, TreeTabs etc.. is impossible to do efficiently without automation. Solution Summary  The solution comprises of two components. A JSON data feed from each environment. A simple HTML page that consumes these JSON data feeds.  Data Feed: Create a JSON WebService on each environment. The WebService is no more than a SiteEntry + CSElement. The CSElement queries various DB tables to obtain details of the assets/settings returning this data in a JSON feed. Report: Create a simple HTML page that uses JQuery to fetch the JSON feed from each environment and display the results in a table. Since all assets (CSElements, Templates etc..) are published between environments they will have the same last modified date. If the last modified date of an asset is different in the JSON feed or is mising from an environment entirely then highlight that in the report table. Example Solution Details Step 1: Create a Site Entry + CSElement that outputs JSON Site Entry & CSElement Setup  The SiteEntry should be uncached so that the most recent configuration information is returned at all times. In the CSElement set the contenttype accordingly: Step 2: Write the CSElement Logic The basic logic, that we repeat for each asset or setting that we are interested in, is to query the DB using <ics:sql> and then loop over the resultset with <ics:listloop>. For example: <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> "templates": [ <ics:listloop listname="TemplateList"> {"name":"<ics:listget listname="TemplateList"  fieldname="name"/>", "modified":"<ics:listget listname="TemplateList"  fieldname="updateddate"/>"}, </ics:listloop> ], A comprehensive list of SQL queries to fetch each configuration asset/settings can be seen in the appendix at the end of this article. For the generation of the JSON data structure you could use Jettison (the library ships with the 11.1.1.8 version of the product), native Java 7 capabilities or (as the above example demonstrates) you could roll-your-own JSON output but that is not advised. Step 3: Create an HTML Report The JavaScript logic looks something like this.. 
1) Create a list of JSON feeds to fetch: ENVS['dev-mgmngt'] = 'http://dev-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json'; ENVS['dev-dlvry'] = 'http://dev-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-mngmnt'] = 'http://test-mngmnt.example.com/sites/ContentServer?d=&pagename=settings.json';  ENVS['test-dlvry'] = 'http://test-dlvry.example.com/sites/ContentServer?d=&pagename=settings.json';   2) Create a function to get the JSON feeds: function getDataForEnvironment(url){ return $.ajax({ type: 'GET', url: url, dataType: 'jsonp', beforeSend: function (jqXHR, settings){ jqXHR.originalEnv = env; jqXHR.originalUrl = url; }, success: function(json, status, jqXHR) { console.log('....success fetching: ' + jqXHR.originalUrl); // store the returned data in ALLDATA ALLDATA[jqXHR.originalEnv] = json; }, error: function(jqXHR, status, e) { console.log('....ERROR: Failed to get data from [' + url + '] ' + status + ' ' + e); } }); } 3) Fetch each JSON feed: for (var env in ENVS) { console.log('Fetching data for env [' + env +'].'); var promisedData = getDataForEnvironment(ENVS[env]); promisedData.success(function (data) {}); }  4) For each configuration asset or setting create a table in the report For example, CSElements: 1) Get a list of unique CSElement names from all of the returned JSON data. 2) For each unique CSElement name, create a row in the table  3) Select 1 environment to represent the master or ideal state (e.g. "Everything should be like Production Delivery") 4) For each environment, compare the last modified date of this envs CSElement to the master. Highlight any differences in last modified date or missing CSElements. 5) Repeat...    Appendix This section contains various SQL statements that can be used to retrieve configuration settings from the DB.  
Templates  <ics:sql sql="SELECT name,updateddate FROM Template WHERE status != 'VO'" listname="TemplateList" table="Template" /> CSElements <ics:sql sql="SELECT name,updateddate FROM CSElement WHERE status != 'VO'" listname="CSEList" table="CSElement" /> Start Menus <ics:sql sql="select sm.id, sm.cs_name, sm.cs_description, sm.cs_assettype, sm.cs_assetsubtype, sm.cs_itemtype, smr.cs_rolename, p.name from StartMenu sm, StartMenu_Sites sms, StartMenu_Roles smr, Publication p where sm.id=sms.ownerid and sm.id=smr.cs_ownerid and sms.pubid=p.id order by sm.id" listname="startList" table="Publication,StartMenu,StartMenu_Roles,StartMenu_Sites"/>  Publishing Configurations <ics:sql sql="select id, name, description, type, dest, factors from PubTarget" listname="pubTargetList" table="PubTarget" /> Tree Tabs <ics:sql sql="select tt.id, tt.title, tt.tooltip, p.name as pubname, ttr.cs_rolename, ttsect.name as sectname from TreeTabs tt, TreeTabs_Roles ttr, TreeTabs_Sect ttsect,TreeTabs_Sites ttsites LEFT JOIN Publication p  on p.id=ttsites.pubid where p.id is not null and tt.id=ttsites.ownerid and ttsites.pubid=p.id and tt.id=ttr.cs_ownerid and tt.id=ttsect.ownerid order by tt.id" listname="treeTabList" table="TreeTabs,TreeTabs_Roles,TreeTabs_Sect,TreeTabs_Sites,Publication" />  Filters <ics:sql sql="select name,description,classname from Filters" listname="filtersList" table="Filters" /> Attribute Types <ics:sql sql="select id,valuetype,name,updateddate from AttrTypes where status != 'VO'" listname="AttrList" table="AttrTypes" /> WebReference Patterns <ics:sql sql="select id,webroot,pattern,assettype,name,params,publication from WebReferencesPatterns" listname="WebRefList" table="WebReferencesPatterns" /> Device Groups <ics:sql sql="select id,devicegroupsuffix,updateddate,name from DeviceGroup" listname="DeviceList" table="DeviceGroup" /> Site Entries <ics:sql sql="select se.id,se.name,se.pagename,se.cselement_id,se.updateddate,cse.rootelement from SiteEntry se LEFT JOIN CSElement cse on cse.id = se.cselement_id where se.status != 'VO'" listname="SiteList" table="SiteEntry,CSElement" /> Webroots <ics:sql sql="select id,name,rooturl,updatedby,updateddate from WebRoot" listname="webrootList" table="WebRoot" /> Page Definitions <ics:sql sql="select pd.id, pd.name, pd.updatedby, pd.updateddate, pd.description, pdt.attributeid, pa.name as nameattr, pdt.requiredflag, pdt.ordinal from PageDefinition pd, PageDefinition_TAttr pdt, PageAttribute pa where pd.status != 'VO' and pa.id=pdt.attributeid and pdt.ownerid=pd.id order by pd.id,pdt.ordinal" listname="pageDefList" table="PageDefinition,PageAttribute,PageDefinition_TAttr" /> FW_Application <ics:sql sql="select id,name,updateddate from FW_Application where status != 'VO'" listname="FWList" table="FW_Application" /> Custom Elements <ics:sql sql="select elementname from ElementCatalog where elementname like 'CustomElements%'" listname="elementList" table="ElementCatalog" />


  • VBA - Access 03 - Iterating through a list box, with an if statement to evaluate

    - by Justin
    So I have a one list box with values like DeptA, DeptB, DeptC & DeptD. I have a method that causes these to automatically populate in this list box if they are applicable. So in other words, if they populate in this list box, I want the resulting logic to say they are "Yes" in a boolean field in the table. So to accomplish this I am trying to use this example of iteration to cycle through the list box first of all, and it works great: dim i as integer dim myval as string For i = o to me.lstResults.listcount - 1 myVal = lstResults.itemdata(i) Next i if i debug.print myval, i get the list of data items that i want from the list box. so now i am trying to evaluate that list so that I can have an UPDATE SQL statement to update the table as i need it to be done. so, i know this is a mistake, but this is what i tried to do (giving it as an example so that you can see what i am trying to get to here) dim sql as string dim i as integer dim myval as string dim db as database sql = "UPDATE tblMain SET " for i = 0 to me.lstResults.listcount - 1 myval = lstResults.itemdata(i) If MyVal = "DeptA" Then sql = sql & "DeptA = Yes" ElseIF myval = "DeptB" Then sql = sql & "DeptB = Yes" ElseIf MyVal = "DeptC" Then sql = sql & "DeptC = Yes" ElseIf MyVal = "DeptD" Then sql = sql & "DeptD = Yes" End If Next i debug.print (sql) sql = sql & ";" set db= currentdb db.execute(sql) msgbox "Good Luck!" So you can see why this is going to cause problems because the listbox that these values (DeptA, DeptB, etc) automatically populate in are dynamic....there is rarely one value in the listbox, and the list of values changes per OrderID (what the form I am using this on populates information for in the first place; unique instance). I am looking for something that will evaluate this list one at a time (i.e. iterate through the list of values, and look for "DeptA", and if it is found add yes to the SQL string, and if it not add no to the SQL string, then march on to the next iteration). Even though the listbox populates values dynamically, they are set values, meaning i know what could end up in it. Thanks for any help, Justin
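
    Two gaps in the concatenation above, independent of the loop itself: the finished statement needs commas between the SET assignments and a WHERE clause restricting it to the current OrderID, otherwise every row in tblMain gets updated. A hedged sketch of the statement the loop should ultimately produce (True/False literals; the field and table names are from the question, the values are illustrative):

      UPDATE tblMain
      SET DeptA = True,
          DeptB = False,
          DeptC = True,
          DeptD = False
      WHERE OrderID = 123;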


  • Database - Designing an "Events" Table

    - by Alix Axel
    After reading the tips from this great Nettuts+ article I've come up with a table schema that would separate highly volatile data from other tables subjected to heavy reads and at the same time lower the number of tables needed in the whole database schema, however I'm not sure if this is a good idea since it doesn't follow the rules of normalization and I would like to hear your advice, here is the general idea: I've four types of users modeled in a Class Table Inheritance structure, in the main "user" table I store data common to all the users (id, username, password, several flags, ...) along with some TIMESTAMP fields (date_created, date_updated, date_activated, date_lastLogin, ...). To quote the tip #16 from the Nettuts+ article mentioned above: Example 2: You have a “last_login” field in your table. It updates every time a user logs in to the website. But every update on a table causes the query cache for that table to be flushed. You can put that field into another table to keep updates to your users table to a minimum. Now it gets even trickier, I need to keep track of some user statistics like how many unique times a user profile was seen how many unique times a ad from a specific type of user was clicked how many unique times a post from a specific type of user was seen and so on... In my fully normalized database this adds up to about 8 to 10 additional tables, it's not a lot but I would like to keep things simple if I could, so I've come up with the following "events" table: |------|----------------|----------------|--------------|-----------| | ID | TABLE | EVENT | DATE | IP | |------|----------------|----------------|--------------|-----------| | 1 | user | login | 201004190030 | 127.0.0.1 | |------|----------------|----------------|--------------|-----------| | 1 | user | login | 201004190230 | 127.0.0.1 | |------|----------------|----------------|--------------|-----------| | 2 | user | created | 201004190031 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 2 | user | activated | 201004190234 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 2 | user | approved | 201004190930 | 217.0.0.1 | |------|----------------|----------------|--------------|-----------| | 2 | user | login | 201004191200 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | created | 201004191230 | 127.0.0.1 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | impressed | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | clicked | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | clicked | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | clicked | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | clicked | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 15 | user_ads | clicked | 201004191231 | 127.0.0.2 | |------|----------------|----------------|--------------|-----------| | 2 | user | blocked | 201004200319 | 217.0.0.1 | |------|----------------|----------------|--------------|-----------| | 2 | user | deleted | 201004200320 | 217.0.0.1 | |------|----------------|----------------|--------------|-----------| Basically the ID refers to 
the primary key (id) field in the TABLE table, I believe the rest should be pretty straightforward. One thing that I've come to like in this design is that I can keep track of all the user logins instead of just the last one, and thus generate some interesting metrics with that data. Due to the nature of the events table I also thought of making some optimizations, such as: #9: Since there is only a finite number of tables and a finite (and predetermined) number of events, the TABLE and EVENTS columns could be setup as ENUMs instead of VARCHARs to save some space. #14: Store IPs as UNSIGNED INT with INET_ATON() instead of VARCHARs. Store DATEs as TIMESTAMPs instead of DATETIMEs. Use the ARCHIVE (or the CSV?) engine instead of InnoDB / MyISAM. Overall, each event would only consume 14 bytes which is okay for my traffic I guess. Pros: Ability to store more detailed data (such as logins). No need to design (and code for) almost a dozen additional tables (dates and statistics). Reduces a few columns per table and keeps volatile data separated. Cons: Non-relational (still not as bad as EAV): SELECT * FROM events WHERE id = 2 AND table = 'user' ORDER BY date DESC(); 6 bytes overhead per event (ID, TABLE and EVENT). I'm more inclined to go with this approach since the pros seem to far outweigh the cons, but I'm still a little bit reluctant.. Am I missing something? What are your thoughts on this? Thanks!
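
    A hedged sketch of the proposed events table with the optimizations listed above applied (MySQL syntax; the ENUM value lists and the engine choice are assumptions to adjust - the ARCHIVE engine does not support UPDATE or DELETE and has very limited indexing).

      -- Sketch of the proposed design; `table` must be quoted because it is a reserved word.
      CREATE TABLE events (
        id      INT UNSIGNED NOT NULL,                 -- PK value of the row in `table`
        `table` ENUM('user', 'user_ads') NOT NULL,
        event   ENUM('login', 'created', 'activated', 'approved',
                     'blocked', 'deleted', 'impressed', 'clicked') NOT NULL,
        date    TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,
        ip      INT UNSIGNED NOT NULL                  -- INET_ATON() on write, INET_NTOA() on read
      ) ENGINE = ARCHIVE;

      -- INSERT INTO events VALUES (2, 'user', 'login', NOW(), INET_ATON('127.0.0.2'));
      -- SELECT * FROM events WHERE id = 2 AND `table` = 'user' ORDER BY date DESC;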


  • In MySQL, what is the most effective query design for joining large tables with many-to-many relationships?

    - by lighthouse65
    In our application, we collect data on automotive engine performance -- basically source data on engine performance based on the engine type, the vehicle running it and the engine design. Currently, the basis for new row inserts is an engine on-off period; we monitor performance variables based on a change in engine state from active to inactive and vice versa. The related engineState table looks like this: +---------+-----------+---------------+---------------------+---------------------+-----------------+ | vehicle | engine | engine_state | state_start_time | state_end_time | engine_variable | +---------+-----------+---------------+---------------------+---------------------+-----------------+ | 080025 | E01 | active | 2008-01-24 16:19:15 | 2008-01-24 16:24:45 | 720 | | 080028 | E02 | inactive | 2008-01-24 16:19:25 | 2008-01-24 16:22:17 | 304 | +---------+-----------+---------------+---------------------+---------------------+-----------------+ For a specific analysis, we would like to analyze table content based on a row granularity of minutes, rather than the current basis of active / inactive engine state. For this, we are thinking of creating a simple productionMinute table with a row for each minute in the period we are analyzing and joining the productionMinute and engineEvent tables on the date-time columns in each table. So if our period of analysis is from 2009-12-01 to 2010-02-28, we would create a new table with 129,600 rows, one for each minute of each day for that three-month period. The first few rows of the productionMinute table: +---------------------+ | production_minute | +---------------------+ | 2009-12-01 00:00 | | 2009-12-01 00:01 | | 2009-12-01 00:02 | | 2009-12-01 00:03 | +---------------------+ The join between the tables would be engineState AS es LEFT JOIN productionMinute AS pm ON es.state_start_time <= pm.production_minute AND pm.production_minute <= es.event_end_time. This join, however, brings up multiple environmental issues: The engineState table has 5 million rows and the productionMinute table has 130,000 rows When an engineState row spans more than one minute (i.e. the difference between es.state_start_time and es.state_end_time is greater than one minute), as is the case in the example above, there are multiple productionMinute table rows that join to a single engineState table row When there is more than one engine in operation during any given minute, also as per the example above, multiple engineState table rows join to a single productionMinute row In testing our logic and using only a small table extract (one day rather than 3 months, for the productionMinute table) the query takes over an hour to generate. In researching this item in order to improve performance so that it would be feasible to query three months of data, our thoughts were to create a temporary table from the engineEvent one, eliminating any table data that is not critical for the analysis, and joining the temporary table to the productionMinute table. We are also planning on experimenting with different joins -- specifically an inner join -- to see if that would improve performance. What is the best query design for joining tables with the many:many relationship between the join predicates as outlined above? What is the best join type (left / right, inner)?
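
    Before resorting to temporary tables, a hedged first step is usually to make sure the range join is index-assisted and that the aggregation happens in the same statement; a sketch following the post's column names is below. An index on the interval columns at least lets the optimizer narrow the scan on state_start_time, though interval joins like this remain inherently expensive, so measuring against the temp-table approach is still worthwhile.

      -- Sketch: support the range predicate, then aggregate per minute in one pass.
      ALTER TABLE engineState
        ADD INDEX idx_state_interval (state_start_time, state_end_time);

      SELECT pm.production_minute,
             COUNT(es.engine)        AS active_engine_states,
             AVG(es.engine_variable) AS avg_engine_variable
      FROM productionMinute AS pm
      LEFT JOIN engineState AS es
             ON es.state_start_time <= pm.production_minute
            AND pm.production_minute <= es.state_end_time
      GROUP BY pm.production_minute;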


  • Right code to retrieve data from a SQL Server database

    - by HasanGursoy
    Hi, I have some problems in database connection and wonder if I have something wrong in my code. Please review. This question is related: Switch between databases, use two databases simultaneously question. cs="Data Source=mywebsite.com;Initial Catalog=database;User Id=root;Password=toor;Connect Timeout=10;Pooling='true';" using (SqlConnection cnn = new SqlConnection(WebConfigurationManager.ConnectionStrings["cs"].ConnectionString)) { using (SqlCommand cmmnd = new SqlCommand("", cnn)) { try { cnn.Open(); #region Header & Description cmmnd.Parameters.Add("@CatID", SqlDbType.Int).Value = catId; cmmnd.CommandText = "SELECT UpperID, Title, Description FROM Categories WHERE CatID=@CatID;"; string mainCat = String.Empty, rootCat = String.Empty; using (SqlDataReader rdr = cmmnd.ExecuteReader()) { if (rdr.Read()) { mainCat = rdr["Title"].ToString(); upperId = Convert.ToInt32(rdr["UpperID"]); description = rdr["Title"]; } else { Response.Redirect("/", false); } } if (upperId > 0) //If upper category exists add its name { cmmnd.Parameters["@CatID"].Value = upperId; cmmnd.CommandText = "SELECT Title FROM Categories WHERE CatID=@CatID;"; using (SqlDataReader rdr = cmmnd.ExecuteReader()) { if (rdr.Read()) { rootCat = "<a href='x.aspx'>" + rdr["Title"] + "</a> &raquo; "; } } } #endregion #region Sub-Categories if (upperId == 0) //show only at root categories { cmmnd.Parameters["@CatID"].Value = catId; cmmnd.CommandText = "SELECT Count(CatID) FROM Categories WHERE UpperID=@CatID;"; if (Convert.ToInt32(cmmnd.ExecuteScalar()) > 0) { cmmnd.CommandText = "SELECT CatID, Title FROM Categories WHERE UpperID=@CatID ORDER BY Title;"; using (SqlDataReader rdr = cmmnd.ExecuteReader()) { while (rdr.Read()) { subcat.InnerHtml += "<a href='x.aspx'>" + rdr["Title"].ToString().ToLower() + "</a>\n"; description += rdr["Title"] + ", "; } } } } #endregion } catch (Exception ex) { HasanG.LogException(ex, Request.RawUrl, HttpContext.Current); Response.Redirect("/", false); } finally { cnn.Close(); } } } The random errors I'm receiving are: A transport-level error has occurred when sending the request to the server. (provider: TCP Provider, error: 0 - An existing connection was forcibly closed by the remote host.) A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server) Timeout expired. The timeout period elapsed prior to obtaining a connection from the pool. This may have occurred because all pooled connections were in use and max pool size was reached. Cannot open database "db" requested by the login. The login failed. Login failed for user 'root'.


  • (SQL) Selecting from a database based on multiple pairs of pairs

    - by Owen Allen
    The problem i've encountered is attempting to select rows from a database where 2 columns in that row align to specific pairs of data. IE selecting rows from data where id = 1 AND type = 'news'. Obviously, if it was 1 simple pair it would be easy, but the issue is we are selecting rows based on 100s of pair of data. I feel as if there must be some way to do this query without looping through the pairs and querying each individually. I'm hoping some SQL stackers can provide guidance. Here's a full code break down: Lets imagine that I have the following dataset where history_id is the primary key. I simplified the structure a bit regarding the dates for ease of reading. table: history history_id id type user_id date 1 1 news 1 5/1 2 1 news 1 5/1 3 1 photo 1 5/2 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 If the user wants to select rows from the database based on a date range we would take a subset of that data. SELECT history_id, id, type, user_id, date FROM history WHERE date BETWEEN '5/3' AND '5/5' Which returns the following dataset history_id id type user_id date 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 Now, using that subset of data I need to determine how many of those entries represent the first entry in the database for each type,id pairing. IE is row 4 the first time in the database that id: 3, type: news appears. So I use a with() min() query. In real code the two lists are programmatically generated from the result sets of our previous query, here I spelled them out for ease of reading. WITH previous AS ( SELECT history_id, id, type FROM history WHERE id IN (1,2,3,4) AND type IN ('news','photo') ) SELECT min(history_id) as history_id, id, type FROM previous GROUP BY id, type Which returns the following data set. history_id id type user_id date 1 1 news 1 5/1 2 1 news 1 5/1 3 1 photo 1 5/2 4 3 news 1 5/3 5 4 news 1 5/3 6 1 news 1 5/4 7 2 photo 1 5/4 8 2 photo 1 5/5 You'll notice it's the entire original dataset, because we are matching id and type individually in lists, rather than as a collective pairs. The result I desire is, but I can't figure out the SQL to get this result. history_id id type user_id date 1 1 news 1 5/1 4 3 news 1 5/3 5 4 news 1 5/3 7 2 photo 1 5/4 Obviously, I could go the route of looping through each pair and querying the database to determine it's first result, but that seems an inefficient solution. I figured one of the SQL gurus on this site might be able to spread some wisdom. In case I'm approaching this situation incorrectly, the gist of the whole routine is that the database stores all creations and edits in the same table. I need to track each users behavior and determine how many entries in the history table are edits or creations over a specific date range. Therefore I select all type:id pairs from the date range based on a user_id, and then for each pairing I determine if the user is responsible for the first that occurs in the database. If first, then creation else edit. Any assistance would be awesome.
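
    A hedged sketch of one way to do this in a single statement, using the post's column names and a dialect that supports CTEs (as the post's own WITH query does): feed the wanted (id, type) pairs in as a small inline table, join on both columns, and keep only the earliest history_id per pair. The pair list would be generated by the application from the date-range query.

      -- Sketch: the wanted pairs become an inline table that is joined on both columns.
      WITH wanted (id, type) AS (
          SELECT 1, 'news'  UNION ALL
          SELECT 2, 'photo' UNION ALL
          SELECT 3, 'news'  UNION ALL
          SELECT 4, 'news'
      ),
      firsts AS (
          SELECT h.id, h.type, MIN(h.history_id) AS history_id
          FROM history AS h
          JOIN wanted  AS w ON w.id = h.id AND w.type = h.type
          GROUP BY h.id, h.type
      )
      SELECT h.*
      FROM history AS h
      JOIN firsts  AS f ON f.history_id = h.history_id;

    Against the sample data this returns history_id 1, 4, 5 and 7, which matches the desired result set in the question.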


  • On the asp:Table control, how do we create a thead?

    - by balexandre
    From the MSDN article on the subject we can see that we create a TableHeaderRow that contains TableHeaderCells. But they add the table header like this: myTable.Rows.AddAt(0, headerRow); which outputs the HTML:

      <table id="Table1" ... >
        <tr>
          <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
          <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
          <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
        </tr>
        <tr>
          <td>(0,0)</td>
          <td>(0,1)</td>
          <td>(0,2)</td>
        </tr>
        ...

    and it should have <thead> and <tbody> (so it works seamlessly with tablesorter) :)

      <table id="Table1" ... >
        <thead>
          <tr>
            <th scope="column" abbr="Col 1 Head">Column 1 Header</th>
            <th scope="column" abbr="Col 2 Head">Column 2 Header</th>
            <th scope="column" abbr="Col 3 Head">Column 3 Header</th>
          </tr>
        </thead>
        <tbody>
          <tr>
            <td>(0,0)</td>
            <td>(0,1)</td>
            <td>(0,2)</td>
          </tr>
          ...
        </tbody>

    The HTML aspx code is <asp:Table ID="Table1" runat="server" />. How can I output the correct syntax? Just as information, the GridView control has this built in, as we just need to set the accessibility and use the HeaderRow: gv.UseAccessibleHeader = true; gv.HeaderRow.TableSection = TableRowSection.TableHeader; gv.HeaderRow.CssClass = "myclass"; but the question is for the Table control.


  • Will TSQL become useless because of new ORMs? [closed]

    - by Saeed Neamati
    By introducing LINQ to SQL, I found myself and my .NET developer colleagues gradually moving from TSQL to C# to create queries on the database. Entity Framework made that shift almost permanent. It's now nearly two years that I've been using LINQ to SQL and LINQ to Entities, and I haven't used TSQL that much. Yesterday, a colleague encountered a problem (he had to create a stored procedure) and we went to help him. We all found that our TSQL knowledge had diminished, and a simple SP that seemed trivial to us two or three years ago was a challenge to solve yesterday. Thus it came to my mind that while TSQL's life is attached to SQL Server, and logically as long as SQL Server lives and doesn't change its SQL language TSQL will also live, practically it might die, and soon very few people might know it. Am I right? Does the existence of ORMs like Entity Framework threaten TSQL's life and usability?


  • SQL Saturday #44 Huntington Beach Recap

    What a great day. It was long and tiring, but rewarding in so many ways. On Sunday morning, I was driving home and I decided to take the Pacific Coast Highway from Huntington Beach.  It was a great chance to exhale and just enjoy the sun and smells of the beach (I really love SoCal sometimes). And for future reference for all you speakers, the beach and ocean are only 5 minutes from the SQL Saturday location.  I just could help noticing also the shocking number of high priced cars on the road (4 Bentleys, 3 Ferraris, 1 Aston Martins, 3 Maserati, 1 Rolls Royce, and 2 Lamborghinis).  It made me think about this: Price of all those cars: $ 150,000+.  Impacting the ability of people to learn: Priceless.  We have positively impacted the education, knowledge, capabilities of not only our attendees, but also all of their companies and people they might help as well.  That is just staggering and something to be immensely proud of. To all of my fellow community leaders, I salute you. So lets talk about the event Overall We had over 220 people register for the event and had 180+ people attend the event. I was shooting for the magical 200 number, but I guess it just gives us more motivation to make it even bigger and better next time. We had a few snags along the way, but what event doesnt, but I think everything turned out great. I did not hear any negative comments and heard lots of positive comments along with people asking when the next one is going to be (More on that later). Location- Golden West College We could not have asked for a better partner for the event. Herb Cohen from Golden West College was the wizard behind the curtains. From the beginning, he was our advocate to the GWC Board and was instrumental in getting our event approved. The day off, Herb was a HUGE help getting any and all logistics that we needed taken care of. In the craziness of the early morning registration crush it was a big help knowing that he and Bret Stateham (Blog | Twitter) were taking care of testing projectors in all the rooms. Anything we needed he was there and was even proactive in getting some things that I had not even thought of (i.e. a dumpster for all of our garbage). I cannot thank Herb enough along with other members of the GWC staff including Minnie Higgins of the Career and Technical Education Division office, Jack Taylor, public safety, and Ron Pryor, Tech Services Support. And last, but not least, the Wireless on campus was absolutely FANTASTIC! Some lessons learned Unless you are a glutton for punishment, as I no doubt am, you most certainly want to give yourself more than six weeks to plan the event. I am lucky that I have a very understanding wife and had a wonderful set of co-coordinators helping me out. A big thanks goes out to Phil, Marlon (Blog | Twitter), Nitin (Twitter), Thomas (Blog | Twitter), Bret (Blog | Twitter), Ben, and Laurie. Thankfully, the sponsor and speaker community was hugely supportive and we were able to fill out the entire event with speakers and sponsors. I have to say that there is not a lot that I would change after this years event. There are obviously going to be some things that we can do better or differently next time, but overall I think it was a great event and I was more than happy with the response we received from the community. Sponsors We obviously could not have put together our event without our sponsors. So certainly have to show them some love. 
    Platinum Sponsors
    Quest Software - http://www.quest.com
    MySpace - http://www.myspace.com/

    Gold
    Strategy Companion - http://www.strategycompanion.com

    Silver
    Fusion-io - http://www.fusionio.com

    Bronze
    WestClinTech - http://westclintech.com
    Professional Association For SQL Server - http://www.sqlpass.org
    Attunity - http://www.attunity.com
    Sharepoint 360 - http://www.sharepoint360.com

    Some additional thanks

    Andy Warren (Blog | Twitter) - always there to answer my questions and help out when I had issues or questions with the website. The amount of work that he and everyone else puts into SQL Saturday is amazing. What a great gift to the community!

    Einstein Bros. Bagels - our breakfast vendor; they arrived perfectly on time with yummy bagels, sweets and, most importantly, coffee.

    Luccis Deli (http://www.luccisdeli.com) - our lunch vendor. They were great to work with, the food was excellent, and they worked with us to get a great price. I heard lots of great comments about the lunches; definitely not your ordinary box lunch.

    Moving Forward

    Unfortunately, the work does not end after the event. We have a few things to clear up, such as surveys, sponsor follow-ups, presentations uploaded to the website, and expense reimbursement. Hopefully all of that should be cleared up within the next couple of weeks. After that, as a group we are going to get together and decide what our next steps are. We definitely want to keep some of the momentum that we are building as a SQL community and channel that into future SQL Saturdays and other types of community events. In the meantime, for additional training be sure to check out your local user group and PASS.

    San Diego SQL Server Users Group - http://www.sdsqlug.org/home/index.cfm
    Orange County SQL Server Users Group - http://www.sqloc.com/
    L.A. SQL Server Users Group - http://www.sql.la/
    SQL PASS - http://www.sqlpass.org/
    24 Hours of PASS - http://www.sqlpass.org/24hours/2010/

    So stay tuned, there will be more events to come in SoCal!

    Read the article

  • SQL 2005 Transaction Rollback Hung–unresolved deadlock

    - by steveh99999
    Encountered an interesting issue recently with a SQL 2005 SP3 Enterprise Edition system. Every weekend, a full database reindex was run on this system – normally it took around one and a half hours. Then, one weekend, the job ran for over 17 hours and had still not completed, at which point the DBA cancelled the job. Job status is now cancelled – issue over… However, cancelling the job had not killed the reindex transaction – DBCC OPENTRAN still showed the transaction as open. The oldest open transaction in the database was now over 17 hours old. Consequently, transaction log % used was growing dramatically and locks were still being held in the database. Further attempts to kill the transaction did nothing, i.e. we had a transaction which could not be killed. In sysprocesses, it was apparent the SPID was in rollback status, but it was not accumulating CPU or IO. Was the SPID stuck? On examination of the SQL errorlog, shortly after the reindex had started a whole bunch of deadlock output had been produced by trace flag 1222, followed by this:

    spid5s      ***Stack Dump being sent to xxxxxxx\SQLDump0042.txt
    spid5s      * *******************************************************************************
    spid5s      *
    spid5s      * BEGIN STACK DUMP:
    spid5s      *   12/05/10 01:04:47 spid 5
    spid5s      *
    spid5s      * Unresolved deadlock
    spid5s      *
    spid5s      *
    spid5s      * *******************************************************************************
    spid5s      * -------------------------------------------------------------------------------
    spid5s      * Short Stack Dump
    spid5s      Stack Signature for the dump is 0x000001D7
    spid5s      External dump process return code 0x20000001.

    Unresolved deadlock – I don't think I've ever seen one of these before… A quick call to Microsoft support confirmed that the following bug had been hit: http://support.microsoft.com/kb/961479. So the only option to get rid of the hung SPID was to restart SQL Server… Fortunately SQL Server restarted without any issues, and I was pleasantly surprised to see that recovery on this particular database was fast. However, restarting SQL Server to fix an issue is not something I would normally rush to do. Short-term fix: the reindex was changed to use a MAXDOP of 1. Longer-term fix: apply the correct CU, or wait for SQL 2005 SP4, which should be released any day now, I hope.
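
    For reference, here is a rough T-SQL sketch of the checks described above plus the short-term workaround; the SPID, table, and index names are hypothetical:

        -- Is there a long-running open transaction in the current database?
        DBCC OPENTRAN;

        -- Inspect the session: is it in rollback, and is it actually doing any work?
        SELECT spid, status, cmd, open_tran, cpu, physical_io, waittime, lastwaittype
        FROM master.dbo.sysprocesses
        WHERE spid = 55;          -- 55 is a hypothetical SPID

        -- Reports rollback progress for a session that is rolling back (does not kill it)
        KILL 55 WITH STATUSONLY;  -- hypothetical SPID

        -- Short-term workaround: run the weekend reindex single-threaded
        ALTER INDEX ALL ON dbo.SomeLargeTable   -- hypothetical table
        REBUILD WITH (MAXDOP = 1);

    In this particular case, of course, none of the KILL attempts helped; only the restart cleared the hung SPID.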

    Read the article

  • ODI 11g – Oracle Multi Table Insert

    - by David Allan
    With the IKM Oracle Multi Table Insert you can generate Oracle specific DML for inserting into multiple target tables from a single query result – without reprocessing the query or staging its result. When designing this to exploit the IKM you must split the problem into the reusable parts – the select part goes in one interface (I named SELECT_PART), then each target goes in a separate interface (INSERT_SPECIAL and INSERT_REGULAR). So for my statement below…

    /* INSERT_SPECIAL interface */
    insert all
      when 1=1 and (INCOME_LEVEL > 250000) then
        into SCOTT.CUSTOMERS_NEW
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        values
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)

    /* INSERT_REGULAR interface */
      when 1=1 then
        into SCOTT.CUSTOMERS_SPECIAL
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)
        values
          (ID, NAME, GENDER, BIRTH_DATE, MARITAL_STATUS, INCOME_LEVEL, CREDIT_LIMIT, EMAIL, USER_CREATED, DATE_CREATED, USER_MODIFIED, DATE_MODIFIED)

    /* SELECT_PART interface */
    select
      CUSTOMERS.EMAIL EMAIL,
      CUSTOMERS.CREDIT_LIMIT CREDIT_LIMIT,
      UPPER(CUSTOMERS.NAME) NAME,
      CUSTOMERS.USER_MODIFIED USER_MODIFIED,
      CUSTOMERS.DATE_MODIFIED DATE_MODIFIED,
      CUSTOMERS.BIRTH_DATE BIRTH_DATE,
      CUSTOMERS.MARITAL_STATUS MARITAL_STATUS,
      CUSTOMERS.ID ID,
      CUSTOMERS.USER_CREATED USER_CREATED,
      CUSTOMERS.GENDER GENDER,
      CUSTOMERS.DATE_CREATED DATE_CREATED,
      CUSTOMERS.INCOME_LEVEL INCOME_LEVEL
    from SCOTT.CUSTOMERS CUSTOMERS
    where (1=1)

    Firstly I create a SELECT_PART temporary interface for the query to be reused and in the IKM assignment I state that it is defining the query, it is not a target and it should not be executed. Then in my INSERT_SPECIAL interface loading a target with a filter, I set define query to false, then set true for the target table and execute to false. This interface uses the SELECT_PART query definition interface as a source. Finally in my final interface loading another target I set define query to false again, set target table to true and execute to true – this is the go run it indicator! To coordinate the statement construction you will need to create a package with the select and insert statements. With 11g you can now execute the package in simulation mode and preview the generated code including the SQL statements. Hopefully this helps shed some light on how you can leverage the Oracle MTI statement.

    A similar IKM exists for Teradata. The ODI IKM Teradata Multi Statement supports this multi statement request in 11g, here is an extract from the paper at www.teradata.com/white-papers/born-to-be-parallel-eb3053/ : "Teradata Database offers an SQL extension called a Multi-Statement Request that allows several distinct SQL statements to be bundled together and sent to the optimizer as if they were one. Teradata Database will attempt to execute these SQL statements in parallel. When this feature is used, any sub-expressions that the different SQL statements have in common will be executed once, and the results shared among them." It works in the same way as the ODI MTI IKM: multiple interfaces orchestrated in a package, each interface contributes some SQL, and the last interface in the chain executes the multi statement.
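
    For readers who have not used the underlying Oracle feature, here is a minimal, generic sketch of a conditional multi-table insert, the statement shape the IKM assembles from the interfaces above. The table and column names are invented for illustration and are unrelated to the SCOTT schema used in the example:

        INSERT ALL
          -- rows matching the filter go to the first target
          WHEN income_level > 250000 THEN
            INTO customers_high (id, name, income_level)
            VALUES (id, name, income_level)
          -- every row goes to the second target
          WHEN 1 = 1 THEN
            INTO customers_all (id, name, income_level)
            VALUES (id, name, income_level)
        SELECT id, name, income_level
        FROM   customers;

    The source query runs once, and each row is evaluated against every WHEN condition to decide which INTO clauses receive it, which is why no staging or reprocessing of the query result is needed.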

    Read the article

< Previous Page | 323 324 325 326 327 328 329 330 331 332 333 334  | Next Page >