Search Results

Search found 47861 results on 1915 pages for 'sql update'.

Page 251/1915

  • Why Do I See the "In Recovery" Msg, and How Can I Prevent it?

    - by John Hansen
    The project I'm working on creates a local copy of the SQL Server database for each SVN branch you work on. We're running SQL Server 2008 Express with Advanced Services on our local machine to host it. When we create a new branch, the build script creates a new database with the ID of that branch, creates the schema objects, and copies over a selection of data from the production shadow server. After the database is created, it, or other databases on the local machine, will often go into "In Recovery" mode for several minutes. After several refreshes it comes up and is happy, but will occasionally go back into "In Recovery" mode.

    The database is created in simple recovery mode. The file names aren't specified, so it uses default paths for files. The size of the database after loading data is ~400 MB. It is running in SQL Server 2005 compatibility mode. The command that creates the database is:

        sqlcmd -S $(DBServer) -Q "IF NOT EXISTS (SELECT [name] FROM sysdatabases WHERE [name] = '$(DBName)') BEGIN CREATE DATABASE [$(DBName)]; print 'Created $(DBName)'; END"

    ...where $(DBName) and $(DBServer) are MSBuild parameters.

    I got a nice clean log file this morning. When I turned on my computer it started all five databases, but two of them show transactions being rolled forward and rolled back. Then it just keeps trying to start up all five of the databases:

        2010-06-10 08:24:59.74 spid52 Starting up database 'ASPState'.
        2010-06-10 08:24:59.82 spid52 Starting up database 'CommunityLibrary'.
        2010-06-10 08:25:03.97 spid52 Starting up database 'DLG-R8441'.
        2010-06-10 08:25:05.07 spid52 2 transactions rolled forward in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 0 transactions rolled back in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:05.14 spid52 Recovery is writing a checkpoint in database 'DLG-R8441' (6). This is an informational message only. No user action is required.
        2010-06-10 08:25:11.23 spid52 Starting up database 'DLG-R8979'.
        2010-06-10 08:25:12.31 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:13.17 spid52 2 transactions rolled forward in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 0 transactions rolled back in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:13.22 spid52 Recovery is writing a checkpoint in database 'DLG-R8979' (9). This is an informational message only. No user action is required.
        2010-06-10 08:25:18.43 spid52 Starting up database 'Rls QA'.
        2010-06-10 08:25:19.13 spid46s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:23.29 spid36s Starting up database 'DLG-R8441'.
        2010-06-10 08:25:27.91 spid52 Starting up database 'ASPState'.
        2010-06-10 08:25:29.80 spid41s Starting up database 'DLG-R8979'.
        2010-06-10 08:25:31.22 spid52 Starting up database 'Rls QA'.

    In this case it kept trying to start the databases continuously until I shut down SQL Server at 08:48:19.72, 23 minutes later. Meanwhile, I actually am able to use the databases much of the time.
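    One cause worth ruling out (an editorial addition, not from the original question): on SQL Server Express, new databases inherit AUTO_CLOSE ON from the model database, and an auto-closed database runs crash recovery every time it is reopened, which produces exactly this pattern of repeated "Starting up database" messages. A minimal check and fix, with the database name taken from the log above:

        -- List databases that will close (and later recover) when idle.
        SELECT name, is_auto_close_on
        FROM sys.databases
        WHERE is_auto_close_on = 1;

        -- Keep a branch database open between uses.
        ALTER DATABASE [DLG-R8441] SET AUTO_CLOSE OFF;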

    Read the article

  • How to update a table in MySQL using a GROUP BY select over a second table and the target table itself

    - by Jader Dias
    I can do this:

        SELECT t2.value + SUM(t3.value)
        FROM tableA t2, tableB t3
        WHERE t2.somekey = t3.somekey
        GROUP BY t3.somekey

    But how do I do this?

        UPDATE tableA t1
        SET speed = (
            SELECT t2.value + SUM(t3.value)
            FROM tableA t2, tableB t3
            WHERE t2.somekey = t3.somekey
            AND t1.somekey = t3.somekey
            GROUP BY t3.somekey
        );

    MySQL says it's illegal, since you can't specify target table t1 for update in the FROM clause.
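    The standard workaround for this restriction (an editorial addition; the derived-table trick is the usual answer to MySQL error 1093, not something from the original post) is to wrap the self-referencing SELECT in a subquery, which MySQL materializes before the UPDATE touches tableA. A sketch, reusing the columns above:

        UPDATE tableA t1
        JOIN (
            SELECT t3.somekey, t2.value + SUM(t3.value) AS new_speed
            FROM tableA t2, tableB t3
            WHERE t2.somekey = t3.somekey
            GROUP BY t3.somekey
        ) x ON x.somekey = t1.somekey
        SET t1.speed = x.new_speed;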

    Read the article

  • How to track downloads from an Eclipse update site?

    - by Uri
    I have a plug-in that I am distributing via an Eclipse update site. I want to track how many times it is downloaded, and preferably by whom. For regular pages on my site I can use Google Analytics, but Eclipse doesn't request any HTML pages when it accesses an update site. Is there any way to do this when I don't have access to the hosting Apache server?

    Read the article

  • How to roll back/undo a yum update on Fedora after mixing Fedora versions

    - by misteryes
    I want to install texlive on my Fedora 16 laptop with the following procedure:

        # yum remove tex-* texlive-*
        # cat > /etc/yum.repos.d/texlive.repo <<EOF
        [texlive]
        name=texlive
        baseurl=http://jnovy.fedorapeople.org/texlive/2012/packages.f17/
        enabled=1
        gpgcheck=0
        EOF
        # yum update
        # yum install texlive

    After yum update I noticed that my laptop is Fedora 16 while I had used 2012/packages.f17/, so I modified /etc/yum.repos.d/texlive.repo to use 2011/packages.f16 and ran yum update again. However, there are many errors:

        [root@kitty esolve]# yum update
        Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
        http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml
        Trying other mirror.
        Setting up Update Process
        Resolving Dependencies
        --> Running transaction check
        ---> Package dvipng.x86_64 0:1.14-1.fc15 will be obsoleted
        ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted
        --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64
        ---> Package mkvtoolnix.x86_64 0:5.8.0-1 will be updated
        ---> Package mkvtoolnix.x86_64 0:6.3.0-1 will be an update
        ---> Package nautilus-dropbox.x86_64 0:1.4.0-1.fc10 will be updated
        ---> Package nautilus-dropbox.x86_64 0:1.6.0-1.fc10 will be an update
        ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting
        --> Processing Dependency: texlive-kpathsea-lib = 2:2012-19.20130317_r29408.fc17 for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        --> Processing Dependency: texlive-base for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        --> Processing Dependency: tex-dvipng for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        --> Processing Dependency: libkpathsea.so.6()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        ---> Package texlive-kpathsea.noarch 2:svn28792.0-19.fc17 will be obsoleting
        --> Processing Dependency: texlive-kpathsea-bin for package: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch
        --> Running transaction check
        ---> Package kpathsea.x86_64 0:2007-66.fc16 will be obsoleted
        --> Processing Dependency: libkpathsea.so.4()(64bit) for package: evince-dvi-3.2.1-2.fc16.x86_64
        ---> Package texlive-base.noarch 2:2012-19.20130317_r29408.fc17 will be installed
        ---> Package texlive-dvipng.noarch 2:svn26689.1.14-19.fc17 will be installed
        ---> Package texlive-dvipng-bin.x86_64 2:svn26509.0-19.20130317_r29408.fc17 will be obsoleting
        --> Processing Dependency: libpng15.so.15()(64bit) for package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64
        ---> Package texlive-kpathsea-bin.x86_64 2:svn27347.0-19.20130317_r29408.fc17 will be installed
        ---> Package texlive-kpathsea-lib.x86_64 2:2012-19.20130317_r29408.fc17 will be installed
        --> Finished Dependency Resolution
        Error: Package: evince-dvi-3.2.1-2.fc16.x86_64 (@fedora)
               Requires: libkpathsea.so.4()(64bit)
               Removing: kpathsea-2007-66.fc16.x86_64 (@so-updates)
                   libkpathsea.so.4()(64bit)
               Obsoleted By: 2:texlive-kpathsea-svn28792.0-19.fc17.noarch (texlive)
                   Not found
        Error: Package: 2:texlive-dvipng-bin-svn26509.0-19.20130317_r29408.fc17.x86_64 (texlive)
               Requires: libpng15.so.15()(64bit)
        You could try using --skip-broken to work around the problem
        You could try running: rpm -Va --nofiles --nodigest

    And when I do yum install texlive, it simply tries to install the f17 version, which fails. What can I do to install the f16 version? How can I undo the yum update done with 2012/packages.f17/? I tried yum history, and for today I only have:

        Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
        ID  | Login user           | Date and time    | Action(s) | Altered
        -------------------------------------------------------------------------------
        124 | esolve ... <esolve>  | 2013-09-12 18:35 | Erase     |   24
        123 | root <root>          | 2013-08-23 11:08 | Update    |    1
        122 | root <root>          | 2013-08-21 14:13 | Update    |    1 <
        121 | esolve ... <esolve>  | 2013-05-31 15:36 | Install   |    1 >
        120 | root <root>          | 2013-05-29 15:13 | Install   |    1 <
        119 | root <root>          | 2013-04-18 13:13 | Update    |    1 ><

    ...which seems unrelated to the yum update. The shell history:

        1003 yum update
        1004 vim
        1005 vim /etc/yum.repos.d/texlive.repo
        1006 yum update
        1007 yum install texlive
        1008 vim /etc/yum.repos.d/texlive.repo
        1009 clear
        1010 yum history
        1011 yum history list
        1012 vim
        1013 vim /etc/yum.repos.d/texlive.repo
        1014 yum history list
        1015 history

    I also tried yum history undo 124, but it failed:

        [root@kitty esolve]# yum history undo 124
        Loaded plugins: auto-update-debuginfo, langpacks, presto, refresh-packagekit
        http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found : http://repos.fedorapeople.org/repos/leigh123linux/cinnamon/fedora-16/x86_64/repodata/repomd.xml
        Trying other mirror.
        Undoing transaction 124, from Thu Sep 12 18:35:31 2013
            Erase R-2.14.1-1.fc16.x86_64 ?
            Erase R-core-2.14.1-1.fc16.x86_64 ?
            Erase R-devel-2.14.1-1.fc16.x86_64 ?
            Erase a2ps-4.14-12.fc15.x86_64 ?
            Erase docbook-utils-pdf-0.6.14-29.fc16.noarch ?
            Erase html2ps-1.0-0.7.b7.fc15.noarch ?
            Erase jadetex-3.13-10.fc15.noarch ?
            Erase kile-2.1.1-1.fc16.x86_64 ?
            Erase linuxdoc-tools-0.9.66-9.fc15.x86_64 ?
            Erase tetex-dvipost-1.1-12.fc15.x86_64 ?
            Erase tex-cm-lgc-0.5-18.fc15.noarch ?
            Erase tex-preview-11.86-6.fc16.noarch ?
            Erase texinfo-tex-4.13a-15.fc15.x86_64 ?
            Erase texlive-2007-66.fc16.x86_64 ?
            Erase texlive-dvips-2007-66.fc16.x86_64 ?
            Erase texlive-latex-2007-66.fc16.x86_64 ?
            Erase texlive-texmf-2007-40.fc16.noarch ?
            Erase texlive-texmf-dvips-2007-40.fc16.noarch ?
            Erase texlive-texmf-fonts-2007-40.fc16.noarch ?
            Erase texlive-texmf-latex-2007-40.fc16.noarch ?
            Erase texlive-utils-2007-66.fc16.x86_64 ?
            Erase texmaker-1:3.2.2-1.fc16.x86_64 ?
            Erase texmf-RR-Inria-4.11-inria.0.noarch ?
            Erase xdvik-22.84.14-9.fc15.x86_64 ?
        Error: No package(s) available to install

    Read the article

  • vb.net : is it possible to connect to sql server 2008 via odbc but not through vb.net code?

    - by phill
    I'm supporting an old VB.NET program whose database was moved from SQL Server 2005 to SQL Server 2008. Is there a setting on SQL Server 2008 which will allow ODBC connections to access the database, but not allow VB.NET to connect to it programmatically? The error I keep receiving in the app is:

        An error has occurred while establishing a connection to the server. When connecting to SQL Server 2005, this failure may be caused by the fact that under the default settings SQL Server does not allow remote connections. (provider: Named Pipes Provider, error: 40 - Could not open a connection to SQL Server)

    However, I can connect when I create a system DSN to the SQL Server instance, and through VS2005's Tools > Connect to Database. Here is the code I'm using to connect:

        Dim strC As String
        strC = "data source=bob; database=subscribers; user id=bobuser; password=passme"
        Dim connection As New SqlClient.SqlConnection(strC)
        Try
            connection.Open()
        Catch ex As Exception
            MsgBox(ex.Message)
        End Try
        connection.Close()
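    Not part of the original post, but since the error names the Named Pipes provider while the DSN works, it helps to see which transport the working connections actually use. A small diagnostic sketch, run from a connection that does connect (e.g. over the ODBC DSN):

        -- If the working sessions all report TCP, the SqlClient attempt is
        -- likely falling back to named pipes; forcing TCP in the .NET
        -- connection string ("data source=tcp:bob") is worth a try.
        SELECT session_id, net_transport, auth_scheme
        FROM sys.dm_exec_connections;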

    Read the article

  • SQL Server 2005 to 2008 Bak file help please!

    - by Brandon
    I have a SQL Server 2005 database backup that I want to transfer to SQL Server 2008 on my server. I spent 3 days transferring the .bak file from my own machine to my server, and when I tried to restore it I got an error. I then read online about a completely different method for moving a SQL Server 2005 database to SQL Server 2008: the detach-and-attach method, which means detaching the database in SQL Server 2005, transferring the MDF file via FTP to my server, and attaching it in SQL Server 2008. But I already used a lot of bandwidth transferring the .bak file to my server. Is there a way to convert my .bak file, which is already on my server, to an MDF file and attach it in SQL Server 2008?
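    A sketch of the usual answer (an editorial addition, not from the thread; the paths and logical file names below are placeholders you would read from FILELISTONLY): no conversion is needed, because SQL Server 2008 can restore a 2005 .bak directly, upgrading the database in the process. The restore error is more often a damaged transfer or wrong file paths, which the first two statements check:

        -- Confirm the transferred backup is readable, then list its
        -- logical file names for the MOVE clauses.
        RESTORE VERIFYONLY FROM DISK = N'D:\Backups\MyDb.bak';
        RESTORE FILELISTONLY FROM DISK = N'D:\Backups\MyDb.bak';

        -- Restore onto the 2008 instance, relocating the files to
        -- directories that exist on the server.
        RESTORE DATABASE [MyDb]
        FROM DISK = N'D:\Backups\MyDb.bak'
        WITH MOVE N'MyDb' TO N'D:\Data\MyDb.mdf',
             MOVE N'MyDb_log' TO N'D:\Data\MyDb_log.ldf';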

    Read the article

  • Access 2007 VBA & SQL - Update a Subform pointed at a dynamically created query

    - by Lucretius
    Abstract: I'm using VBA to recreate a query each time a user selects one of 3 options from a drop-down menu, appending a WHERE clause if they've selected anything from the combo boxes. I then attempt to get the information displayed on the form to refresh, thereby filtering what is displayed in the table based on user input.

    1) Dynamically created query using VBA (note: the IsNull() tests on strWhere in the original never fire, because a VBA String can never be Null, so the checks below use Len() instead):

        Private Sub BuildQuery()
            ' This sub routine will redefine the subQryAllJobsQuery based on input
            ' from the user on the Management tab.
            Dim strQryName As String
            Dim strSql As String    ' Main SQL SELECT statement
            Dim strWhere As String  ' Optional WHERE clause
            Dim qryDef As DAO.QueryDef
            Dim dbs As DAO.Database

            strQryName = "qryAllOpenJobs"
            strSql = "SELECT * FROM tblOpenJobs"
            Set dbs = CurrentDb

            ' In case the query already exists we should delete it so that we can
            ' rebuild it. The ObjectExists() function calls a public function in
            ' the GlobalVariables module.
            If ObjectExists("Query", strQryName) Then
                DoCmd.DeleteObject acQuery, strQryName
            End If

            ' Check to see if anything was selected from the Shift drop-down
            ' menu. If so, begin the WHERE clause.
            If Not IsNull(Me.cboShift.Value) Then
                strWhere = "WHERE tblOpenJobs.[Shift] = '" & Me.cboShift.Value & "'"
            End If

            ' Check to see if anything was selected from the Department drop-down
            ' menu. If so, append or begin the WHERE clause.
            If Not IsNull(Me.cboDepartment.Value) Then
                If Len(strWhere) > 0 Then
                    strWhere = strWhere & " AND tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'"
                Else
                    strWhere = "WHERE tblOpenJobs.[Department] = '" & Me.cboDepartment.Value & "'"
                End If
            End If

            ' Check to see if anything was entered in the Date field. If so,
            ' append or begin the WHERE clause.
            If Not IsNull(Me.txtDate.Value) Then
                If Len(strWhere) > 0 Then
                    strWhere = strWhere & " AND tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'"
                Else
                    strWhere = "WHERE tblOpenJobs.[Date] = '" & Me.txtDate.Value & "'"
                End If
            End If

            ' Concatenate the SELECT and the WHERE clause together, unless all
            ' three parameters are null, in which case create just the plain
            ' SELECT statement.
            If IsNull(Me.cboShift.Value) And IsNull(Me.cboDepartment.Value) And IsNull(Me.txtDate.Value) Then
                Set qryDef = dbs.CreateQueryDef(strQryName, strSql)
            Else
                strSql = strSql & " " & strWhere
                Set qryDef = dbs.CreateQueryDef(strQryName, strSql)
            End If
        End Sub

    2) Main form where the user selects items from combo boxes. Picture of the main form and subform: http://i48.tinypic.com/25pjw2a.png

    3) Subform pointed at the query created in step 1.

    Chain of events:
    1) User selects an item from the drop-down list on the main form.
    2) The old query is deleted and a new query is generated (same name).
    3) The subform pointed at the query does not update, but if you open the query by itself the correct results are displayed.

    Name of the query: qryAllOpenJobs; name of the subform: subQryAllOpenJobs. Also, the Row Source of subQryAllOpenJobs = qryAllOpenJobs. Name of the main form: frmManagement.
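    Not from the original post: a design that avoids deleting and recreating the QueryDef entirely is a single saved parameter query in which each criterion is optional, so nothing has to be rebuilt when a combo box changes. A sketch in Access SQL, with pShift, pDepartment, and pDate as made-up parameter names:

        PARAMETERS pShift Text ( 255 ), pDepartment Text ( 255 ), pDate DateTime;
        SELECT *
        FROM tblOpenJobs
        WHERE ([Shift] = pShift OR pShift IS NULL)
          AND ([Department] = pDepartment OR pDepartment IS NULL)
          AND ([Date] = pDate OR pDate IS NULL);

    With the subform bound to that query, the AfterUpdate event of each combo box only needs to requery the subform control (Me.subQryAllOpenJobs.Requery), which sidesteps the stale-subform problem described above.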

    Read the article

  • Sensible Way to Pass Web Data in XML to a SQL Server Database

    - by Emtucifor
    After exploring several different ways to pass web data to a database for update purposes, I'm wondering if XML might be a good strategy. The database is currently SQL 2000. In a few months it will move to SQL 2005 and I will be able to change things if needed, but I need a SQL 2000 solution now. First of all, the database in question uses the EAV model. I know that this kind of database is generally highly frowned on, so for the purposes of this question, please just accept that this is not going to change.

    The current update method has the web server inserting values (that have all been converted first to their correct underlying types, then to sql_variant) into a temp table. A stored procedure is then run which expects the temp table to exist, and it takes care of updating, inserting, or deleting things as needed. So far, only a single element has needed to be updated at a time. But now, there is a requirement to be able to edit multiple elements at once, and also to support hierarchical elements, each of which can have its own list of attributes.

    Here's some example XML I hand-typed to demonstrate what I'm thinking of. Note that in this database the Entity is Element, and an ID of 0 signifies "create", a.k.a. an insert of a new item.

        <Elements>
          <Element ID="1234">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">287</Attr>
            <Attr ID="234">
              <Element ID="99825">
                <Attr ID="7">Value1</Attr>
                <Attr ID="8">Value2</Attr>
                <Attr ID="9" Action="delete" />
              </Element>
              <Element ID="99826" Action="delete" />
              <Element ID="0" Type="24">
                <Attr ID="7">Value4</Attr>
                <Attr ID="8">Value5</Attr>
                <Attr ID="9">Value6</Attr>
              </Element>
              <Element ID="0" Type="24">
                <Attr ID="7">Value7</Attr>
                <Attr ID="8">Value8</Attr>
                <Attr ID="9">Value9</Attr>
              </Element>
            </Attr>
            <Rel ID="3827" Action="delete" />
            <Rel ID="2284" Role="parent">
              <Element ID="3827" />
              <Element ID="3829" />
              <Attr ID="665">1</Attr>
            </Rel>
            <Rel ID="0" Type="23" Role="child">
              <Element ID="3830" />
              <Attr ID="67"
            </Rel>
          </Element>
          <Element ID="0" Type="87">
            <Attr ID="221">Value</Attr>
            <Attr ID="225">569</Attr>
            <Attr ID="234">
              <Element ID="0" Type="24">
                <Attr ID="7">Value10</Attr>
                <Attr ID="8">Value11</Attr>
                <Attr ID="9">Value12</Attr>
              </Element>
            </Attr>
          </Element>
          <Element ID="1235" Action="delete" />
        </Elements>

    Some Attributes are straight value types, such as AttrID 221. But AttrID 234 is a special "multi-value" type that can have a list of elements underneath it, and each one can have one or more values. Types only need to be presented when a new item is created, since the ElementID fully implies the type if it already exists. I'll probably support only passing in changed items (as detected by JavaScript). And there may be an Action="delete" on Attr elements as well, since NULLs are treated as "unselected"--sometimes it's very important to know if a Yes/No question has intentionally been answered No, or if no one's bothered to say Yes yet.

    There is also a different kind of data, a Relationship. At this time, those are updated through individual AJAX calls as things are edited in the UI, but I'd like to include those so that changes to relationships can be canceled (right now, once you change it, it's done). So those are really elements, too, but they are called Rel instead of Element. Relationships are implemented as ElementID1 and ElementID2, so the RelID 2284 in the XML above is in the database as:

        ElementID  2284
        ElementID1 1234
        ElementID2 3827

    Having multiple children in one relationship isn't currently supported, but it would be nice later.
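    As a feasibility note that is not in the original question: SQL 2000 can shred this format with OPENXML, so the design does not depend on the later XML data type. A minimal sketch for the top-level attribute values only (nested multi-value elements would need additional row patterns), assuming the document has been prepared with sp_xml_preparedocument into @hdoc as in the procedure shown later:

        -- One row per top-level <Attr>, carrying its parent <Element> ID.
        SELECT ElementID, AttrID, Value
        FROM OPENXML(@hdoc, '/Elements/Element/Attr', 1)
        WITH (
            ElementID int            '../@ID',
            AttrID    int            '@ID',
            Value     nvarchar(4000) 'text()'
        );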
    Does this strategy and the example XML make sense? Is there a more sensible way? I'm just looking for some broad critique to help save me from going down a bad path. Any aspect that you'd like to comment on would be helpful. The web language happens to be Classic ASP, but that could change to ASP.NET at some point. A persistence engine like LINQ or NHibernate is probably not acceptable right now--I just want to get this already-working application enhanced without a huge amount of development time. I'll choose the answer that shows experience and has a balance of good warnings about what not to do, confirmations of what I'm planning to do, and recommendations about something else to do. I'll make it as objective as possible. P.S. I'd like to handle Unicode characters as well as very long strings (10k+).

    UPDATE: I have had this working for some time, and I used the ADO Recordset save-to-stream trick to make creating the XML really easy. The result seems to be fairly fast, though if speed ever becomes a problem I may revisit this. In the meantime, my code handles any number of elements and attributes on the page at once, including updating, deleting, and creating new items all in one go. I settled on a scheme like so for all my elements:

    Existing data elements. Example: input name e12345_a678 (element 12345, attribute 678); the input value is the value of the attribute.

    New elements. JavaScript copies a hidden template of the set of HTML elements needed for the type into the correct location on the page, increments a counter to get a new ID for this item, and prepends the number to the names of the form items:

        var newid = 0;

        function metadataAdd(reference, nameid, value) {
            var t = document.createElement('input');
            t.setAttribute('name', nameid);
            t.setAttribute('id', nameid);
            t.setAttribute('type', 'hidden');
            t.setAttribute('value', value);
            reference.appendChild(t);
        }

        function multiAdd(target, parentelementid, attrid, elementtypeid) {
            var proto = document.getElementById('a' + attrid + '_proto');
            var instance = document.createElement('p');
            target.parentNode.parentNode.insertBefore(instance, target.parentNode);
            var thisid = ++newid;
            instance.innerHTML = proto.innerHTML.replace(/{prefix}/g, 'n' + thisid + '_');
            instance.id = 'n' + thisid;
            instance.className += ' new';
            metadataAdd(instance, 'n' + thisid + '_p', parentelementid);
            metadataAdd(instance, 'n' + thisid + '_c', attrid);
            metadataAdd(instance, 'n' + thisid + '_t', elementtypeid);
            return false;
        }

    Example: template input name _a678 becomes n1_a678 (a new element, the first one on the page, attribute 678). All attributes of this new element are tagged with the same prefix of n1. The next new item will be n2, and so on. Some hidden form inputs are created:

    n1_t, value is the element type of the element to be created
    n1_p, value is the parent ID of the element (if it is a relationship)
    n1_c, value is the child ID of the element (if it is a relationship)

    Deleting elements. A hidden input is created in the form e12345_t with value set to 0. The existing controls displaying that attribute's values are disabled so they are not included in the form post. So "set type to 0" is treated as delete.

    With this scheme, every item on the page has a unique name and can be distinguished properly, and every action can be represented properly.
    When the form is posted, here's a sample of building one of the two recordsets used (Classic ASP code):

        Set Data = Server.CreateObject("ADODB.Recordset")
        Data.Fields.Append "ElementID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "AttrID", adInteger, 4, adFldKeyColumn
        Data.Fields.Append "Value", adLongVarWChar, 2147483647, adFldIsNullable Or adFldMayBeNull
        Data.CursorLocation = adUseClient
        Data.CursorType = adOpenDynamic
        Data.Open

    This is the recordset for values; the other is for the elements themselves. I step through the posted form, and for the element recordset I use a Scripting.Dictionary populated with instances of a custom class that has the properties I need, so that I can add the values piecemeal, since they don't always come in order. New elements are added with negative IDs to distinguish them from regular elements (rather than requiring a separate column to indicate whether a row is new or addresses an existing element). I use a regular expression to tear apart the form keys:

        "^(e|n)([0-9]{1,10})_(a|p|t|c)([0-9]{0,10})$"

    Then, adding an attribute looks like this:

        Data.AddNew
        ElementID.Value = DataID
        AttrID.Value = Integerize(Matches(0).SubMatches(3))
        AttrValue.Value = Request.Form(Key)
        Data.Update

    ElementID, AttrID, and AttrValue are references to the fields of the recordset. This method is hugely faster than using Data.Fields("ElementID").Value each time. I loop through the Dictionary of element updates and ignore any that don't have all the proper information, adding the good ones to the recordset. Then I call my data-updating stored procedure like so:

        Set Cmd = Server.CreateObject("ADODB.Command")
        With Cmd
            Set .ActiveConnection = MyDBConn
            .CommandType = adCmdStoredProc
            .CommandText = "DataPost"
            .Prepared = False
            .Parameters.Append .CreateParameter("@ElementMetadata", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Element))
            .Parameters.Append .CreateParameter("@ElementData", adLongVarWChar, adParamInput, 2147483647, XMLFromRecordset(Data))
        End With
        Result.Open Cmd ' previously created recordset object with options set

    Here's the function that does the XML conversion:

        Private Function XMLFromRecordset(Recordset)
            Dim Stream
            Set Stream = Server.CreateObject("ADODB.Stream")
            Stream.Open
            Recordset.Save Stream, adPersistXML
            Stream.Position = 0
            XMLFromRecordset = Stream.ReadText
        End Function

    Just in case the web page needs to know, the SP returns a recordset of any new elements, showing their page value and their created value (so I can see that n1 is now e12346, for example). Here are some key snippets from the stored procedure.
    Note this is SQL 2000 for now, though I'll be able to switch to 2005 soon:

        CREATE PROCEDURE [dbo].[DataPost]
            @ElementMetaData ntext,
            @ElementData ntext
        AS
        DECLARE @hdoc int

        --- snip ---

        EXEC sp_xml_preparedocument @hdoc OUTPUT, @ElementMetaData,
            '<xml xmlns:s="uuid:BDC6E3F0-6DA3-11d1-A2A3-00AA00C14882" xmlns:dt="uuid:C2F41010-65B3-11d1-A29F-00AA00C14882" xmlns:rs="urn:schemas-microsoft-com:rowset" xmlns:z="#RowsetSchema" />'

        INSERT #ElementMetadata (ElementID, ElementTypeID, ElementID1, ElementID2)
        SELECT *
        FROM OPENXML(@hdoc, '/xml/rs:data/rs:insert/z:row', 0)
        WITH (
            ElementID int,
            ElementTypeID int,
            ElementID1 int,
            ElementID2 int
        )
        ORDER BY ElementID -- orders negative items (new elements) first so they begin counting at 1 for later ID calculation

        EXEC sp_xml_removedocument @hdoc

        --- snip ---

        UPDATE E
        SET E.ElementTypeID = M.ElementTypeID
        FROM Element E
        INNER JOIN #ElementMetadata M ON E.ElementID = M.ElementID
        WHERE E.ElementID >= 1 AND M.ElementTypeID >= 1

    The following query does the correlation of the negative new element IDs to the newly inserted ones:

        UPDATE #ElementMetadata -- Correlate the new ElementIDs with the input rows
        SET NewElementID = Scope_Identity() - @@RowCount + DataID
        WHERE ElementID < 0

    Other set-based queries do all the other work of validating that the attributes are allowed and are the correct data type, and of inserting, updating, and deleting elements and attributes. I hope this brief run-down is useful to others some day! Converting ADO Recordsets to an XML stream was a huge winner for me, as it saved all sorts of time and had a namespace and schema already defined that made the results come out correctly. Using a flatter XML format with two inputs was also much easier than sticking to some ideal about having everything in a single XML stream.

    Read the article

  • Slides and Links from SQL Azure session at BizSpark Azure Day in London

    - by Eric Nelson
    A big thanks to all who attended my two sessions on SQL Azure yesterday (29th March 2010). As promised, my slides and links from the session. Slides: SQL Azure Overview for Bizspark day.

    Related links:
    UK Azure Online Community – join today.
    UK Windows Azure Site
    Start working with Windows Azure
    SQL Azure maximum database size rises from 10GB to 50GB in June
    TCO and ROI calculator for Windows Azure
    SQL Azure Migration Wizard

    Read the article

  • Entity Framework 4.0: Optimal and horrible SQL

    - by DigiMortal
    Lately I gave an Entity Framework 4.0 session where I introduced the new features of Entity Framework. During the session I explored with the audience how Entity Framework 4.0 can generate optimized SQL. After the session I also showed one horrible example of how awful the SQL generated by Entity Framework can be. In this posting I will cover both examples.

    Optimal SQL. Before going to code, take a look at the model: there is a class called Event, and I will use this class in my query. Here is the LINQ to Entities query that uses a small anonymous type:

        var query = from e in _context.Events
                    select new { Id = e.Id, Title = e.Title };

        Debug.WriteLine(((ObjectQuery)query).ToTraceString());

    Running this code gives us the following SQL:

        SELECT
            [Extent1].[event_id] AS [event_id],
            [Extent1].[title] AS [title]
        FROM [dbo].[events] AS [Extent1]

    This is really small - no additional fields in the SELECT clause. Nice, isn't it?

    Horrible SQL. Ayende Rahien's blog shows us the darker side of Entity Framework 4.0 queries. You can find a comparison between NHibernate, LINQ to SQL, and LINQ to Entities in the posting "What happens behind the scenes: NHibernate, Linq to SQL, Entity Framework scenario analysis". In this posting I will show you the resulting query and let you think about how much better it could be done. Well, it is not something we want to see running on our servers. I hope that the EF team improves the generated SQL to an acceptable level before Visual Studio 2010 is released. There is also a moral to this example: you should always check the queries that an O/R-mapper generates. Behind the curtains it may silently generate queries that perform badly, and in that case you need to optimize your data querying strategy.

    Conclusion. Entity Framework 4.0 is a new product with a lot of new features, and it is clear that not everything is 100% super in its first release. But it is still a great step forward, and I hope that on 12.04.2010 we will have a new promising O/R-mapper available to use in our projects. If you want to read more about Entity Framework 4.0 and Visual Studio 2010, then please feel free to follow this link to the list of my Visual Studio 2010 and .NET Framework 4.0 postings.

    Read the article

  • SQL Azure news at TechEd 2010

    - by guybarrette
    More Azure news from TechEd US 2010. This time, it's from the SQL Azure team:

    50GB databases available on June 28th
    Support for Spatial Data
    Data Sync Service for SQL Azure
    Microsoft SQL Server Web Manager
    Access 2010 Support for SQL Azure

    Read about it here.

    Read the article

  • APress Deal of the Day 2/June/2014 - Pro SQL Server 2012 Integration Services

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2014/06/02/apress-deal-of-the-day-2june2014---pro-sql-server.aspx Today's $10 Deal of the Day from APress at http://www.apress.com/9781430236924 is Pro SQL Server 2012 Integration Services. "Pro SQL Server 2012 Integration Services is your key to building powerful extract, transform, load (ETL) solutions using SQL Server 2012 Integration Services (SSIS)."

    Read the article

  • Introducing Microsoft SQL Server 2008 R2

    - by CatherineRussell
    I was thrilled to find a free ebook, Introducing Microsoft SQL Server 2008 R2, by Ross Mistry and Stacia Misner. The purpose of Introducing Microsoft SQL Server 2008 R2 is to point out both the new and the improved in the latest version of SQL Server. Great info, and they are sharing it for free. Feel free to follow the authors and ask questions on Twitter: @RossMistry @StaciaMisner. Here is the link to the book: http://blogs.msdn.com/b/microsoft_press/archive/2010/04/14/free-ebook-introducing-microsoft-sql-server-2008-r2.aspx

    Read the article

  • APress Deal of the day 29/Jun/2013 - Pro SQL Database for Windows Azure

    - by TATWORTH
    Originally posted on: http://geekswithblogs.net/TATWORTH/archive/2013/06/29/apress-deal-of-the-day-29jun2013---pro-sql-database.aspx Today's $10 Deal of the Day from APress at http://www.apress.com/9781430243953 is Pro SQL Database for Windows Azure. "Pro SQL Database for Windows Azure, 2nd Edition introduces you to Microsoft's cloud-based delivery of its enterprise-caliber, SQL Server database management system—showing you how to program and administer it in a variety of cloud computing scenarios."

    Read the article

  • Accessing SQL Server data from iOS apps

    - by RobertChipperfield
    Almost all mobile apps need access to external data to be valuable. With a huge amount of existing business data residing in Microsoft SQL Server databases, and an ever-increasing drive to make more and more of it available to mobile users, how do you marry the rather separate worlds of Microsoft's SQL Server and Apple's iOS devices?

    The classic answer: write a web service layer. Look at any of the questions on this topic asked in Internet discussion forums, and you'll inevitably see the answer, "just write a web service and use that!". But what does this process gain? For a well-designed database with a solid security model and business logic in the database, writing a custom web service on top of it just to access some of the data from a different platform seems inefficient and unnecessary. Desktop applications interact with the SQL Server directly - why should mobile apps be any different?

    The better answer: the iSql SDK. Working along the lines of "if you do something more than once, make it shared", we set about coming up with a better solution for the general case. And so the iSql SDK was born: sitting between SQL Server and your iOS apps, it provides the simple API you're used to if you've been developing desktop apps using the Microsoft SQL Native Client. It turns out a web service remained a sensible idea: HTTP is much better suited to the Big Bad Internet than SQL Server's native TDS protocol, removing the need for complex firewall configuration and the like. However, rather than writing a web service for every app that needs data access, we made the web service generic, serving only as a proxy between the SQL Server and a client library integrated into the iPhone or iPad app. This client library handles all the network communication and provides a clean API.

    OSQL in 25 lines of code. As an example of how to use the API, I put together a very simple app that lets the user enter one or more SQL statements and displays the results in a rather primitively formatted text field. The total amount of Objective-C code responsible for doing the work? About 25 lines. You can see this in action in the demo video.

    Beta out now - your chance to give us your suggestions! We've released the iSql SDK as a beta on the MobileFoo website: you're welcome to download a copy, have a play in your own apps, and let us know what we've missed using the Feedback button on the site. Software development should be fun and rewarding: no one wants to spend their time writing boilerplate code over and over again, so stop writing the same web service code, and start doing exciting things in the new world of mobile data!

    Read the article

  • Read SQL Server Reporting Services Overview

    - by Editor
    Read an excellent, 14-page general overview of Microsoft SQL Server 2008 Reporting Services entitled "White Paper: Reporting Services in SQL Server 2008". Download the White Paper (360 KB Microsoft Word file). Microsoft SQL Server 2008 Reporting Services provides a complete server-based platform that is designed to support a wide variety [...]

    Read the article

  • Le "In-Memory" débarque dans SQL Server, Microsoft muscle Excel pour la BI et annonce le SP1 de SQL Server 2012 au Pass Summit 2012

    PASS Summit 2012: Microsoft gives SQL Server an in-memory database and announces the availability of SQL Server 2012 SP1 and SQL Server 2012 PDW. Microsoft used its PASS Summit 2012 conference, dedicated to SQL Server, to unveil its vision of a modern data management platform. Through several announcements, the company renewed its commitment to helping its customers get more out of their data, whether structured or unstructured, wherever it resides and whatever its size. The headline announcement opening PASS Summit 2012 was the presentation of a new in-memory transactional database technology

    Read the article

  • Configure SQL Express 2005 for remote access

    Please follow the steps below (shown as figures in the original post) to configure SQL Server Express 2005 for remote access.

    Fig 1: Open SQL Server Configuration Manager.
    Fig 2: Navigate to the SQL Server 2005 Network Configuration node and click on the Protocols node.
    Fig 3: Enable the TCP/IP protocol.
    Fig 4: Enable the Named Pipes protocol.
    Fig 5: After enabling the TCP/IP and Named Pipes protocols.
    Fig 6: Finally, click on TCP/IP to configure the port number on which SQL Express 2005 listens for network requests.
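    Not part of the original walkthrough: after enabling the protocols and restarting the SQL Server (SQLEXPRESS) service, you can confirm the instance is actually reachable over TCP with a quick query run across a TCP connection:

        -- A non-null local_tcp_port confirms this session arrived over TCP/IP.
        SELECT local_net_address, local_tcp_port
        FROM sys.dm_exec_connections
        WHERE session_id = @@SPID;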

    Read the article

  • APress Deal of the Day 20/Dec/2010 - Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008

    - by TATWORTH
    Today's $10 bargain PDF from Apress is Beginning SQL Server Modeling: Model-Driven Application Development in SQL Server 2008. Get ready for model-driven application development with SQL Server Modeling! This book covers Microsoft's SQL Server Modeling (formerly known under the code name "Oslo") in detail and contains the information you need to be successful with designing and implementing workflow modeling. $49.99 | Published Jul 2010

    Read the article

  • How a frequency histogram changes the CBO's cardinality estimate

    - by Liu Maclean
    A question from the ITPub forum about how the CBO estimates cardinality. First, a demo without a histogram:

        SQL> create table maclean1 as select * from dba_objects;
        Table created.

        SQL> update maclean1 set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean1 on maclean1(status);
        Index created.

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN1',cascade=>true);
        PL/SQL procedure successfully completed.

        SQL> explain plan for select * from maclean1 where status='INVALID';
        Explained.

        SQL> set linesize 140 pagesize 1400
        SQL> select * from table(dbms_xplan.display());

        PLAN_TABLE_OUTPUT
        ---------------------------------------------------------------------------
        Plan hash value: 987568083
        ------------------------------------------------------------------------------
        | Id | Operation         | Name     | Rows  | Bytes | Cost (%CPU)| Time     |
        ------------------------------------------------------------------------------
        |  0 | SELECT STATEMENT  |          | 11320 | 1028K |    85   (0)| 00:00:02 |
        |* 1 |  TABLE ACCESS FULL| MACLEAN1 | 11320 | 1028K |    85   (0)| 00:00:02 |
        ------------------------------------------------------------------------------
        Predicate Information (identified by operation id):
        ---------------------------------------------------
           1 - filter("STATUS"='INVALID')
        13 rows selected.

    The 10053 trace for this statement:

        Access path analysis for MACLEAN1
        ***************************************
        SINGLE TABLE ACCESS PATH
          Single Table Cardinality Estimation for MACLEAN1[MACLEAN1]
          Column (#10): STATUS(
            AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.500000
          Table: MACLEAN1  Alias: MACLEAN1
            Card: Original: 22639.000000  Rounded: 11320  Computed: 11319.50  Non Adjusted: 11319.50
          Access Path: TableScan
            Cost:  85.33  Resp: 85.33  Degree: 0
              Cost_io: 85.00  Cost_cpu: 11935345
              Resp_io: 85.00  Resp_cpu: 11935345
          Access Path: index (AllEqRange)
            Index: IND_MACLEAN1
            resc_io: 185.00  resc_cpu: 8449916
            ix_sel: 0.500000  ix_sel_with_filters: 0.500000
            Cost: 185.24  Resp: 185.24  Degree: 1
          Best:: AccessPath: TableScan
                 Cost: 85.33  Degree: 1  Resp: 85.33  Card: 11319.50  Bytes: 0

    In the trace, Density = 0.500000, i.e. 1/NDV. With no histogram on STATUS, the optimizer assumes the two distinct values are evenly distributed, so it estimates that "STATUS"='INVALID' returns about half the table (Card: 11320) even though only 2 rows actually match, and it therefore prefers a full table scan to the index IND_MACLEAN1. The skew in the column is invisible to the CBO without a histogram.

    Now the same demo with a frequency histogram. First the environment:

        [oracle@vrh4 ~]$ sqlplus / as sysdba
        SQL*Plus: Release 11.2.0.2.0 Production on Mon Oct 17 19:15:45 2011
        Copyright (c) 1982, 2010, Oracle. All rights reserved.
        Connected to:
        Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
        With the Partitioning, OLAP, Data Mining and Real Application Testing options

        SQL> select * from v$version;
        BANNER
        --------------------------------------------------------------------------------
        Oracle Database 11g Enterprise Edition Release 11.2.0.2.0 - 64bit Production
        PL/SQL Release 11.2.0.2.0 - Production
        CORE 11.2.0.2.0 Production
        TNS for Linux: Version 11.2.0.2.0 - Production
        NLSRTL Version 11.2.0.2.0 - Production

        SQL> show parameter optimizer_fea
        NAME                                 TYPE        VALUE
        ------------------------------------ ----------- ------------------------------
        optimizer_features_enable            string      11.2.0.2

        SQL> select * from global_name;
        GLOBAL_NAME
        --------------------------------------------------------------------------------
        www.oracledatabase12g.com & www.askmaclean.com

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean as select * from dba_objects;
        Table created.

        SQL> update maclean set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean on maclean(status);
        Index created.

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN',cascade=>true, method_opt=>'FOR ALL COLUMNS SIZE 2');
        PL/SQL procedure successfully completed.

    This gathers a 2-bucket frequency histogram on each column, including STATUS. The script below, by Guy Harrison of Quest, reports the data distribution a FREQUENCY histogram has recorded:

        rem
        rem Generate a histogram of data distribution in a column as recorded
        rem in dba_tab_histograms
        rem
        rem Guy Harrison Jan 2010 : www.guyharrison.net
        rem
        rem hexstr function is from http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:707586567563
        rem
        set pagesize 10000
        set lines 120
        set verify off
        col char_value format a10 heading "Endpoint|value"
        col bucket_count format 99,999,999 heading "bucket|count"
        col pct format 999.99 heading "Pct"
        col pct_of_max format a62 heading "Pct of|Max value"
        rem col endpoint_value format 9999999999999 heading "endpoint|value"

        CREATE OR REPLACE FUNCTION hexstr (p_number IN NUMBER)
            RETURN VARCHAR2
        AS
            l_str    LONG := TO_CHAR (p_number, 'fm' || RPAD ('x', 50, 'x'));
            l_return VARCHAR2 (4000);
        BEGIN
            WHILE (l_str IS NOT NULL)
            LOOP
                l_return := l_return || CHR (TO_NUMBER (SUBSTR (l_str, 1, 2), 'xx'));
                l_str := SUBSTR (l_str, 3);
            END LOOP;
            RETURN (SUBSTR (l_return, 1, 6));
        END;
        /

        WITH hist_data AS (
            SELECT endpoint_value, endpoint_actual_value,
                   NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                   endpoint_number,
                   endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
              FROM dba_tab_histograms
              JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
             WHERE owner = '&owner'
               AND table_name = '&table'
               AND column_name = '&column'
               AND histogram = 'FREQUENCY')
        SELECT nvl(endpoint_actual_value, endpoint_value) endpoint_value,
               bucket_count,
               ROUND(bucket_count * 100 / SUM(bucket_count) OVER (), 2) PCT,
               RPAD(' ', ROUND(bucket_count * 50 / MAX(bucket_count) OVER ()), '*') pct_of_max
          FROM hist_data;

        WITH hist_data AS (
            SELECT endpoint_value, endpoint_actual_value,
                   NVL(LAG (endpoint_value) OVER (ORDER BY endpoint_value), ' ') prev_value,
                   endpoint_number,
                   endpoint_number - NVL (LAG (endpoint_number) OVER (ORDER BY endpoint_value), 0) bucket_count
              FROM dba_tab_histograms
              JOIN dba_tab_col_statistics USING (owner, table_name, column_name)
             WHERE owner = '&owner'
               AND table_name = '&table'
               AND column_name = '&column'
               AND histogram = 'FREQUENCY')
        SELECT hexstr(endpoint_value) char_value,
               bucket_count,
               ROUND(bucket_count * 100 / SUM(bucket_count) OVER (), 2) PCT,
               RPAD(' ', ROUND(bucket_count * 50 / MAX(bucket_count) OVER ()), '*') pct_of_max
          FROM hist_data
         ORDER BY endpoint_value;

    With the histogram in place, the plan for the same predicate changes: the CBO now estimates 9 rows for STATUS='INVALID' and picks the index:

        SQL> explain plan for select * from maclean where status='INVALID';
        Explained.

        SQL> select * from table(dbms_xplan.display());

        PLAN_TABLE_OUTPUT
        -------------------------------------
        Plan hash value: 3087014066
        -------------------------------------------------------------------------------------------
        | Id  | Operation                   | Name        | Rows  | Bytes | Cost (%CPU)| Time     |
        -------------------------------------------------------------------------------------------
        |   0 | SELECT STATEMENT            |             |     9 |   837 |     2   (0)| 00:00:01 |
        |   1 |  TABLE ACCESS BY INDEX ROWID| MACLEAN     |     9 |   837 |     2   (0)| 00:00:01 |
        |*  2 |   INDEX RANGE SCAN          | IND_MACLEAN |     9 |       |     1   (0)| 00:00:01 |
        -------------------------------------------------------------------------------------------
        Predicate Information (identified by operation id):
        ---------------------------------------------------
           2 - access("STATUS"='INVALID')

    The 10053 trace confirms it:

        SQL> alter system flush shared_pool;
        System altered.

        SQL> oradebug setmypid;
        Statement processed.

        SQL> oradebug event 10053 trace name context forever ,level 1;
        Statement processed.

        SQL> explain plan for select * from maclean where status='INVALID';
        Explained.

        SINGLE TABLE ACCESS PATH
          Single Table Cardinality Estimation for MACLEAN[MACLEAN]
          Column (#10):
            NewDensity:0.000199, OldDensity:0.000022 BktCnt:22640, PopBktCnt:22640, PopValCnt:2, NDV:2
          Column (#10): STATUS(
            AvgLen: 7 NDV: 2 Nulls: 0 Density: 0.000199
            Histogram: Freq  #Bkts: 2  UncompBkts: 22640  EndPtVals: 2
          Table: MACLEAN  Alias: MACLEAN
            Card: Original: 22640.000000  Rounded: 9  Computed: 9.00  Non Adjusted: 9.00
          Access Path: TableScan
            Cost:  85.30  Resp: 85.30  Degree: 0
              Cost_io: 85.00  Cost_cpu: 10804625
              Resp_io: 85.00  Resp_cpu: 10804625
          Access Path: index (AllEqRange)
            Index: IND_MACLEAN
            resc_io: 2.00  resc_cpu: 20763
            ix_sel: 0.000398  ix_sel_with_filters: 0.000398
            Cost: 2.00  Resp: 2.00  Degree: 1
          Best:: AccessPath: IndexRange
                 Index: IND_MACLEAN
                 Cost: 2.00  Degree: 1  Resp: 2.00  Card: 9.00  Bytes: 0

    Here NewDensity = bucket_count / SUM(bucket_count) / 2, so with the 2-bucket frequency histogram the CBO sees how skewed STATUS really is, computes an accurate cardinality, and chooses the index range scan.

    But who decides whether a histogram is collected when dbms_stats runs with its default method_opt (FOR ALL COLUMNS SIZE AUTO)? The default relies on col_usage$, which records how each column has been used in predicates (see the earlier post on col_usage$ and SMON). If a column has no recorded predicate usage, a default gather collects no histogram on it:

        SQL> drop table maclean;
        Table dropped.

        SQL> create table maclean as select * from dba_objects;
        Table created.

        SQL> update maclean set status='INVALID' where owner='MACLEAN';
        2 rows updated.

        SQL> commit;
        Commit complete.

        SQL> create index ind_maclean on maclean(status);
        Index created.

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
        PL/SQL procedure successfully completed.

        @histogram.sql
        Enter value for owner: SYS
        Enter value for table: MACLEAN
        Enter value for column: STATUS
        no rows selected

    No histogram was created, because col_usage$ does not yet show STATUS being used as a predicate. Now run an equality query against STATUS many times and flush the monitoring info:

        declare
        begin
          for i in 1..500 loop
            execute immediate 'alter system flush shared_pool';
            DBMS_STATS.FLUSH_DATABASE_MONITORING_INFO;
            execute immediate 'select count(*) from maclean where status=''INVALID'' ';
          end loop;
        end;
        /
        PL/SQL procedure successfully completed.

        SQL> select obj# from obj$ where name='MACLEAN';
              OBJ#
        ----------
             97215

        SQL> select * from col_usage$ where OBJ#=97215;
              OBJ#    INTCOL# EQUALITY_PREDS EQUIJOIN_PREDS NONEQUIJOIN_PREDS RANGE_PREDS LIKE_PREDS NULL_PREDS TIMESTAMP
        ---------- ---------- -------------- -------------- ----------------- ----------- ---------- ---------- ---------
             97215          1              1              0                 0           0          0          0 17-OCT-11
             97215         10            499              0                 0           0          0          0 17-OCT-11

        SQL> exec dbms_stats.gather_table_stats('SYS','MACLEAN');
        PL/SQL procedure successfully completed.

        @histogram.sql
        Enter value for owner: SYS
        Enter value for table: MACLEAN
        Enter value for column: STATUS

        Endpoint        bucket         Pct of
        value            count     Pct Max value
        ---------- ----------- ------- --------------------------------------------------
        INVALI               2     .04
        VALIC3           5,453   99.96  *************************************************

    With predicate usage now recorded in col_usage$ (EQUALITY_PREDS = 499 for column #10, STATUS), the same default gather builds the frequency histogram.

    Read the article

  • How to enable a SQL application role via Entity Framework

    - by Ehsan Farahani
    I'm developing a big government application with Entity Framework, and my first problem is enabling a SQL application role. With ADO.NET I'm using the code below:

        SqlCommand cmd = new SqlCommand("sys.sp_setapprole");
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = _sqlConn;

        SqlParameter paramAppRoleName = new SqlParameter();
        paramAppRoleName.Direction = ParameterDirection.Input;
        paramAppRoleName.ParameterName = "@rolename";
        paramAppRoleName.Value = "AppRole";
        cmd.Parameters.Add(paramAppRoleName);

        SqlParameter paramAppRolePwd = new SqlParameter();
        paramAppRolePwd.Direction = ParameterDirection.Input;
        paramAppRolePwd.ParameterName = "@password";
        paramAppRolePwd.Value = "123456";
        cmd.Parameters.Add(paramAppRolePwd);

        SqlParameter paramCreateCookie = new SqlParameter();
        paramCreateCookie.Direction = ParameterDirection.Input;
        paramCreateCookie.ParameterName = "@fCreateCookie";
        paramCreateCookie.DbType = DbType.Boolean;
        paramCreateCookie.Value = 1;
        cmd.Parameters.Add(paramCreateCookie);

        SqlParameter paramEncrypt = new SqlParameter();
        paramEncrypt.Direction = ParameterDirection.Input;
        paramEncrypt.ParameterName = "@encrypt";
        paramEncrypt.Value = "none";
        cmd.Parameters.Add(paramEncrypt);

        SqlParameter paramEnableCookie = new SqlParameter();
        paramEnableCookie.ParameterName = "@cookie";
        paramEnableCookie.DbType = DbType.Binary;
        paramEnableCookie.Direction = ParameterDirection.Output;
        paramEnableCookie.Size = 1000;
        cmd.Parameters.Add(paramEnableCookie);

        try
        {
            cmd.ExecuteNonQuery();
            SqlParameter outVal = cmd.Parameters["@cookie"];
            // Store the enabled cookie so that the approle can be disabled with the cookie.
            _appRoleEnableCookie = (byte[]) outVal.Value;
        }
        catch (Exception ex)
        {
            result = false;
            msg = "Could not execute enable approle proc." + Environment.NewLine + ex.Message;
        }

    But no matter how much I searched, I could not find a way to implement this in EF. Another question: how do I add the application role to the Entity Data Model designer? I'm using the code below to execute the procedure with parameters in EF:

        AEntities ar = new AEntities();
        DbConnection con = ar.Connection;
        con.Open();
        msg = "";
        bool result = true;
        DbCommand cmd = con.CreateCommand();
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = con;

        var d = new DbParameter[]{
            new SqlParameter{ ParameterName = "@r", Value = "AppRole", Direction = ParameterDirection.Input },
            new SqlParameter{ ParameterName = "@p", Value = "123456", Direction = ParameterDirection.Input }
        };

        string sql = "EXEC " + procName + " @rolename=@r,@password=@p";
        var s = ar.ExecuteStoreCommand(sql, d);

    When ExecuteStoreCommand runs, this line returns the error: "Application roles can only be activated at the ad hoc level."
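    A note that is not in the original question: the error text points at the cause. sp_setapprole must be sent as a direct ad hoc batch, while ExecuteStoreCommand sends parameterized text through sp_executesql, which runs it in a nested scope where application roles cannot be activated. Sent ad hoc over the same connection, the call itself is just:

        -- Works when sent as a direct batch or RPC call (as the ADO.NET
        -- CommandType.StoredProcedure code above does), but fails inside
        -- sp_executesql, which is how EF transmits parameterized commands.
        EXEC sys.sp_setapprole @rolename = 'AppRole', @password = '123456';

    So the practical fix is to keep using a plain SqlCommand with CommandType.StoredProcedure on the EntityConnection's underlying store connection, rather than ExecuteStoreCommand.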

    Read the article
