Search Results

Search found 105847 results on 4234 pages for 'sql server performance'.


  • Do I need to transfer Server license CALs to new Domain Controller during AD transition?

    - by drpcken
    I have an old Server 2003 domain controller I'm ready to decommission. I notice in Server 2003 there is a Licensing module under Administrative Tools that seems to manage and track user CALs for the domain controller. I don't see this on my newly promoted Server 2008 domain controller, nor do I see any roles to add it. Does this need to be transferred to my new Server 2008 domain controller, or will it all happen when the old server is decommissioned? I've already transferred all my Terminal Server licenses to the new server. Thank you!

    Read the article

  • SQLIO help decipher output

    - by SQL Learner
    When load testing on a SQL Server box, using the following (the test file is 25 GB): sqlio -kW -t8 -s360 -o8 -frandom -b8 -BH -LS g:\testfile.dat > result.txt sqlio -kW -t8 -s360 -o8 -frandom -b64 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b128 -BH -LS g:\testfile.dat >> result.txt sqlio -kW -t8 -s360 -o8 -frandom -b256 -BH -LS g:\testfile.dat >> result.txt Can anyone help me decipher the output? I do not understand the latency min and average. What do these numbers mean? IOs/sec: 10968.80 MBs/sec: 685.55 latency metrics: Min_Latency(ms): 1 Avg_Latency(ms): 5 Max_Latency(ms): 21
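
    A hedged side note on how those figures relate (assuming the quoted line came from the -b64 run, since that is the block size that makes the numbers line up): MBs/sec is simply IOs/sec multiplied by the block size, and the latency values are the minimum, average and maximum completion times of individual I/Os during the run. A minimal sketch of the arithmetic:

      using System;

      class SqlioMath
      {
          static void Main()
          {
              double iosPerSec = 10968.80;                 // from the result line
              double blockSizeKb = 64;                     // -b64 run (assumption)
              double mbPerSec = iosPerSec * blockSizeKb / 1024;
              Console.WriteLine("{0:F2} MB/s", mbPerSec);  // prints 685.55, matching MBs/sec
          }
      }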

    Read the article

  • Do fluent interfaces significantly impact runtime performance of a .NET application?

    - by stakx
    I'm currently occupying myself with implementing a fluent interface for an existing technology, which would allow code similar to the following snippet: using (var directory = Open.Directory(@"path\to\some\directory")) { using (var file = Open.File("foobar.html").In(directory)) { // ... } } In order to implement such constructs, classes are needed that accumulate arguments and pass them on to other objects. For example, to implement the Open.File(...).In(...) construct, you would need two classes: // handles 'Open.XXX': public static class OpenPhrase { // handles 'Open.File(XXX)': public static OpenFilePhrase File(string filename) { return new OpenFilePhrase(filename); } // handles 'Open.Directory(XXX)': public static DirectoryObject Directory(string path) { // ... } } // handles 'Open.File(XXX).XXX': public class OpenFilePhrase { internal OpenFilePhrase(string filename) { _filename = filename } // handles 'Open.File(XXX).In(XXX): public FileObject In(DirectoryObject directory) { // ... } private readonly string _filename; } That is, the more constituent parts statements such as the initial examples have, the more objects need to be created for passing on arguments to subsequent objects in the chain until the actual statement can finally execute. Question: I am interested in some opinions: Does a fluent interface which is implemented using the above technique significantly impact the runtime performance of an application that uses it? With runtime performance, I refer to both speed and memory usage aspects. Bear in mind that a potentially large number of temporary, argument-saving objects would have to be created for only very brief timespans, which I assume may put a certain pressure on the garbage collector. If you think there is significant performance impact, do you know of a better way to implement fluent interfaces?
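
    One hedged way to take GC pressure out of the picture, if measurement shows it matters, is to make the short-lived phrase types value types so that the argument-accumulating steps never touch the heap. This is only an illustrative sketch; DirectoryObject and FileObject below are stand-ins for the question's types, not an existing API.

      // Hypothetical stand-ins for the question's types.
      public class DirectoryObject { }
      public class FileObject
      {
          public FileObject(DirectoryObject directory, string filename) { /* ... */ }
      }

      // Value-type phrase: accumulating the filename allocates nothing on the heap;
      // only the final FileObject produced by In(...) is a real allocation.
      public struct OpenFilePhrase
      {
          private readonly string _filename;
          public OpenFilePhrase(string filename) { _filename = filename; }

          public FileObject In(DirectoryObject directory)
          {
              return new FileObject(directory, _filename);
          }
      }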

    Read the article

  • How do I improve my performance with this singly linked list struct within my program?

    - by Jesus
    Hey guys, I have a program that does operations of sets of strings. We have to implement functions such as addition and subtraction of two sets of strings. We are supposed to get it down to the point where performance is O(N+M), where N and M are sets of strings. Right now, I believe my performance is at O(N*M), since for each element of N, I go through every element of M. I'm particularly focused on getting the subtraction to the proper performance, as if I can get that down to proper performance, I believe I can carry that knowledge over to the rest of things I have to implement. The '-' operator is supposed to work like this, for example. Declare set1 to be an empty set. Declare set2 to be a set with { a b c } elements Declare set3 to be a set with { b c d } elements set1 = set2 - set3 And now set1 is supposed to equal { a }. So basically, just remove from set2 any element that is also in set3. For the addition implementation (overloaded '+' operator), I also do the sorting of the strings (since we have to). All the functions work right now btw. So I was wondering if anyone could a) Confirm that currently I'm doing O(N*M) performance b) Give me some ideas/implementations on how to improve the performance to O(N+M) Note: I cannot add any member variables or functions to the class strSet or to the node structure. The implementation of the main program isn't very important, but I will post the code for my class definition and the implementation of the member functions: strSet2.h (Implementation of my class and struct) // Class to implement sets of strings // Implements operators for union, intersection, subtraction, // etc. for sets of strings // V1.1 15 Feb 2011 Added guard (#ifndef), deleted using namespace RCH #ifndef _STRSET_ #define _STRSET_ #include <iostream> #include <vector> #include <string> // Deleted: using namespace std; 15 Feb 2011 RCH struct node { std::string s1; node * next; }; class strSet { private: node * first; public: strSet (); // Create empty set strSet (std::string s); // Create singleton set strSet (const strSet &copy); // Copy constructor ~strSet (); // Destructor int SIZE() const; bool isMember (std::string s) const; strSet operator + (const strSet& rtSide); // Union strSet operator - (const strSet& rtSide); // Set subtraction strSet& operator = (const strSet& rtSide); // Assignment }; // End of strSet class #endif // _STRSET_ strSet2.cpp (implementation of member functions) #include <iostream> #include <vector> #include <string> #include "strset2.h" using namespace std; strSet::strSet() { first = NULL; } strSet::strSet(string s) { node *temp; temp = new node; temp->s1 = s; temp->next = NULL; first = temp; } strSet::strSet(const strSet& copy) { if(copy.first == NULL) { first = NULL; } else { node *n = copy.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } strSet::~strSet() { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } } int strSet::SIZE() const { int size = 0; node *temp = first; while(temp!=NULL) { size++; temp=temp->next; } return size; } bool strSet::isMember(string s) const { node *temp = first; while(temp != NULL) { if(temp->s1 == s) { return true; } temp = temp->next; } return false; } strSet strSet::operator + (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) {
string newEle = temp->s1; if(!isMember(newEle)) { if(newSet.first==NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first = newNode; } else if(newSet.SIZE() == 1) { if(newEle < newSet.first->s1) { node *tempNext = newSet.first; node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = tempNext; newSet.first = newNode; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; newSet.first->next = newNode; } } else { node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if(newEle < curr->s1) { if(prev == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; newSet.first = newNode; break; } else { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = curr; prev->next = newNode; break; } } if(curr->next == NULL) { node *newNode; newNode = new node; newNode->s1 = newEle; newNode->next = NULL; curr->next = newNode; break; } prev = curr; curr = curr->next; } } } temp = temp->next; } return newSet; } strSet strSet::operator - (const strSet& rtSide) { strSet newSet; newSet = *this; node *temp = rtSide.first; while(temp != NULL) { string element = temp->s1; node *prev = NULL; node *curr = newSet.first; while(curr != NULL) { if( element < curr->s1 ) break; if( curr->s1 == element ) { if( prev == NULL) { node *duplicate = curr; newSet.first = newSet.first->next; delete duplicate; break; } else { node *duplicate = curr; prev->next = curr->next; delete duplicate; break; } } prev = curr; curr = curr->next; } temp = temp->next; } return newSet; } strSet& strSet::operator = (const strSet& rtSide) { if(this != &rtSide) { if(first != NULL) { while(first->next != NULL) { node *nextNode = first->next; first->next = nextNode->next; delete nextNode; } } if(rtSide.first == NULL) { first = NULL; } else { node *n = rtSide.first; node *prev = NULL; while (n) { node *newNode = new node; newNode->s1 = n->s1; newNode->next = NULL; if (prev) { prev->next = newNode; } else { first = newNode; } prev = newNode; n = n->next; } } } return *this; }
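
    For the O(N+M) goal, a hedged sketch of the usual idea, written here as free-standing C# rather than against the strSet class (whose members can't be changed): if both lists are kept sorted, subtraction becomes a single simultaneous walk that advances whichever side is smaller, so neither list is rescanned per element of the other.

      using System.Collections.Generic;

      static class SortedSetOps
      {
          // Assumes both inputs are sorted ascending; returns left minus right in O(N + M).
          public static List<string> Subtract(LinkedList<string> left, LinkedList<string> right)
          {
              var result = new List<string>();
              var l = left.First;
              var r = right.First;
              while (l != null)
              {
                  // Skip right-side values smaller than the current left value.
                  while (r != null && string.CompareOrdinal(r.Value, l.Value) < 0)
                      r = r.Next;

                  // Keep the left value only if the right list does not contain it.
                  if (r == null || string.CompareOrdinal(r.Value, l.Value) != 0)
                      result.Add(l.Value);

                  l = l.Next;
              }
              return result;
          }
      }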

    Read the article

  • The best way to predict performance without actually porting the code?

    - by ardiyu07
    I believe there are people who have had the same experience as me, where they must give an (estimated) performance report for porting a program from sequential to parallel on some designated multicore hardware, with very little time given. For instance, if a 10K LoC sequential program executes on an Intel i7-3770K (not vectorized) in 100 ms, how long would it take to run if the code were parallelized for a Tesla C2075 with NVIDIA CUDA, given that all kinds of parallelization optimization techniques were applied? (But you're only given 2-4 days to report the performance, and assume you don't know the algorithm at all. Or perhaps it'd be safer to just assume that it's an impossible situation to finish the job.) Therefore, I'm wondering what would most likely be the fastest way to give such a performance report. Is it safe to calculate solely from the hardware's capability, such as peak GFLOPs and memory bandwidth? Is there a mathematical way to calculate it? If there is, please prove your method with the corresponding problem description and algorithm, and also the target hardware's specifications. Or perhaps there already exists a tool to (roughly) estimate the cost of porting code? (Please don't answer: 'kill yourself is the fastest way.')
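
    For what such a back-of-the-envelope estimate is worth, one hedged approach is a roofline-style bound: divide the kernel's arithmetic by the target's peak throughput and its memory traffic by the target's bandwidth, and take the larger of the two as an optimistic lower bound on runtime. The workload numbers below are placeholders that would have to come from profiling the sequential code; the peak figures are only rough, datasheet-class values.

      using System;

      class RooflineEstimate
      {
          static void Main()
          {
              // Placeholder workload characteristics (must be measured, not guessed).
              double flops = 2e9;            // floating-point operations per run
              double bytesMoved = 4e9;       // bytes read and written per run

              // Rough peak figures for a Tesla C2075-class device (check the datasheet).
              double peakFlops = 1.0e12;     // ~1 TFLOP/s single precision
              double peakBandwidth = 144e9;  // ~144 GB/s memory bandwidth

              double computeBound = flops / peakFlops;
              double memoryBound = bytesMoved / peakBandwidth;

              // The slower of the two limits gives an optimistic lower bound on runtime.
              Console.WriteLine("Estimated lower bound: {0:F1} ms",
                  Math.Max(computeBound, memoryBound) * 1000);
          }
      }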

    Read the article

  • Visual Studio 2010 on Macbook Air

    - by Kyle B.
    Does anyone here run Visual Studio 2010 (or VS12 RC) on a Macbook Air? I have the current model with 4 GB RAM, 13" screen, and a 256 GB SSD drive. Before I go through the effort of configuring this, I'd like to know if anyone from the community has done this and: Was the performance acceptable? If it is, I plan to get a larger cinema display monitor as a second display and do all my coding on this machine, ditching my desktop. Did you use Boot Camp, Parallels, or VMware? I feel that to maximize performance, Boot Camp would be necessary to make the best use of the memory, but am not sure if this is completely necessary. I'd prefer to use a VM, but wasn't sure if this was practical and would value your input before buying a license. Did you also run anything else on the Windows installation, such as SQL Server Express, IIS Express, etc.? Did performance lag after a certain point? Note: I would have asked this on superuser.com, but felt this applied more directly to the programming community.

    Read the article

  • Information about how much time is spent in a function, based on the input of this function

    - by olchauvin
    Is there a (quantitative) tool to measure the performance of functions based on their input? So far, the tools I have used to measure the performance of my code (like JetBrains dotTrace for .NET) tell me how much time is spent in functions, but I'd like to have more information about the parameters passed to the function, in order to know which parameters impact performance the most. Let's say that I have a function like this: int myFunction(int myParam1, int myParam2) { // Do and return something based on the value of myParam1 and myParam2. // The code is likely to use if, for, while, switch, etc.... } I would like a tool that would tell me how much time is spent in myFunction based on the values of myParam1 and myParam2. For example, the tool would give me a result looking like this: For "myFunction" : value | value | Number of | Average myParam1 | myParam2 | call | time ---------|----------|-----------|-------- 1 | 5 | 500 | 301 ms 2 | 5 | 250 | 1253 ms 3 | 7 | 1268 | 538 ms ... That would mean that myFunction has been called 500 times with myParam1=1 and myParam2=5, and that with those parameters it took on average 301 ms to return a value. The idea behind this is to do some statistical optimization by organizing my code so that the blocks of code that are most likely to be executed are tested before the ones that are less likely to be executed. To put it bluntly, if I know which values are used the most, I can reorganize the if/while/for structure of the function (and the whole program) to optimize it. I'd like to find such tools for C++, Java or .NET. Note: I am not looking for technical tips to optimize the code (like passing parameters as const, inlining functions, initializing the capacity of vectors and the like).
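
    If no off-the-shelf profiler buckets by argument values, a hedged fallback is a hand-rolled wrapper that collects exactly that table: time each call with a Stopwatch and aggregate count and total time in a dictionary keyed by the parameter pair. MyFunction below is just a placeholder for the real function.

      using System;
      using System.Collections.Generic;
      using System.Diagnostics;

      static class ParamProfiler
      {
          // Call count and total elapsed ms, keyed by the (myParam1, myParam2) pair.
          static readonly Dictionary<Tuple<int, int>, long[]> Stats =
              new Dictionary<Tuple<int, int>, long[]>();

          public static int MyFunctionTimed(int myParam1, int myParam2)
          {
              var sw = Stopwatch.StartNew();
              int result = MyFunction(myParam1, myParam2);   // the real work
              sw.Stop();

              var key = Tuple.Create(myParam1, myParam2);
              long[] s;
              if (!Stats.TryGetValue(key, out s)) Stats[key] = s = new long[2];
              s[0]++;                              // number of calls
              s[1] += sw.ElapsedMilliseconds;      // total time; average = s[1] / s[0]
              return result;
          }

          static int MyFunction(int myParam1, int myParam2) { return myParam1 + myParam2; } // placeholder body
      }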

    Read the article

  • Helping to Reduce Page Compression Failures Rate

    - by Vasil Dimov
    When InnoDB compresses a page it needs the result to fit into its predetermined compressed page size (specified with KEY_BLOCK_SIZE). When the result does not fit we call that a compression failure. In this case InnoDB needs to split up the page and try to compress again. That said, compression failures are bad for performance and should be minimized.Whether the result of the compression will fit largely depends on the data being compressed and some tables and/or indexes may contain more compressible data than others. And so it would be nice if the compression failure rate, along with other compression stats, could be monitored on a per table or even on a per index basis, wouldn't it?This is where the new INFORMATION_SCHEMA table in MySQL 5.6 kicks in. INFORMATION_SCHEMA.INNODB_CMP_PER_INDEX provides exactly this helpful information. It contains the following fields: +-----------------+--------------+------+ | Field | Type | Null | +-----------------+--------------+------+ | database_name | varchar(192) | NO | | table_name | varchar(192) | NO | | index_name | varchar(192) | NO | | compress_ops | int(11) | NO | | compress_ops_ok | int(11) | NO | | compress_time | int(11) | NO | | uncompress_ops | int(11) | NO | | uncompress_time | int(11) | NO | +-----------------+--------------+------+ similarly to INFORMATION_SCHEMA.INNODB_CMP, but this time the data is grouped by "database_name,table_name,index_name" instead of by "page_size".So a query like SELECT database_name, table_name, index_name, compress_ops - compress_ops_ok AS failures FROM information_schema.innodb_cmp_per_index ORDER BY failures DESC; would reveal the most problematic tables and indexes that have the highest compression failure rate.From there on the way to improving performance would be to try to increase the compressed page size or change the structure of the table/indexes or the data being stored and see if it will have a positive impact on performance.
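
    As a small illustration of acting on that query from application code (assuming MySQL Connector/NET is available; the connection string and the t1 table in the closing comment are placeholders), the failure counts can be pulled per index and, for a problematic table, the compressed page size raised afterwards:

      using System;
      using MySql.Data.MySqlClient;   // Connector/NET, assumed to be available

      class CompressionFailureCheck
      {
          static void Main()
          {
              const string connStr = "server=localhost;database=test;uid=user;pwd=secret"; // placeholder
              const string sql =
                  @"SELECT database_name, table_name, index_name,
                           compress_ops - compress_ops_ok AS failures
                    FROM information_schema.innodb_cmp_per_index
                    ORDER BY failures DESC";

              using (var conn = new MySqlConnection(connStr))
              using (var cmd = new MySqlCommand(sql, conn))
              {
                  conn.Open();
                  using (var rdr = cmd.ExecuteReader())
                      while (rdr.Read())
                          Console.WriteLine("{0}.{1} ({2}): {3} failures",
                              rdr[0], rdr[1], rdr[2], rdr[3]);
              }
              // A possible follow-up for a problematic table (hypothetical name):
              //   ALTER TABLE t1 ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;
          }
      }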

    Read the article

  • how to enable SQL Application Role via Entity Framework

    - by Ehsan Farahani
    I'm now developing a big government application with Entity Framework. At first I had a problem enabling a SQL application role. With ADO.NET I'm using the code below: SqlCommand cmd = new SqlCommand("sys.sp_setapprole"); cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = _sqlConn; SqlParameter paramAppRoleName = new SqlParameter(); paramAppRoleName.Direction = ParameterDirection.Input; paramAppRoleName.ParameterName = "@rolename"; paramAppRoleName.Value = "AppRole"; cmd.Parameters.Add(paramAppRoleName); SqlParameter paramAppRolePwd = new SqlParameter(); paramAppRolePwd.Direction = ParameterDirection.Input; paramAppRolePwd.ParameterName = "@password"; paramAppRolePwd.Value = "123456"; cmd.Parameters.Add(paramAppRolePwd); SqlParameter paramCreateCookie = new SqlParameter(); paramCreateCookie.Direction = ParameterDirection.Input; paramCreateCookie.ParameterName = "@fCreateCookie"; paramCreateCookie.DbType = DbType.Boolean; paramCreateCookie.Value = 1; cmd.Parameters.Add(paramCreateCookie); SqlParameter paramEncrypt = new SqlParameter(); paramEncrypt.Direction = ParameterDirection.Input; paramEncrypt.ParameterName = "@encrypt"; paramEncrypt.Value = "none"; cmd.Parameters.Add(paramEncrypt); SqlParameter paramEnableCookie = new SqlParameter(); paramEnableCookie.ParameterName = "@cookie"; paramEnableCookie.DbType = DbType.Binary; paramEnableCookie.Direction = ParameterDirection.Output; paramEnableCookie.Size = 1000; cmd.Parameters.Add(paramEnableCookie); try { cmd.ExecuteNonQuery(); SqlParameter outVal = cmd.Parameters["@cookie"]; // Store the enabled cookie so that approle can be disabled with the cookie. _appRoleEnableCookie = (byte[]) outVal.Value; } catch (Exception ex) { result = false; msg = "Could not execute enable approle proc." + Environment.NewLine + ex.Message; } But no matter how much I searched, I could not find a way to implement this with EF. Another question is: how do I add an application role in the entity data model designer? I'm using the code below to execute the procedure with parameters in EF: AEntities ar = new AEntities(); DbConnection con = ar.Connection; con.Open(); msg = ""; bool result = true; DbCommand cmd = con.CreateCommand(); cmd.CommandType = CommandType.StoredProcedure; cmd.Connection = con; var d = new DbParameter[]{ new SqlParameter{ ParameterName="@r", Value ="AppRole",Direction = ParameterDirection.Input} , new SqlParameter{ ParameterName="@p", Value ="123456",Direction = ParameterDirection.Input} }; string sql = "EXEC " + procName + " @rolename=@r,@password=@p"; var s = ar.ExecuteStoreCommand(sql, d); When I run ExecuteStoreCommand, this line returns the error: Application roles can only be activated at the ad hoc level.
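
    One hedged workaround (not verified against this exact setup): instead of going through ExecuteStoreCommand, which parameterizes the batch and so is no longer "ad hoc", run sys.sp_setapprole as a plain stored-procedure command on the context's underlying store connection, exactly like the working ADO.NET code, and keep that connection open for the lifetime of the context. AEntities is the question's context type.

      using System.Data;
      using System.Data.EntityClient;
      using System.Data.SqlClient;

      var ctx = new AEntities();
      ctx.Connection.Open();   // open explicitly so EF keeps reusing this same connection

      var storeConn = (SqlConnection)((EntityConnection)ctx.Connection).StoreConnection;
      using (var cmd = storeConn.CreateCommand())
      {
          cmd.CommandType = CommandType.StoredProcedure;
          cmd.CommandText = "sys.sp_setapprole";
          cmd.Parameters.Add(new SqlParameter("@rolename", "AppRole"));
          cmd.Parameters.Add(new SqlParameter("@password", "123456"));
          cmd.ExecuteNonQuery();
      }
      // ... use ctx normally; the application role stays active while the connection stays open.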

    Read the article

  • Setting collation property in the connection string to SQL Server 2005

    - by user369745
    I have an ASP.NET web application with the connection string for SQL Server 2005 in the web.config: Data Source=ABCSERVER;Network Library=DBMSSOCN;Initial Catalog=myDataBase; User ID=myUsername;Password=myPassword; I want to specify the collation property in the web.config for different languages, like French: Data Source=ABCSERVER;Network Library=DBMSSOCN;Initial Catalog=myDataBase; User ID=myUsername;Password=myPassword;Collation=French_CS_AS But the Collation keyword is not valid in the connection string. What is the correct keyword to use to specify the collation in a SQL Server 2005 connection string?
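
    For context, there does not appear to be a collation keyword for SQL Server connection strings; collation is normally fixed at the server, database or column level. A hedged illustration of the usual per-query alternative, reusing the question's connection string and French_CS_AS (the Customers table is a placeholder):

      using System;
      using System.Data.SqlClient;

      const string connStr =
          "Data Source=ABCSERVER;Network Library=DBMSSOCN;Initial Catalog=myDataBase;" +
          "User ID=myUsername;Password=myPassword;";

      using (var conn = new SqlConnection(connStr))
      using (var cmd = new SqlCommand(
          "SELECT Name FROM Customers ORDER BY Name COLLATE French_CS_AS", conn))
      {
          conn.Open();
          using (var rdr = cmd.ExecuteReader())
              while (rdr.Read())
                  Console.WriteLine(rdr.GetString(0));
      }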

    Read the article

  • EF4, self tracking, repository pattern, SQL Server 2008 AND SQL Server Compact

    - by Darren
    Hi, I am creating a project using Entity Framework 4 and self-tracking entities. I want to be able to get the data either from a SQL Server 2008 database or from a SQL Server Compact database (with the switch being in the config file). I am using the repository pattern and I will have the self-tracking entities sitting in a separate assembly. Do I need two edmx files? If so, how do I generate only one set of STEs in the separate assembly? Also, do I need to generate two context classes as well? I am unsure of the plumbing for all this. Can anyone help? Darren I forgot to add that the two databases will be identical and that the Compact version is for offline usage.
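
    A hedged sketch of just the config-switch part: build the EntityConnection string at runtime and point its Provider at either SqlClient or the Compact provider based on an appSetting. The metadata resource names and connection strings below are placeholders, and note that the SSDL embedded in the model records a specific store provider, so in practice a second SSDL (or a second model) per provider is usually still needed.

      using System.Configuration;
      using System.Data.EntityClient;

      static class ConnectionFactory
      {
          // Pick the store provider from config; both stores share the same schema.
          public static EntityConnection Create()
          {
              bool useCompact = ConfigurationManager.AppSettings["UseCompact"] == "true";

              var b = new EntityConnectionStringBuilder
              {
                  Metadata = "res://*/Model.csdl|res://*/Model.ssdl|res://*/Model.msl",  // placeholder
                  Provider = useCompact ? "System.Data.SqlServerCe.3.5" : "System.Data.SqlClient",
                  ProviderConnectionString = useCompact
                      ? @"Data Source=|DataDirectory|\Offline.sdf"                       // placeholder
                      : "Data Source=.;Initial Catalog=MyDb;Integrated Security=True"    // placeholder
              };
              return new EntityConnection(b.ToString());
          }
      }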

    Read the article

  • Unable to attach "AdventureWorks2008" Sample Database to a named Instance in SQL Server 2008

    - by uzorick
    First of all "Northwind" and "AdventureWorksDW2008" databases attached without problem, but "AdventureWorks2008" fails with the following error. // Msg 5120, Level 16, State 105, Line 1 Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents". Operating system error 2: "2(The system cannot find the file specified.)". Msg 5105, Level 16, State 14, Line 1 A file activation error occurred. The physical file name 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents' may be incorrect. Diagnose and correct additional errors, and retry the operation. Msg 1813, Level 16, State 2, Line 1 Could not open new database 'AdventureWorks2008'. CREATE DATABASE is aborted. // PS: I did not use the default database instance "MSSQLSERVER" during install, so Where is it finding this path "C:...\MSSQL10.MSSQLSERVER...\Documents"?

    Read the article

  • Nullable One To One Relationships with Integer Keys in LINQ-to-SQL

    - by Craig Walker
    I have two objects (Foo and Bar) that have a one-to-zero-or-one relationship between them. So, Foo has a nullable foreign key reference to Bar.ID and a (nullbusted) unique index to enforce the "1" side. Bar.ID is an int, and so Foo.BarID is a nullable int. The problem occurs in the LINQ-to-SQL DBML mapping of .NET types to SQL datatypes. Since int is not a nullable type in .NET, it gets wrapped in a Nullable<int>. However, this is not the same type as int, and so Visual Studio gives me this error message when I try to create the OneToOne Association between them: Cannot create an association "Bar_Foo". Properties do not have matching types: "ID", "BarID". Is there a way around this?

    Read the article

  • SQL Server 2008 log issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. Under the logs folder (on my machine it is C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log), there are three kinds of files: ERRORLOG, ERRORLOG.1, ERRORLOG.2 ... ERRORLOG.6; FDLAUNCHERRORLOG, FDLAUNCHERRORLOG.1, FDLAUNCHERRORLOG.2, ... FDLAUNCHERRORLOG.6; log_207.trc, log_208.trc, ... My question is: what are the different functions of these log files? And why are there files ending with .1, .2, etc.? Thanks in advance, George

    Read the article

  • notepad sql Unicode and Non Unicode

    - by RBrattas
    Hi, I have a Microsoft Notepad flat file with data and a vertical bar as the column delimiter. I get the following message: cannot convert between unicode and non-unicode string data types It seems it is my nvarchar(max) that creates my problem. I changed to varchar(max), but still the same problem. How do I insert my flat file into my SQL Server 2005? And in the SQL Server 2005 Import and Export Wizard, on the flat file source Advanced tab, the OutputColumnWidth is 50. Does that mean my flat file column is max 50? I hope not, because my column is more than 50... Thank you, Rune
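
    If the wizard keeps fighting the Unicode conversion, one hedged code-based alternative is to load the pipe-delimited file yourself and push it in with SqlBulkCopy; since .NET strings are already Unicode, they land in nvarchar(max) columns without a conversion step. The path, connection string, table and column names below are placeholders.

      using System.Data;
      using System.Data.SqlClient;
      using System.IO;

      var table = new DataTable();
      table.Columns.Add("Col1", typeof(string));   // placeholder columns, nvarchar on the SQL side
      table.Columns.Add("Col2", typeof(string));

      foreach (var line in File.ReadAllLines(@"C:\data\flatfile.txt"))   // placeholder path
      {
          var parts = line.Split('|');
          table.Rows.Add(parts[0], parts[1]);
      }

      using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
      {
          conn.Open();
          var bulk = new SqlBulkCopy(conn) { DestinationTableName = "dbo.MyTable" };   // placeholder table
          bulk.WriteToServer(table);
          bulk.Close();
      }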

    Read the article

  • Linq to SQL, Repository, IList and Persist All

    - by Dr. Zim
    This discusses a repository which returns IList and also uses Linq to SQL as a DAL. Once you do a .ToList(), the IQueryable object is gone once you exit the repository. This means that I need to send the objects back in to the repo methods .Create(Model model), .Update(Model model), and .Delete(int ID). Assuming that is correct, how do you do the PersistAll()? For example, if you did the following, how would you code that in the repository? Changed a single string property in the object; called .Update(object); changed a different string property in the object; called .Update(object); called .PersistAll(), which would update the database with both changed strings. How would you associate the objects in the repository parameters with the objects in the Linq to SQL data context, especially over multiple calls? I am sure this is a standard thing. Links to examples on the web would be great!
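
    A hedged sketch of one common wiring for this over LINQ to SQL: keep a single DataContext per unit of work inside the repository, Attach incoming detached objects as modified, and let one SubmitChanges flush everything. MyDataContext and Model are stand-ins for the question's types, and Attach(entity, true) generally needs a version/timestamp column (or UpdateCheck set to Never) on the mapped table.

      using System.Linq;

      public class ModelRepository
      {
          private readonly MyDataContext _db = new MyDataContext();   // one unit of work

          public void Create(Model model) { _db.Models.InsertOnSubmit(model); }

          public void Update(Model model)
          {
              // Attach the detached instance as modified so LINQ to SQL emits an UPDATE
              // for it when SubmitChanges runs.
              _db.Models.Attach(model, true);
          }

          public void Delete(int id)
          {
              var existing = _db.Models.Single(m => m.ID == id);
              _db.Models.DeleteOnSubmit(existing);
          }

          // One call persists every pending insert, update and delete in this unit of work.
          public void PersistAll() { _db.SubmitChanges(); }
      }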

    Read the article

  • SQL Server: Database stuck in "Restoring" state

    - by Ian Boyd
    I backed up a database: BACKUP DATABASE MyDatabase TO DISK = 'MyDatabase.bak' WITH INIT --overwrite existing And then tried to restore it: RESTORE DATABASE MyDatabase FROM DISK = 'MyDatabase.bak' WITH REPLACE --force restore over specified database And now the database is stuck in the restoring state. Some people have theorized that it's because there was no log file in the backup, and it needed to be rolled forward using: RESTORE DATABASE MyDatabase WITH RECOVERY Except that, of course, this fails: Msg 4333, Level 16, State 1, Line 1 The database cannot be recovered because the log was not restored. Msg 3013, Level 16, State 1, Line 1 RESTORE DATABASE is terminating abnormally. And exactly what you want in a catastrophic situation is a restore that won't work. The backup contains both a data and a log file: RESTORE FILELISTONLY FROM DISK = 'MyDatabase.bak' Logical Name PhysicalName ============= =============== MyDatabase C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase.mdf MyDatabase_log C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\DATA\MyDatabase_log.LDF

    Read the article

  • SqlDateTime overflow on INSERT when date is correct using a Linq to SQL DataContext

    - by Jan Hoefnagels
    Dear Linq experts, I get an SqlDateTime overflow error (Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.) when doing an INSERT using a Linq DataContext connected to a SQL Server database, when I do the SubmitChanges(). When I use the debugger the date value is correct. Even if I temporarily update the code to set the date value to DateTime.Now, it will not do the insert. Did anybody find a work-around for this behaviour? Maybe there is a way to check what SQL the DataContext submits to the database.
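
    On the last point, LINQ to SQL can show exactly what it submits: the DataContext has a Log property (a TextWriter) that receives the generated SQL, including the INSERT from SubmitChanges and its parameter values. A small sketch, where MyDataContext stands in for the question's context type:

      using System;
      using System.IO;

      var sqlLog = new StringWriter();
      using (var db = new MyDataContext())
      {
          db.Log = sqlLog;              // or Console.Out while debugging
          // ... set up the entity with the offending date and insert it ...
          db.SubmitChanges();
      }
      Console.WriteLine(sqlLog.ToString());   // inspect the exact statement and parameter values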

    Read the article

  • sqlite - any improvements for this attach code (running multiple sql commands transactionally in sql

    - by Greg
    Hi, Is this code solid? I've tried to use "using" etc. Basically it is a method to pass a sequenced list of SQL commands to be run against a Sqlite database. I assume that in sqlite, by default, all commands run on a single connection are handled transactionally. Is this true? i.e. I should not need (and haven't got in the code at the moment) a BeginTransaction or CommitTransaction. It's using http://sqlite.phxsoftware.com/ as the sqlite ADO.NET database provider. private int ExecuteNonQueryTransactionally(List<string> sqlList) { int totalRowsUpdated = 0; using (var conn = new SQLiteConnection(_connectionString)) { // Open connection (one connection so should be transactional - confirm) conn.Open(); // Apply each SQL statement passed in to sqlList foreach (string s in sqlList) { using (var cmd = new SQLiteCommand(conn)) { cmd.CommandText = s; totalRowsUpdated = totalRowsUpdated + cmd.ExecuteNonQuery(); } } } return totalRowsUpdated; }
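
    A hedged variant that does not rely on that assumption: in SQLite each statement normally auto-commits unless a transaction has been opened, so sharing one connection does not by itself make the batch atomic. The same method with an explicit transaction (System.Data.SQLite supports the standard ADO.NET pattern) would look like:

      private int ExecuteNonQueryTransactionally(List<string> sqlList)
      {
          int totalRowsUpdated = 0;
          using (var conn = new SQLiteConnection(_connectionString))
          {
              conn.Open();
              using (var tx = conn.BeginTransaction())   // explicit transaction around all statements
              {
                  foreach (string s in sqlList)
                  {
                      using (var cmd = new SQLiteCommand(s, conn))
                      {
                          cmd.Transaction = tx;
                          totalRowsUpdated += cmd.ExecuteNonQuery();
                      }
                  }
                  tx.Commit();   // if an exception escapes before this, disposing tx rolls everything back
              }
          }
          return totalRowsUpdated;
      }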

    Read the article

  • Using Raw SQL with Doctrine

    - by Levi Hackwith
    I have some extremely complex queries that I need to use to generate a report in my application. I'm using symfony as my framework and doctrine as my ORM. My question is this: What is the best way to pass in highly-complex sql queries directly to Doctrine without converting them to the Doctrine Query Language? I've been reading about the Raw_SQL extension but it appears that you still need to pass the query in sections (like from()). Is there anything for just dumping in a bunch of raw sql commands?

    Read the article

  • Generated SQL with PredicateBuilder, LINQPad and operator ANY

    - by Sig. Tolleranza
    I previously asked a question about chaining conditions in Linq To Entities. Now I use LinqKit and everything works fine. I want to see the generated SQL and, after reading this answer, I use LinqPad. This is my statement: var predProduct = PredicateBuilder.True<Product>(); var predColorLanguage = PredicateBuilder.True<ColorLanguage>(); predProduct = predProduct.And(p => p.IsComplete); predColorLanguage = predColorLanguage.And(c => c.IdColorEntity.Products.AsQueryable().Any(expr)); ColorLanguages.Where(predColorLanguage).Dump(); The code works in VS2008, compiles and produces the correct result set, but in LinqPad I get the following error: NotSupportedException: The overload query operator 'Any' used is not Supported. How can I see the generated SQL if LINQPad fails?
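
    If LINQPad chokes on the expanded predicate, a hedged fallback for seeing the SQL is to ask Entity Framework itself from the VS2008 project where the query already works: an ObjectQuery exposes ToTraceString(), which returns the store command without executing it. Expand() below is LinqKit's expression expander, and context/predColorLanguage are the question's own objects.

      using System;
      using System.Data.Objects;
      using System.Linq;
      using LinqKit;

      var query = context.ColorLanguages.Where(predColorLanguage.Expand());

      var objectQuery = query as ObjectQuery;   // LINQ to Entities queries are ObjectQuery underneath
      Console.WriteLine(objectQuery != null
          ? objectQuery.ToTraceString()         // the generated SQL
          : "Not an ObjectQuery; unwrap the LinqKit wrapper first.");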

    Read the article

  • SQL Server CE - Internal error: Cannot open the shared memory region

    - by blu
    I have a SQL Server CE database that works fine in dev, but when installed on the client has an issue. The SQL Server CE 3.5 dependencies are copied as part of the deployment. The target machine is a clean Windows 7 32-bit Ultimate image. The message for the exception in the event log is: Internal error: Cannot open the shared memory region. It looks like this is SSCE_M_CANTOPENSHAREDMEMORY and the site says there isn't a connection string value to change this and that these issues are typically not resolvable by the end developers. Has anyone run into this, and if so were you able to resolve this issue?

    Read the article

  • How to log subsonic3 sql

    - by bastos.sergio
    Hi, I'm starting to develop a new asp.net application based on subsonic3 (for queries) and log4net (for logs) and would like to know how to interface subsonic3 with log4net so that log4net logs the underlying sql used by subsonic. This is what I have so far: public static IEnumerable<arma_ocorrencium> ListArmasOcorrencia() { if (logger.IsInfoEnabled) { logger.Info("ListarArmasOcorrencia: start"); } var db = new BdvdDB(); var select = from p in db.arma_ocorrencia select p; var results = select.ToList<arma_ocorrencium>(); //Execute the query here if (logger.IsInfoEnabled) { // log sql here } if (logger.IsInfoEnabled) { logger.Info("ListarArmasOcorrencia: end"); } return results; }

    Read the article

  • Query only the first detail record for each master record

    - by Neal S.
    If I have the following master-detail relationship: owner_tbl auto_tbl --------- -------- owner --- owner auto year And I have the following table data: owner_tbl auto_tbl --------- -------- john john, corvette, 1968 john, prius, 2008 james james, f-150, 2004 james, cadillac, 2002 james, accord, 2009 jeff jeff, tesla, 2010 jeff, hyundai, 1996 Now, I want to perform a query that returns the following result: john, corvette, 1968 jeff, hyundai, 1996 james, cadillac, 2002 The query should join the two tables, and sort all the records on the "year" field, but only return the first detail record for each master record. I know how to join the tables and sort on the "year" field, but it's not clear how (or if) I might be able to only retrieve the first joined record for each owner. Three related questions: Can I perform this kind of query using LINQ-to-SQL? Can I perform the query using T-SQL? Would it be best to just create a stored procedure for the query given its likely complexity?
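
    On the T-SQL side, one hedged way to express "first detail row per master, ordered by year" is ROW_NUMBER() partitioned by owner; whether the LINQ-to-SQL translation of an equivalent GroupBy/First query is acceptable can then be judged against it. The sketch below simply runs that SQL from .NET; the connection string is a placeholder and the table/column names follow the question's layout.

      using System;
      using System.Data.SqlClient;

      const string sql = @"
          SELECT owner, auto, [year]
          FROM (
              SELECT a.owner, a.auto, a.[year],
                     ROW_NUMBER() OVER (PARTITION BY a.owner ORDER BY a.[year]) AS rn
              FROM auto_tbl a
              JOIN owner_tbl o ON o.owner = a.owner
          ) x
          WHERE rn = 1
          ORDER BY [year];";

      using (var conn = new SqlConnection("Data Source=.;Initial Catalog=MyDb;Integrated Security=True"))
      using (var cmd = new SqlCommand(sql, conn))
      {
          conn.Open();
          using (var rdr = cmd.ExecuteReader())
              while (rdr.Read())
                  Console.WriteLine("{0}, {1}, {2}", rdr[0], rdr[1], rdr[2]);
      }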

    Read the article
