Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.

Page 289 of 3184

  • Using unordered_multimap as entity and component storage

    - by natebot13
    The Setup: I've made a few games (more like animations) using the object-oriented method, with base classes for objects that extend them, and objects that extend those, and found I couldn't wrap my head around expanding that system to larger game ideas. So I did some research and discovered the entity-component system of designing games. I really like the idea, and thoroughly understood its usefulness after reading Byte54's perfect answer here: Role of systems in entity systems architecture. With that said, I have decided to create my current game idea using the described entity-component system. Having basic knowledge of C++ and SFML, I would like to implement the backbone of this entity-component system using an unordered_multimap, without classes for the entities themselves.

    Here's the idea: an unordered_multimap stores entity IDs as the lookup term, while the value is an inherited Component object. Example:

        ID | Component
        ---+-----------
        0  | Movable
        0  | Accelable
        0  | Renderable
        1  | Movable
        1  | Renderable
        2  | Renderable

    So, according to this map of objects, the entity with ID 0 has three components: Movable, Accelable, and Renderable. These component objects store the entity-specific data, such as the location, the acceleration, and render flags. The entity is simply an ID, with the components attached to that ID describing its attributes.

    Problem: I want to store the component objects within the map, allowing the map to have full ownership of the components. The problem I'm having is that I don't quite understand enough about pointers, shared pointers, and references to get that set up. How can I go about initializing these components, with their various member variables, within the unordered_multimap? Can the base component class take on the member variables of its child classes when defining the map as unordered_multimap<int, component>?

    Requirements: I need a system to be able to grab an entity, with all of its attached components, and access members from the components in order to do the necessary calculations and reassignments for position, velocity, etc.

    Need a clarification? Post a comment with your concerns and I will gladly edit or comment back! Thanks in advance! natebot13
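
    A minimal sketch of how the ownership question is often handled, assuming only the Component/Movable/Renderable names from the question (everything else is illustrative): store smart pointers in the multimap so the map owns the components, and downcast when a system needs the concrete type.

        // Sketch only: the map owns components via unique_ptr; systems downcast as needed.
        #include <iostream>
        #include <memory>
        #include <unordered_map>

        struct Component {
            virtual ~Component() = default;      // virtual dtor so deleting through the base works
        };

        struct Movable : Component {
            float x = 0.f, y = 0.f;              // entity-specific data lives in the derived component
        };

        struct Renderable : Component {
            bool visible = true;
        };

        int main() {
            std::unordered_multimap<int, std::unique_ptr<Component>> components;

            // Initialize components for entity 0; the map takes ownership.
            components.emplace(0, std::make_unique<Movable>());
            components.emplace(0, std::make_unique<Renderable>());

            // A "movement system": visit entity 0's components and act on the Movable ones.
            auto range = components.equal_range(0);
            for (auto it = range.first; it != range.second; ++it) {
                if (auto* m = dynamic_cast<Movable*>(it->second.get())) {
                    m->x += 1.0f;                // the base class never takes on child members
                    std::cout << "entity 0 moved to x=" << m->x << '\n';
                }
            }
        }

    The base class cannot absorb its children's member variables, which is why the map stores pointers to the base rather than base objects by value; unique_ptr keeps the "map has full ownership" requirement explicit, while shared_ptr would fit if systems need to hold components beyond the map's lifetime.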

    Read the article

  • Using SQL tables for storing user created level stats. Is there a better way?

    - by Ivan
    I am developing a racing game in which players can create their own tracks and upload them to a server. Players will be able to compare their best track times to their friends' and see world records. I was going to generate a table for each track submitted, to store the best times of each player who plays the track. However, I can't predict how many tracks will be uploaded, and I imagine too many tables might cause problems - or is this a valid method? I considered saving each player's best times in a string in a single table field, like so: level1:00.45;level2:00.43;level3:00.12. If I did this I wouldn't need a separate table for each level (each level could just have a row in a 'WorldRecords' table). However, this just causes another problem, because the text would eventually reach the limit for varchar length. I also considered storing the times data in XML files. This would avoid database issues, and server disk space can be increased if needed. But I imagine this would be very slow: to update one player's best time on one level, I would have to check every node in the file to find their time record to update. Apologies for the wall of text. Any suggestions would be appreciated.
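
    For reference, a minimal sketch of the usual relational alternative: one row per (track, player) pair in a single table, rather than one table per track or delimited strings. Table and column names here are illustrative assumptions, not part of the question.

        -- One table holds every player's best time on every uploaded track.
        CREATE TABLE BestTimes (
            TrackId   INT      NOT NULL,   -- which uploaded track
            PlayerId  INT      NOT NULL,   -- which player set the time
            BestTime  INT      NOT NULL,   -- e.g. milliseconds
            SetOn     DATETIME NOT NULL,
            CONSTRAINT PK_BestTimes PRIMARY KEY (TrackId, PlayerId)
        );

        -- Updating a personal best touches exactly one row:
        UPDATE BestTimes
        SET BestTime = 43210, SetOn = GETDATE()
        WHERE TrackId = 7 AND PlayerId = 42 AND BestTime > 43210;

        -- World record for a track:
        SELECT TOP 1 PlayerId, BestTime
        FROM BestTimes
        WHERE TrackId = 7
        ORDER BY BestTime ASC;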

    Read the article

  • Hidden Gems: Accelerating Oracle Data Integrator with SOA, Groovy, SDK, and XML

    - by Alex Kotopoulis
    On the last day of Oracle OpenWorld, we had a final advanced session on getting the most out of Oracle Data Integrator through the use of various advanced techniques. The primary way to improve your ODI processes is to choose the optimal knowledge modules for your load and take advantage of the optimized tools of your database, such as Oracle Data Pump and similar mechanisms in other databases. Knowledge modules also allow you to customize tasks, allowing you to codify best practices that are consistently applied by all integration developers. The ODI SDK is another very powerful means to automate and speed up your integration development process. It allows you to automate life cycle management, code comparison, repetitive code generation, and changes to your integration projects. The SDK is easily accessible through Java or scripting languages such as Groovy and Jython. Finally, all Oracle Data Integration products provide services that can be integrated into a larger Service Oriented Architecture. This moves data integration from an isolated environment into an agile part of a larger business process environment. All Oracle data integration products can play a part in this: Oracle GoldenGate can integrate into business event streams by processing JMS queues or publishing new events based on database transactions. Oracle Data Integrator allows full control of its runtime sessions through web services, so that integration jobs can become part of business processes. Oracle Data Service Integrator provides a data virtualization layer over your distributed sources, allowing unified reading and updating of heterogeneous data without replicating and moving it. Oracle Enterprise Data Quality provides data quality services to cleanse and deduplicate your records through web services.

    Read the article

  • What's the proper term for a function inverse to a constructor - to unwrap a value from a data type?

    - by Petr Pudlák
    Edit: I'm rephrasing the question a bit. Apparently I caused some confusion because I didn't realize that the term destructor is used in OOP for something quite different - a function invoked when an object is being destroyed. In functional programming we (try to) avoid mutable state, so there is no equivalent to it. (I added the proper tag to the question.) Instead, I've seen that the record field for unwrapping a value (especially for single-valued data types such as newtypes) is sometimes called a destructor, or perhaps a deconstructor. For example, let's have (in Haskell): newtype Wrap = Wrap { unwrap :: Int }. Here Wrap is the constructor - and unwrap is what? The questions are: What do we call unwrap in functional programming? Deconstructor? Destructor? Or some other term? And to clarify, is this (or other) terminology applicable to other functional languages, or is it used just in Haskell? Perhaps also, is there any terminology for this in general, in non-functional languages? I've seen both terms, for example: "... Most often, one supplies smart constructors and destructors for these to ease working with them. ..." at the Haskell wiki, or "... The general theme here is to fuse constructor - deconstructor pairs like ..." at the Haskell wikibook (here it's probably meant in a slightly more general sense), or "newtype DList a = DL { unDL :: [a] -> [a] } ... The unDL function is our deconstructor, which removes the DL constructor." in Real World Haskell.

    Read the article

  • how to enable SQL Application Role via Entity Framework

    - by Ehsan Farahani
    I'm now developing a big government application with Entity Framework. At first I had a problem enabling a SQL application role. With ADO.NET I use the code below:

        SqlCommand cmd = new SqlCommand("sys.sp_setapprole");
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = _sqlConn;

        SqlParameter paramAppRoleName = new SqlParameter();
        paramAppRoleName.Direction = ParameterDirection.Input;
        paramAppRoleName.ParameterName = "@rolename";
        paramAppRoleName.Value = "AppRole";
        cmd.Parameters.Add(paramAppRoleName);

        SqlParameter paramAppRolePwd = new SqlParameter();
        paramAppRolePwd.Direction = ParameterDirection.Input;
        paramAppRolePwd.ParameterName = "@password";
        paramAppRolePwd.Value = "123456";
        cmd.Parameters.Add(paramAppRolePwd);

        SqlParameter paramCreateCookie = new SqlParameter();
        paramCreateCookie.Direction = ParameterDirection.Input;
        paramCreateCookie.ParameterName = "@fCreateCookie";
        paramCreateCookie.DbType = DbType.Boolean;
        paramCreateCookie.Value = 1;
        cmd.Parameters.Add(paramCreateCookie);

        SqlParameter paramEncrypt = new SqlParameter();
        paramEncrypt.Direction = ParameterDirection.Input;
        paramEncrypt.ParameterName = "@encrypt";
        paramEncrypt.Value = "none";
        cmd.Parameters.Add(paramEncrypt);

        SqlParameter paramEnableCookie = new SqlParameter();
        paramEnableCookie.ParameterName = "@cookie";
        paramEnableCookie.DbType = DbType.Binary;
        paramEnableCookie.Direction = ParameterDirection.Output;
        paramEnableCookie.Size = 1000;
        cmd.Parameters.Add(paramEnableCookie);

        try
        {
            cmd.ExecuteNonQuery();
            SqlParameter outVal = cmd.Parameters["@cookie"];
            // Store the enabled cookie so that the approle can be disabled with the cookie.
            _appRoleEnableCookie = (byte[]) outVal.Value;
        }
        catch (Exception ex)
        {
            result = false;
            msg = "Could not execute enable approle proc." + Environment.NewLine + ex.Message;
        }

    But no matter how much I searched, I could not find a way to implement this with EF. Another question: how do I add an application role to the Entity Data Model designer? I'm using the code below to execute the procedure with parameters through EF:

        AEntities ar = new AEntities();
        DbConnection con = ar.Connection;
        con.Open();
        msg = "";
        bool result = true;

        DbCommand cmd = con.CreateCommand();
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Connection = con;

        var d = new DbParameter[]
        {
            new SqlParameter { ParameterName = "@r", Value = "AppRole", Direction = ParameterDirection.Input },
            new SqlParameter { ParameterName = "@p", Value = "123456", Direction = ParameterDirection.Input }
        };
        string sql = "EXEC " + procName + " @rolename=@r,@password=@p";
        var s = ar.ExecuteStoreCommand(sql, d);

    When ExecuteStoreCommand runs, this line returns the error: Application roles can only be activated at the ad hoc level.
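
    A hedged sketch of one approach sometimes used with EF4 (not a confirmed fix): drop down to the underlying store connection and call sp_setapprole as a stored-procedure RPC on that same open connection, rather than inside an EXEC batch, so the role stays active for the life of the connection. "MyEntities" is a placeholder for the generated ObjectContext; adjust names to the real model.

        // Sketch only. Requires System.Data, System.Data.Common, System.Data.EntityClient,
        // and System.Data.SqlClient namespaces.
        public static MyEntities OpenContextWithAppRole()
        {
            var context = new MyEntities();
            context.Connection.Open();   // keep this connection open so the role stays active

            var storeConnection = ((EntityConnection)context.Connection).StoreConnection;

            using (DbCommand cmd = storeConnection.CreateCommand())
            {
                cmd.CommandType = CommandType.StoredProcedure;   // sent as an RPC, not a batch
                cmd.CommandText = "sys.sp_setapprole";
                cmd.Parameters.Add(new SqlParameter("@rolename", "AppRole"));
                cmd.Parameters.Add(new SqlParameter("@password", "123456"));
                cmd.ExecuteNonQuery();
            }

            // Queries made through this context now run under the application role,
            // as long as the connection is not closed or returned to the pool.
            return context;
        }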

    Read the article

  • Setting collation property in the connection string to SQL Server 2005

    - by user369745
    I have an ASP.NET web application with a connection string for SQL Server 2005 in web.config: Data Source=ABCSERVER;Network Library=DBMSSOCN;Initial Catalog=myDataBase;User ID=myUsername;Password=myPassword; I want to specify the collation property in web.config for different languages, such as French: Data Source=ABCSERVER;Network Library=DBMSSOCN;Initial Catalog=myDataBase;User ID=myUsername;Password=myPassword;Collation=French_CS_AS But the Collation keyword is not valid in the connection string. What is the correct keyword to use to specify the collation in a SQL Server 2005 connection string?
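
    As far as I can tell, SqlClient has no collation keyword at all - collation in SQL Server 2005 is a property of the server, database, or column rather than of the connection. A hedged sketch of applying a French collation at the query or column level instead (table and column names are made up for illustration):

        -- Per expression: sort/compare one column under French, case- and accent-sensitive rules
        SELECT CustomerName
        FROM dbo.Customers
        ORDER BY CustomerName COLLATE French_CS_AS;

        -- Per column: make the collation permanent on the column definition
        ALTER TABLE dbo.Customers
        ALTER COLUMN CustomerName NVARCHAR(100) COLLATE French_CS_AS;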

    Read the article

  • Unable to attach "AdventureWorks2008" Sample Database to a named Instance in SQL Server 2008

    - by uzorick
    First of all "Northwind" and "AdventureWorksDW2008" databases attached without problem, but "AdventureWorks2008" fails with the following error. // Msg 5120, Level 16, State 105, Line 1 Unable to open the physical file "C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents". Operating system error 2: "2(The system cannot find the file specified.)". Msg 5105, Level 16, State 14, Line 1 A file activation error occurred. The physical file name 'C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\DATA\Documents' may be incorrect. Diagnose and correct additional errors, and retry the operation. Msg 1813, Level 16, State 2, Line 1 Could not open new database 'AdventureWorks2008'. CREATE DATABASE is aborted. // PS: I did not use the default database instance "MSSQLSERVER" during install, so Where is it finding this path "C:...\MSSQL10.MSSQLSERVER...\Documents"?

    Read the article

  • EF4, self tracking, repository pattern, SQL Server 2008 AND SQL Server Compact

    - by Darren
    Hi, I am creating a project using Entity Framework 4 and self-tracking entities. I want to be able to get the data either from a SQL Server 2008 database or from a SQL Server Compact database (with the switch being in the config file). I am using the repository pattern, and I will have the self-tracking entities sitting in a separate assembly. Do I need two edmx files? If so, how do I generate only one set of STEs in the separate assembly? Also, do I need to generate two context classes as well? I am unsure of the plumbing for all this. Can anyone help? Darren. I forgot to add that the two databases will be identical and that the compact version is for offline usage.

    Read the article

  • SQL Server 2008 log issue

    - by George2
    Hello everyone, I am using SQL Server 2008 Enterprise. Under the logs folder - on my machine it is C:\Program Files\Microsoft SQL Server\MSSQL10.MSSQLSERVER\MSSQL\Log - there are three kinds of files: ERRORLOG, ERRORLOG.1, ERRORLOG.2 ... ERRORLOG.6; FDLAUNCHERRORLOG, FDLAUNCHERRORLOG.1, FDLAUNCHERRORLOG.2, ... FDLAUNCHERRORLOG.6; and log_207.trc, log_208.trc, ... My question is: what are the different functions of these log files? And why are there files ending with .1, .2, etc.? Thanks in advance, George

    Read the article

  • Nullable One To One Relationships with Integer Keys in LINQ-to-SQL

    - by Craig Walker
    I have two objects (Foo and Bar) that have a one-to-zero-or-one relationship between them. So, Foo has a nullable foreign key reference to Bar.ID and a (nullbusted) unique index to enforce the "1" side. Bar.ID is an int, and so Foo.BarID is a nullable int. The problem occurs in the LINQ-to-SQL DBML mapping of .NET types to SQL datatypes. Since int is not a nullable type in .NET, it gets wrapped in a Nullable<int>. However, this is not the same type as int, and so Visual Studio gives me this error message when I try to create the OneToOne Association between them: Cannot create an association "Bar_Foo". Properties do not have matching types: "ID", "BarID". Is there a way around this?

    Read the article

  • SQL Server 2005 high memory usage and performance problems

    - by emzero
    Hi there guys. I have an ASP.NET / SQL Server 2005 website running on a production server (Win2003, quad core, 4 GB). The site normally runs smoothly, but after 2-3 weeks I notice slow performance on the site (specifically on one particular page). I also notice that the SQL Server process is using about 2 GB of RAM. So I restart the service, the site runs fast again, and the process drops to 300-400 MB. I'm looking for an explanation of why this is happening. What is SQL Server storing in RAM that takes so much space and degrades performance? What can I do to avoid this? I'm trying to avoid restarting SQL Server every time this happens. Thank you!
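
    For what it's worth, the growth described here is usually just the buffer cache: SQL Server 2005 keeps data pages in memory and, by default, keeps taking memory until the operating system pushes back. A hedged sketch of capping it instead of restarting (the 2048 MB figure is an arbitrary example, not a recommendation for this particular box):

        -- Cap SQL Server's memory so the buffer cache stops growing unbounded
        EXEC sp_configure 'show advanced options', 1;
        RECONFIGURE;
        EXEC sp_configure 'max server memory (MB)', 2048;  -- pick a value that leaves room for IIS/OS
        RECONFIGURE;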

    Read the article

  • notepad sql Unicode and Non Unicode

    - by RBrattas
    Hi, I have a Microsoft Notepad flat file with data and a vertical bar as the column delimiter. I get the following message: "cannot convert between unicode and non-unicode string data types". It seems my nvarchar(max) creates the problem. I changed to varchar(max), but still the same problem. How do I insert my flat file into SQL Server 2005? Also, in the SQL Server 2005 Import and Export Wizard, on the flat file source's Advanced tab, the OutputColumnWidth is 50. Does that mean my flat file column is limited to 50? I hope not, because my column is longer than 50... Thank you, Rune

    Read the article

  • Linq to SQL, Repository, IList and Persist All

    - by Dr. Zim
    This discusses a repository which returns an IList and also uses LINQ to SQL as a DAL. Once you do a .ToList(), the IQueryable object is gone when you exit the repository. This means that I need to send the objects back into the repository methods .Create(Model model), .Update(Model model), and .Delete(int ID). Assuming that is correct, how do you do the PersistAll()? For example, if you did the following, how would you code that in the repository? Changed a single string property on the object; called .Update(object); changed a different string property on the object; called .Update(object); called .PersistAll(), which would update the database with both changed strings. How would you associate the objects in the repository parameters with the objects in the LINQ to SQL data context, especially over multiple calls? I am sure this is a standard thing. Links to examples on the web would be great!
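
    A minimal sketch of one common arrangement, assuming a hypothetical Product entity and MyDataContext generated by LINQ to SQL (neither name comes from the question): the repository keeps one DataContext as its unit of work, Update copies incoming values onto the instance the context already tracks, and PersistAll defers everything to a single SubmitChanges.

        using System;
        using System.Linq;

        public class ProductRepository : IDisposable
        {
            // One DataContext per repository instance acts as the unit of work.
            private readonly MyDataContext _db = new MyDataContext();

            public Product Get(int id)
            {
                return _db.Products.Single(p => p.Id == id);
            }

            public void Update(Product changed)
            {
                // Re-associate by key: copy the incoming values onto the tracked instance.
                Product tracked = _db.Products.Single(p => p.Id == changed.Id);
                tracked.Name = changed.Name;
                // ...copy the other properties; nothing is sent to the database yet.
            }

            public void PersistAll()
            {
                _db.SubmitChanges();   // all pending changes go to the database in one call
            }

            public void Dispose()
            {
                _db.Dispose();
            }
        }

    Calling Update twice with different properties changed just mutates the tracked instance twice; the change tracker folds them into one UPDATE when SubmitChanges runs.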

    Read the article

  • SQL - .NET - SqlParameters - AddWithValue - Are there any negative performance implications when Par

    - by hamlin11
    http://msdn.microsoft.com/en-us/library/system.data.sqlclient.sqlparametercollection.addwithvalue.aspx I'm used to adding SQL parameters to a SqlCommand using the Add() method. This allows me to specify the type of the SqlParameter, but it requires another line to set the value. It's nice to use the AddWithValue method, but it skips the "specify the parameter type" step. I'm guessing this causes the parameters to be sent over as strings contained within single quotes (''), but I'm not sure. Is this the case, and does this cause significantly slower performance of the stored procedures? Note: I understand that it is nice to validate user data on the .NET side of things by specifying the data type for params - I'm only concerned about reflection-type overhead of AddWithValue, either on the .NET or the SQL side.
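
    For reference, a small sketch contrasting the two call styles (procedure, parameter, and connection names are made up). As far as I know, AddWithValue does not send everything as a quoted string; it infers the SqlDbType from the CLR type of the value at run time, which is where subtle mismatches (for example, every string becoming nvarchar) tend to come from.

        // Assumes an open SqlConnection named connection; uses System.Data and System.Data.SqlClient.
        using (var cmd = new SqlCommand("dbo.GetOrders", connection))
        {
            cmd.CommandType = CommandType.StoredProcedure;

            // Explicit: two steps, but the parameter goes over as exactly the SQL type you name.
            cmd.Parameters.Add("@CustomerId", SqlDbType.Int).Value = 42;

            // AddWithValue: one step; the type is inferred from the value
            // (an int stays int, a string becomes nvarchar sized to the value).
            cmd.Parameters.AddWithValue("@Region", "West");

            cmd.ExecuteNonQuery();
        }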

    Read the article

  • SqlDateTime overflow on INSERT when date is correct using a Linq to SQL DataContext

    - by Jan Hoefnagels
    Dear LINQ experts, I get a SqlDateTime overflow error ("Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.") when doing an INSERT using a LINQ DataContext connected to a SQL Server database, at the point where I call SubmitChanges(). When I use the debugger, the date value is correct. Even if I temporarily update the code to set the date value to DateTime.Now, it will not do the insert. Did anybody find a work-around for this behaviour? Maybe there is a way to check what SQL the DataContext submits to the database.
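
    On that last point, LINQ to SQL can echo the exact SQL and parameter values it sends: the DataContext exposes a Log property that takes any TextWriter. A small sketch (the MyDataContext/Orders names are placeholders, not from the question):

        using System;
        using System.IO;

        // ...
        using (var db = new MyDataContext())
        using (var writer = new StringWriter())
        {
            db.Log = writer;                            // every command text + parameters lands here

            db.Orders.InsertOnSubmit(new Order { CreatedOn = DateTime.Now });
            try
            {
                db.SubmitChanges();
            }
            finally
            {
                Console.WriteLine(writer.ToString());   // inspect the INSERT that actually went out
            }
        }

    Seeing the logged parameter values usually reveals which column is still receiving an uninitialized DateTime (0001-01-01), a common cause of this overflow even when the property you are watching looks correct.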

    Read the article

  • Using Raw SQL with Doctrine

    - by Levi Hackwith
    I have some extremely complex queries that I need to use to generate a report in my application. I'm using symfony as my framework and doctrine as my ORM. My question is this: What is the best way to pass in highly-complex sql queries directly to Doctrine without converting them to the Doctrine Query Language? I've been reading about the Raw_SQL extension but it appears that you still need to pass the query in sections (like from()). Is there anything for just dumping in a bunch of raw sql commands?

    Read the article

  • SQL Server CE - Internal error: Cannot open the shared memory region

    - by blu
    I have a SQL Server CE database that works fine in dev, but when installed on the client has an issue. The SQL Server CE 3.5 dependencies are copied as part of the deployment. The target machine is a clean Windows 7 32-bit Ultimate image. The message for the exception in the event log is: Internal error: Cannot open the shared memory region. It looks like this is SSCE_M_CANTOPENSHAREDMEMORY and the site says there isn't a connection string value to change this and that these issues are typically not resolvable by the end developers. Has anyone run into this, and if so were you able to resolve this issue?

    Read the article

  • sqlite - any improvements for this attach code (running multiple sql commands transactionally in sql

    - by Greg
    Hi, Is this code solid? I've tried to use "using" etc. Basically it's a method that runs a sequenced list of SQL commands against a SQLite database. I assume that in SQLite all commands run on a single connection are handled transactionally by default - is this true? i.e. I should not have to have (and currently don't have in the code) a BeginTransaction or CommitTransaction. It's using http://sqlite.phxsoftware.com/ as the SQLite ADO.NET database provider.

        private int ExecuteNonQueryTransactionally(List<string> sqlList)
        {
            int totalRowsUpdated = 0;
            using (var conn = new SQLiteConnection(_connectionString))
            {
                // Open connection (one connection so should be transactional - confirm)
                conn.Open();

                // Apply each SQL statement passed in to sqlList
                foreach (string s in sqlList)
                {
                    using (var cmd = new SQLiteCommand(conn))
                    {
                        cmd.CommandText = s;
                        totalRowsUpdated = totalRowsUpdated + cmd.ExecuteNonQuery();
                    }
                }
            }
            return totalRowsUpdated;
        }
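
    A caution worth hedging: with System.Data.SQLite, as with ADO.NET providers generally, each command normally auto-commits on its own unless a transaction is opened explicitly, so a single connection by itself does not make the whole list atomic. A sketch of the same method with an explicit transaction (treat it as a sketch, not a verified drop-in):

        private int ExecuteNonQueryTransactionally(List<string> sqlList)
        {
            int totalRowsUpdated = 0;
            using (var conn = new SQLiteConnection(_connectionString))
            {
                conn.Open();
                using (SQLiteTransaction tx = conn.BeginTransaction())
                {
                    foreach (string s in sqlList)
                    {
                        using (var cmd = new SQLiteCommand(s, conn))
                        {
                            cmd.Transaction = tx;          // enlist the command explicitly
                            totalRowsUpdated += cmd.ExecuteNonQuery();
                        }
                    }
                    tx.Commit();                           // all statements persist or none do
                }   // if Commit() was never reached, Dispose() rolls the transaction back
            }
            return totalRowsUpdated;
        }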

    Read the article

  • Generated SQL with PredicateBuilder, LINQPad and operator ANY

    - by Sig. Tolleranza
    I previously asked a question about chaining conditions in LINQ to Entities. Now I use LinqKit and everything works fine. I want to see the generated SQL, and after reading this answer, I use LINQPad. This is my statement:

        var predProduct = PredicateBuilder.True<Product>();
        var predColorLanguage = PredicateBuilder.True<ColorLanguage>();

        predProduct = predProduct.And(p => p.IsComplete);
        predColorLanguage = predColorLanguage.And(c => c.IdColorEntity.Products.AsQueryable().Any(expr));

        ColorLanguages.Where(predColorLanguage).Dump();

    The code works in VS2008, compiles, and produces the correct result set, but in LINQPad I get the following error: NotSupportedException: The overload query operator 'Any' used is not Supported. How can I see the generated SQL if LINQPad fails?

    Read the article

  • How to log subsonic3 sql

    - by bastos.sergio
    Hi, I'm starting to develop a new ASP.NET application based on SubSonic 3 (for queries) and log4net (for logs) and would like to know how to hook SubSonic 3 into log4net so that log4net logs the underlying SQL used by SubSonic. This is what I have so far:

        public static IEnumerable<arma_ocorrencium> ListArmasOcorrencia()
        {
            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: start");
            }

            var db = new BdvdDB();

            var select = from p in db.arma_ocorrencia
                         select p;

            var results = select.ToList<arma_ocorrencium>(); // Execute the query here

            if (logger.IsInfoEnabled)
            {
                // log sql here
            }

            if (logger.IsInfoEnabled)
            {
                logger.Info("ListarArmasOcorrencia: end");
            }

            return results;
        }

    Read the article

  • Query only the first detail record for each master record

    - by Neal S.
    If I have the following master-detail relationship:

        owner_tbl      auto_tbl
        ---------      --------
        owner          owner
                       auto
                       year

    And I have the following table data:

        owner_tbl      auto_tbl
        ---------      --------
        john           john, corvette, 1968
                       john, prius, 2008
        james          james, f-150, 2004
                       james, cadillac, 2002
                       james, accord, 2009
        jeff           jeff, tesla, 2010
                       jeff, hyundai, 1996

    Now, I want to perform a query that returns the following result:

        john, corvette, 1968
        jeff, hyundai, 1996
        james, cadillac, 2002

    The query should join the two tables and sort all the records on the "year" field, but only return the first detail record for each master record. I know how to join the tables and sort on the "year" field, but it's not clear how (or if) I can retrieve only the first joined record for each owner. Three related questions: Can I perform this kind of query using LINQ to SQL? Can I perform the query using T-SQL? Would it be best to just create a stored procedure for the query given its likely complexity?
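
    On the T-SQL side, a sketch of the usual approach: number each owner's autos by year with ROW_NUMBER and keep row 1. Column names follow the tables above; this is an illustration, not something tested against the real schema.

        ;WITH ranked AS
        (
            SELECT o.owner,
                   a.auto,
                   a.[year],
                   ROW_NUMBER() OVER (PARTITION BY o.owner ORDER BY a.[year]) AS rn
            FROM owner_tbl AS o
            JOIN auto_tbl  AS a ON a.owner = o.owner
        )
        SELECT owner, auto, [year]
        FROM ranked
        WHERE rn = 1                 -- earliest auto per owner
        ORDER BY [year];             -- corvette 1968, hyundai 1996, cadillac 2002

    The same shape works in LINQ to SQL as a group-by on owner followed by taking the entry with the minimum year from each group, and either version could be wrapped in a stored procedure if it grows more involved.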

    Read the article

  • SQL Server CE - Temporarily disable auto increment on a specific column

    - by Fábio Antunes
    Hi guys. I have this little question that's been on my mind for a while now. Here it goes: is it possible to temporarily disable the auto-increment on the column ID, so that I can add a new row to the table and specify the ID value when inserting the row, and then in the end enable the auto-increment again and let it do its work as usual? And if it's possible, how can I do it? The table structure is very simple:

        Column    Attributes
        ------    ----------
        ID        Primary Key, Auto Increment, int, not null
        Name      nvarchar(100), not null

    Notice: the table name is People. Let's also consider that the table already has data and cannot be changed. The database server is SQL Server CE. The SQL commands will be executed from a C# program, if that's of any help. I really hope it's possible, it would come in very handy. Thanks
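
    If SQL Server Compact honors SET IDENTITY_INSERT the way full SQL Server does (I believe it does, but verify on the version in use), a sketch from C# would look like this; the connection string and the ID value 42 are placeholders:

        // Sketch: temporarily allow explicit IDs on the People identity column.
        // Uses System.Data.SqlServerCe.
        using (var conn = new SqlCeConnection("Data Source=MyData.sdf"))   // placeholder path
        {
            conn.Open();

            using (var cmd = conn.CreateCommand())
            {
                cmd.CommandText = "SET IDENTITY_INSERT People ON";
                cmd.ExecuteNonQuery();

                // The column list must be explicit when supplying the identity value.
                cmd.CommandText = "INSERT INTO People (ID, Name) VALUES (42, 'Jane Doe')";
                cmd.ExecuteNonQuery();

                cmd.CommandText = "SET IDENTITY_INSERT People OFF";        // back to auto increment
                cmd.ExecuteNonQuery();
            }
        }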

    Read the article

  • OpenDataSource fails pls help

    - by Vivek Chandraprakash
    I'm trying to export records from SQL Server 2008 to an .mdb file using OpenDataSource. It works when I log in using Windows authentication, but it fails when I use SQL Server authentication. This is the error I get:

        OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" returned message "Could not delete from specified tables.".
        Msg 7320, Level 16, State 2, Procedure EXPORT_Employee, Line 110
        Cannot execute the query "DELETE FROM employee_export" against OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)".

    Thanks, Vivek

    Read the article

  • ASP .NET: SQL Server Money Type and .NET Currency Type

    - by Rudi Ramey
    MS SQL Server's money data type seems to accept a well-formatted currency value with no problem (example: $52,334.50). From my research, SQL Server just ignores the "$" and "," characters. ASP.NET has a parameter object with a Type/DbType property, and Currency is an available option to set as a value. However, when I set the parameter Type or DbType to Currency it will not accept a value like $52,334.50; I receive the error "Input string was not in a correct format." when I try to update/insert. If I don't include the "$" or "," characters it seems to work fine. Also, if I don't specify the Type or DbType for the parameter it seems to work fine too. Is this just standard behavior, that a parameter object with its type set to Currency still rejects "$" and "," characters in ASP.NET? Here's an example of the parameter declaration (in the .aspx page): <asp:Parameter Name="ImplementCost" DbType="Currency" />
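
    A detail that may explain the symptom: before the value ever reaches SQL Server, the .NET side has to parse the string into a decimal, and the default parse does not accept a currency symbol unless told to. A small sketch of the difference (culture chosen explicitly as an example; this is not the data source control's own code path):

        using System;
        using System.Globalization;

        class CurrencyParseDemo
        {
            static void Main()
            {
                string input = "$52,334.50";

                // decimal.Parse(input) with the default NumberStyles throws FormatException:
                // the '$' currency symbol is not allowed (the thousands separator alone would be).

                // Explicitly allowing currency formatting parses it cleanly:
                decimal value = decimal.Parse(input,
                                              NumberStyles.Currency,
                                              CultureInfo.GetCultureInfo("en-US"));

                Console.WriteLine(value);   // 52334.50
            }
        }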

    Read the article
