Search Results

Search found 8943 results on 358 pages for 'audit tables'.

Page 309 of 358

  • Deploying ASP.Net MVC application

    - by a_m0d
    I've recently reached the stage where an ASP.NET MVC application I am developing is ready to be deployed to the production server. I've worked out how to publish the application - I've got all the files on the server, and can access them over the internet. However, I can't work out how to deploy my database. The server has SQL Server Management Studio Express installed, as the database used is a SQL Server Express database. I have the server instance up and running - I just don't know how to add the tables, etc. to the database. I have created the "CREATE TABLE" scripts on the development machine, but as far as I can see, Management Studio does not provide any way to actually run these scripts. I have looked through all the menu items that I could see, and none of them worked. Even using the "Create new query..." option and pasting the script in didn't work. When I try "File > Open...", select a script to run, set the correct database from the dropdown list on the toolbar, and then execute the script, it complains about not finding the database file (even when I set the USE [...] statement to the correct path). If I delete the USE [...] statement, the script complains that it can't find the [dbo].[Invoices] object; of course it can't find it - it's trying to create it! tl;dr: What's the best way to make sure that the database on the production machine matches the database on my development machine?
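    A minimal way to run a generated script from the command line, sketched under two assumptions - the default SQL Express instance name and a target database that already exists (even if empty):

        sqlcmd -S .\SQLEXPRESS -d MyDatabase -i create_tables.sql

    sqlcmd ships with SQL Server 2005 and later; -S picks the server/instance, -d the target database, and -i the script file, so the script itself needs no USE [...] statement.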

    Read the article

  • Constructors and Destructors in C++ [Not a question] [closed]

    - by Jack
    I am using gcc. Please tell me if I am wrong. Let's say I have two classes, A and B:

        class A {
        public:
            A() { cout << "A constructor" << endl; }
            ~A() { cout << "A destructor" << endl; }
        };

        class B : public A {
        public:
            B() { cout << "B constructor" << endl; }
            ~B() { cout << "B destructor" << endl; }
        };

    1) The first line in B's constructor should be a call to A's constructor (I assume the compiler automatically inserts it). Also, the last line in B's destructor will be a call to A's destructor (the compiler does this again). Why was it built this way?
    2) When I say A *a = new B(); the compiler creates a new B object and checks whether A is a base class of B; if it is, it allows 'a' to point to the newly created object. I guess that is why we don't need any virtual constructors. (With help from @Tyler McHenry and @Konrad Rudolph.)
    3) When I write delete a; the compiler sees that a is a pointer of type A*, so it calls A's destructor, leading to a problem which is solved by making A's destructor virtual. As user Little Bobby Tables pointed out to me, all destructors have the same name destroy() in memory, so we can implement virtual destructors, and now the call is made to B's destructor and all is well in C++ land.
    Please comment.
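    To see point 3 concretely, a small sketch - the same two classes, with the virtual-destructor fix applied to A:

        #include <iostream>
        using namespace std;

        class A {
        public:
            A()  { cout << "A constructor\n"; }
            virtual ~A() { cout << "A destructor\n"; }  // virtual: delete via A* also runs ~B()
        };

        class B : public A {
        public:
            B()  { cout << "B constructor\n"; }
            ~B() { cout << "B destructor\n"; }
        };

        int main() {
            A *a = new B();  // prints "A constructor", then "B constructor"
            delete a;        // prints "B destructor", then "A destructor"
        }

    Without the virtual keyword, delete a; would print only "A destructor", silently skipping B's cleanup.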

    Read the article

  • Use Django ORM as standalone [closed]

    - by KeyboardInterrupt
    Possible duplicates: Use only some parts of Django? / Using only the DB part of Django. I want to use the Django ORM as standalone. Despite an hour of searching Google, I'm still left with several questions: Does it require me to set up my Python project with a settings.py, a /myApp/ directory, and a models.py file? Can I create a new models.py and run syncdb to have it automatically set up the tables and relationships, or can I only use models from existing Django projects? There seem to be a lot of questions regarding PYTHONPATH. If you're not calling existing models, is this needed? I guess the easiest thing would be for someone to just post a basic template or walkthrough of the process, clarifying the organization of the files, e.g.:

        db/
            __init__.py
            settings.py
            myScript.py
            orm/
                __init__.py
                models.py

    And the basic essentials:

        # settings.py
        from django.conf import settings
        settings.configure(
            DATABASE_ENGINE = "postgresql_psycopg2",
            DATABASE_HOST = "localhost",
            DATABASE_NAME = "dbName",
            DATABASE_USER = "user",
            DATABASE_PASSWORD = "pass",
            DATABASE_PORT = "5432"
        )

        # orm/models.py
        # ...

        # myScript.py
        # import models..

    And whether you need to run something like django-admin.py inspectdb. (Oh, I'm running Windows, if that changes anything regarding command-line arguments.)
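    A minimal standalone sketch in the old settings style shown above (Django 1.x era; the Book model is just an example, not from an existing project):

        # myScript.py
        from django.conf import settings
        settings.configure(
            DATABASE_ENGINE = "postgresql_psycopg2",
            DATABASE_HOST = "localhost",
            DATABASE_NAME = "dbName",
            DATABASE_USER = "user",
            DATABASE_PASSWORD = "pass",
            DATABASE_PORT = "5432",
            INSTALLED_APPS = ['orm'],        # so syncdb sees orm/models.py
        )

        from django.core.management import call_command
        call_command('syncdb', interactive=False)  # creates the tables, like manage.py syncdb

        from orm.models import Book          # hypothetical model defined in orm/models.py
        Book.objects.create(title="example")

    For the imports to work, the db/ directory has to be on PYTHONPATH (or be the working directory) - which is what most of those PYTHONPATH questions are about. inspectdb is only needed when generating models from tables that already exist.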

    Read the article

  • Query broke down and left me stranded in the woods

    - by user1290323
    I am trying to execute a query that deletes all files from the images table that do not exist in the filter table. I am skipping the 3,500 latest files in the database, so as to sort of "trim" the table back to 3,500 + "X" amount of records in the filter table. The filter table holds markers for the file, as well as the file id used in the images table. The code will run on a cron job. My code:

        $sql = mysql_query("SELECT * FROM `images` ORDER BY `id` DESC") or die(mysql_error());
        while($row = mysql_fetch_array($sql)){
            $id = $row['id'];
            $file = $row['url'];
            $getId = mysql_query("SELECT `id` FROM `filter` WHERE `img_id` = '".$id."'") or die(mysql_error());
            if(mysql_num_rows($getId) == 0){
                $IdQue[] = $id;
                $FileQue[] = $file;
            }
        }
        for($i=3500; $i<$x; $i++){
            mysql_query("DELETE FROM `images` WHERE id='".$IdQue[$i]."' LIMIT 1") or die("line 18".mysql_error());
            unlink($FileQue[$i]) or die("file Not deleted");
        }
        echo ($i-3500)." files deleted.";

    Output: 0 files deleted.
    Database contents: images table: 10,000 rows; filters table: 63 rows; rows in the filters table that contain an images table id: 63.
    Execution time of the PHP script: 4 seconds +/- 0.5 second.
    Relevant DB structure:

        TABLE: images
            id
            url
            etc...

        TABLE: filter
            id
            img_id (CONTAINS ID FROM images table)
            etc...
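    One thing jumps out: $x is never assigned, so the for loop runs zero times - hence "0 files deleted"; looping up to count($IdQue) would at least execute it. For the database side, a hedged set-based alternative in one statement (MySQL multi-table DELETE; the derived table works around MySQL's restriction on selecting from the table being deleted from) - removing the files on disk would still need the unlink() loop:

        DELETE images FROM images
        LEFT JOIN filter ON filter.img_id = images.id
        WHERE filter.id IS NULL
          AND images.id NOT IN (
              SELECT id FROM (
                  SELECT i.id FROM images i
                  LEFT JOIN filter f ON f.img_id = i.id
                  WHERE f.id IS NULL
                  ORDER BY i.id DESC
                  LIMIT 3500
              ) AS newest_unfiltered
          );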

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well.

    Deterministic (things that aren't going to change during gameplay):
    - Attributes (Strength, Agility, etc.) and their descriptions
    - Skills and their descriptions and requirements
    - Races, Factions, Equipment, etc.
    - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):
    - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
    - Player inventory, cash, experience, level
    - Player quest states
    - Player FactionRelationships
    - ...and so on.

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db                  Nondet model/db
        ______________                __________________________
        |Attributes  |                |PlayerAttributeModifiers|
        |------------|                |------------------------|
        |Id          |                |Id                      |
        |Name        |                |AttributeId             |
        |Description |                |SourceId                |
        --------------                |Value                   |
                                      --------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with their own database? With distinct models/dbs it seems like this will get really complicated and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • ASP.Net forms authentication - multiple providers

    - by Chris Klepeis
    I have an ASP.NET 4.0 application, and within it is a folder called "Forum", set up as a sub-application in IIS 7. This forum package implements a custom provider for .NET membership. The forum is running in .NET 3.5. I'd like to set up the main site so that when users log in, it logs them into both my site and the forum site. The main site and the forum have separate .NET membership tables. How can I specify which provider to use with FormsAuthentication? Right now I have:

        FormsAuthentication.SetAuthCookie(...);

    This, however, just uses my default provider and does nothing with the provider for the forum. I tried setting the forum app and my web app to have the same cookie name, as well as setting the machineKey on each:

        <machineKey validationKey="AutoGenerate" validation="SHA1" />

    No dice. I googled and didn't really come up with any example of how to use multiple providers like I want to. I updated my web.config to have both providers, but this is useless if I cannot specify in my code which one to use.
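    A hedged sketch of addressing a named provider directly via System.Web.Security (the provider name is an example standing in for whatever web.config registers). Note also that validationKey="AutoGenerate" gives each application its own key; for the forms cookie to validate in both apps, the machineKey has to be identical and explicitly specified:

        // inside the login action
        MembershipProvider main  = Membership.Provider;                    // the default provider
        MembershipProvider forum = Membership.Providers["ForumMembership"]; // example name

        if (main.ValidateUser(userName, password) &&
            forum.ValidateUser(userName, password))   // registers the login with the forum's provider too
        {
            FormsAuthentication.SetAuthCookie(userName, false);
        }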

    Read the article

  • Stored Procedure Does Not Fire Last Command

    - by jp2code
    On our SQL Server (version 10.0.1600), I have a stored procedure that I wrote. It is not throwing any errors, and it is returning the correct values after making the insert in the database. However, the last command, spSendEventNotificationEmail (which sends out email notifications), is not being run. I can run spSendEventNotificationEmail manually using the same data, and the notifications show up, so I know it works. Is there something wrong with how I call it in my stored procedure?

        [dbo].[spUpdateRequest](@packetID int, @statusID int output, @empID int, @mtf nVarChar(50))
        AS
        BEGIN
            -- SET NOCOUNT ON added to prevent extra result sets from
            -- interfering with SELECT statements.
            SET NOCOUNT ON;

            DECLARE @id int
            SET @id = -1

            -- Insert statements for procedure here
            SELECT A.ID, PacketID, StatusID
            INTO #act
            FROM Action A
            JOIN Request R ON (R.ID = A.RequestID)
            WHERE (PacketID = @packetID) AND (StatusID = @statusID)

            IF ((SELECT COUNT(ID) FROM #act) = 0)
            BEGIN
                -- this statusID has not been entered. Continue
                SELECT ID, MTF INTO #req FROM Request WHERE PacketID = @packetID

                WHILE (0 < (SELECT COUNT(ID) FROM #req))
                BEGIN
                    SELECT TOP 1 @id = ID FROM #req

                    INSERT INTO Action (RequestID, StatusID, EmpID, DateStamp)
                    VALUES (@id, @statusID, @empID, GETDATE())

                    IF ((@mtf IS NOT NULL) AND (0 < LEN(RTRIM(@mtf))))
                    BEGIN
                        UPDATE Request SET MTF = @mtf WHERE ID = @id
                    END

                    DELETE #req WHERE ID = @id
                END
                DROP TABLE #req

                SELECT @id = @@IDENTITY, @statusID = StatusID FROM Action
                SELECT TOP 1 @statusID = ID FROM Status WHERE (@statusID < ID) AND (-1 < Sequence)

                EXEC spSendEventNotificationEmail @packetID, @statusID, 'http:\\cpweb:8100\NextStep.aspx'
            END
            ELSE
            BEGIN
                SET @statusID = -1
            END

            DROP TABLE #act
        END

    Idea of how the data tables are connected: (diagram omitted)

    Read the article

  • Group / User based security. Table / SQL question

    - by Brett
    Hi, I'm setting up a group / user based security system. I have 4 tables, as follows: user, groups, group_user_mappings, acl, where acl is the mapping between an item_id and either a group or a user. The way I've done the acl table, I have 3 columns of note (actually a 4th as an auto-id, but that is irrelevant):

    - col 1: item_id (item to access)
    - col 2: user_id (user that is allowed to access)
    - col 3: group_id (group that is allowed to access)

    So for example:

        item1, peter, ,
        item2, , group1
        item3, jane, ,

    So the acl will give access to either a user or a group. Any one line in the acl table will have either an item - user mapping or an item - group mapping. If I want a query that returns all objects a user has access to, I think I need a SQL query with a UNION, because I need 2 separate queries that join like:

        item - acl - group - user
        item - acl - user

    This I guess will work OK. Is this how it's normally done? Am I doing this the right way? Seems a little messy. I was thinking I could get around it by creating a single user group for each person, so I only ever deal with groups in my SQL, but this seems a little messy as well.
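    A hedged sketch of that UNION, assuming an items table alongside the four listed (the user id is bound into both halves):

        -- items granted to the user directly
        SELECT i.*
        FROM items i
        JOIN acl a ON a.item_id = i.id
        WHERE a.user_id = :user_id
        UNION
        -- items granted to any group the user belongs to
        SELECT i.*
        FROM items i
        JOIN acl a ON a.item_id = i.id
        JOIN group_user_mappings m ON m.group_id = a.group_id
        WHERE m.user_id = :user_id;

    Using UNION rather than UNION ALL also removes the duplicates that appear when a user is granted an item both directly and via a group.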

    Read the article

  • What's the most efficient query?

    - by Aaron Carlino
    I have a table named Projects that has the following relationships: has many Contributions, has many Payments. In my result set, I need the following aggregate values:

    - Number of unique contributors (DonorID on the Contribution table)
    - Total contributed (SUM of Amount on the Contribution table)
    - Total paid (SUM of PaymentAmount on the Payment table)

    Because there are so many aggregate functions and multiple joins, it gets messy to use standard aggregate functions with the GROUP BY clause. I also need the ability to sort and filter these fields. So I've come up with two options.

    Using subqueries:

        SELECT Project.ID AS PROJECT_ID,
            (SELECT SUM(PaymentAmount) FROM Payment WHERE ProjectID = PROJECT_ID) AS TotalPaidBack,
            (SELECT COUNT(DISTINCT DonorID) FROM Contribution WHERE RecipientID = PROJECT_ID) AS ContributorCount,
            (SELECT SUM(Amount) FROM Contribution WHERE RecipientID = PROJECT_ID) AS TotalReceived
        FROM Project;

    Using a temporary table:

        DROP TABLE IF EXISTS Project_Temp;

        CREATE TEMPORARY TABLE Project_Temp (
            project_id INT NOT NULL,
            total_payments INT,
            total_donors INT,
            total_received INT,
            PRIMARY KEY(project_id)
        ) ENGINE=MEMORY;

        INSERT INTO Project_Temp (project_id, total_payments)
        SELECT `Project`.ID, IFNULL(SUM(PaymentAmount), 0)
        FROM `Project`
        LEFT JOIN `Payment` ON ProjectID = `Project`.ID
        GROUP BY 1;

        INSERT INTO Project_Temp (project_id, total_donors, total_received)
        SELECT `Project`.ID, IFNULL(COUNT(DISTINCT DonorID), 0), IFNULL(SUM(Amount), 0)
        FROM `Project`
        LEFT JOIN `Contribution` ON RecipientID = `Project`.ID
        GROUP BY 1
        ON DUPLICATE KEY UPDATE total_donors = VALUES(total_donors),
                                total_received = VALUES(total_received);

        SELECT * FROM Project_Temp;

    Tests for both are pretty comparable, in the 0.7 - 0.8 second range with 1,000 rows. But I'm really concerned about scalability, and I don't want to have to re-engineer everything as my tables grow. What's the best approach?
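    A third shape worth benchmarking, sketched under the same schema: pre-aggregate each child table once in a derived table, then join. This avoids running three correlated subqueries per project row and needs no temp table, while staying sortable and filterable:

        SELECT p.ID,
               IFNULL(pay.total_paid, 0)     AS TotalPaidBack,
               IFNULL(con.donor_count, 0)    AS ContributorCount,
               IFNULL(con.total_received, 0) AS TotalReceived
        FROM Project p
        LEFT JOIN (SELECT ProjectID, SUM(PaymentAmount) AS total_paid
                   FROM Payment GROUP BY ProjectID) pay ON pay.ProjectID = p.ID
        LEFT JOIN (SELECT RecipientID,
                          COUNT(DISTINCT DonorID) AS donor_count,
                          SUM(Amount) AS total_received
                   FROM Contribution GROUP BY RecipientID) con ON con.RecipientID = p.ID;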

    Read the article

  • Ruby on Rails one-to-many relationship

    - by fenec
    I would like to model a betting system relationship using the power of Rails. So let's start by doing something very simple: modelling the relationship from a user to a bet. I would like to have a model Bet with 2 primary keys. Here are my migrations:

        class CreateBets < ActiveRecord::Migration
          def self.up
            create_table :bets do |t|
              t.integer :user_1_id
              t.integer :user_2_id
              t.integer :amount
              t.timestamps
            end
          end

          def self.down
            drop_table :bets
          end
        end

        class CreateUsers < ActiveRecord::Migration
          def self.up
            create_table :users do |t|
              t.string :name
              t.timestamps
            end
          end

          def self.down
            drop_table :users
          end
        end

    The models:

        class Bet < ActiveRecord::Base
          belongs_to :user_1, :class_name => :User
          belongs_to :user_2, :class_name => :User
        end

        class User < ActiveRecord::Base
          has_many :bets, :foreign_key => :user_1
          has_many :bets, :foreign_key => :user_2
        end

    When I test the relationships in the console, I get an error:

        u1 = User.create :name => "aa"
        u2 = User.create :name => "bb"
        b = Bet.create(:user_1 => u1, :user_2 => u2)   # ***** error *****

    QUESTIONS:
    1. How do I define the relationships between these tables correctly?
    2. Are there any conventions for naming the attributes (e.g. user_1_id...)?

    Thank you for your help.
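    A hedged sketch of the usual Rails 2.x way to express this (the association names are examples): class_name takes a string, the foreign keys must name the actual *_id columns, and the two has_many associations need distinct names to coexist:

        class Bet < ActiveRecord::Base
          belongs_to :user_1, :class_name => 'User'
          belongs_to :user_2, :class_name => 'User'
        end

        class User < ActiveRecord::Base
          has_many :bets_proposed, :class_name => 'Bet', :foreign_key => 'user_1_id'
          has_many :bets_accepted, :class_name => 'Bet', :foreign_key => 'user_2_id'
        end

        # usage
        u1 = User.create :name => "aa"
        u2 = User.create :name => "bb"
        b  = Bet.create(:user_1 => u1, :user_2 => u2, :amount => 10)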

    Read the article

  • Setting up relations/mappings for a SQLAlchemy many-to-many database

    - by Brent Ramerth
    I'm new to SQLAlchemy and relational databases, and I'm trying to set up a model for an annotated lexicon. I want to support an arbitrary number of key-value annotations for the words, which can be added or removed at runtime. Since there will be a lot of repetition in the names of the keys, I don't want to use this solution directly, although the code is similar. My design has word objects and property objects. The words and properties are stored in separate tables, with a property_values table that links the two. Here's the code:

        from sqlalchemy import Column, Integer, String, Table, create_engine
        from sqlalchemy import MetaData, ForeignKey
        from sqlalchemy.orm import relation, mapper, sessionmaker
        from sqlalchemy.ext.declarative import declarative_base

        engine = create_engine('sqlite:///test.db', echo=True)
        meta = MetaData(bind=engine)

        property_values = Table('property_values', meta,
            Column('word_id', Integer, ForeignKey('words.id')),
            Column('property_id', Integer, ForeignKey('properties.id')),
            Column('value', String(20))
        )

        words = Table('words', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20)),
            Column('freq', Integer)
        )

        properties = Table('properties', meta,
            Column('id', Integer, primary_key=True),
            Column('name', String(20), nullable=False, unique=True)
        )

        meta.create_all()

        class Word(object):
            def __init__(self, name, freq=1):
                self.name = name
                self.freq = freq

        class Property(object):
            def __init__(self, name):
                self.name = name

        mapper(Property, properties)

    Now I'd like to be able to do the following:

        Session = sessionmaker(bind=engine)
        s = Session()
        word = Word('foo', 42)
        word['bar'] = 'yes'  # or word.bar = 'yes' ?
        s.add(word)
        s.commit()

    Ideally this should add 1|foo|42 to the words table, add 1|bar to the properties table, and add 1|1|yes to the property_values table. However, I don't have the right mappings and relations in place to make this happen. I get the sense from reading the documentation at http://www.sqlalchemy.org/docs/05/mappers.html#association-pattern that I want to use an association proxy or something of that sort here, but the syntax is unclear to me. I experimented with this:

        mapper(Word, words, properties={
            'properties': relation(Property, secondary=property_values)
        })

    but this mapper only fills in the foreign key values, and I need to fill in the other value as well. Any assistance would be greatly appreciated.
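    A hedged sketch of the association-object plus association-proxy pattern from those docs, under two assumptions: property_values is mapped with an explicit composite primary key (it declares none of its own), and each new key creates a fresh Property row (a real version would look up existing properties by name first):

        from sqlalchemy.orm.collections import attribute_mapped_collection
        from sqlalchemy.ext.associationproxy import association_proxy

        class PropertyValue(object):
            def __init__(self, property, value):
                self.property = property
                self.value = value

        # the link table has no PK of its own, so name one explicitly
        mapper(PropertyValue, property_values,
               primary_key=[property_values.c.word_id, property_values.c.property_id],
               properties={'property': relation(Property)})

        # expose each Word's links as a dict keyed on the property's name
        PropertyValue.key = property(lambda self: self.property.name)
        mapper(Word, words, properties={
            '_property_values': relation(PropertyValue,
                                         collection_class=attribute_mapped_collection('key'))
        })

        # word.properties['bar'] = 'yes' creates the association row;
        # for dict-based collections the creator receives (key, value)
        Word.properties = association_proxy(
            '_property_values', 'value',
            creator=lambda key, value: PropertyValue(Property(key), value))

    With this in place, word.properties['bar'] = 'yes' on a flush should insert the Word, the Property, and the linking property_values row with its value column filled in.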

    Read the article

  • Organizing Eager Queries in an ObjectContext

    - by Nix
    I am messing around with Entity Framework 3.5 SP1 and I am trying to find a cleaner way to do the below. I have an EF model, I am adding some eager-loaded entity queries, and I want them all to reside in an "Eager" property on the context. We originally were just changing the entity set name, but it seems a lot cleaner to use a property and keep the entity set name intact. Example:

        Context
        - EntityType
        - AnotherType
        - Eager (all of these would have .Includes to pull in all assoc. tables)
            - EntityType
            - AnotherType

    Currently I am using composition, but I feel like there is an easier way to do what I want.

        namespace Entities {
            public partial class TestObjectContext {
                public EagerExtensions Eager { get; set; }

                public TestObjectContext() {
                    Eager = new EagerExtensions(this);
                }
            }

            public partial class EagerExtensions {
                TestObjectContext context;

                public EagerExtensions(TestObjectContext _context) {
                    context = _context;
                }

                public IQueryable<TestEntity> TestEntity {
                    get {
                        return context.TestEntity
                            .Include("TestEntityType")
                            .Include("Test.Attached.AttachedType")
                            .AsQueryable();
                    }
                }
            }
        }

        public class Tester {
            public void ShowHowIWantIt() {
                TestObjectContext context = new TestObjectContext();
                var query = from a in context.Eager.TestEntity
                            select a;
            }
        }

    Read the article

  • Cache data in SQL CE database

    - by user93422
    Background
    I have an SQL CE database that is constantly updated (every second). I have a (web) application that allows a user to look at the data in real time. At some point the user can click a "take a snapshot" button, which opens the snapshot in a different window. On that form, "print" and "download" buttons will either generate a page for printing or stream the data as a CSV file - but the same data snapshot has to be used, i.e. I can't go back to the DB to get the latest data for that.

    Details
    - The SQL CE database is exposed through a WCF web service.
    - A snapshot consists of up to 500 records, 10 columns each.
    - An expiration time of 2 hours on the snapshot is sufficient.
    - It is a low-traffic application, so I don't expect more than a few (5) connections at the same time.
    - Losing a snapshot is not a big deal; the user can simply generate a new one.
    - The database is accessed by a self-hosted WCF web service using LINQ to SQL.
    - The web site is ASP.NET MVC hosted on UltiDev Cassini.
    - The database and web site will most likely be on the same box when deployed.
    - The entire app is intranet bound.

    Problem
    I need to cache the snapshot of the data at the moment the user pressed the "take a snapshot" button, so that I can use the same data to generate the print page or a file for download.

    Solution 1: Each time there is a need to generate a snapshot, create a table in the database. Since there are no temp tables in SQL CE, I will need to clean it up myself.

    Solution 2: Cache the snapshot in memory on either the DB server or the web server.

    Question: Is there anything wrong with the proposed solutions? Any different solution suggestions?
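    For solution 2 on the web-server side, a minimal sketch using the ASP.NET cache with the 2-hour absolute expiration mentioned above (the key scheme and RowDto row type are examples, not from the original):

        using System;
        using System.Collections.Generic;
        using System.Web;
        using System.Web.Caching;

        public static class SnapshotCache
        {
            // store the snapshot under a fresh id; return the id for later requests
            public static string Store(List<RowDto> snapshotRows)
            {
                string id = Guid.NewGuid().ToString();
                HttpRuntime.Cache.Insert(
                    "snapshot:" + id,
                    snapshotRows,
                    null,
                    DateTime.UtcNow.AddHours(2),   // absolute 2-hour expiration
                    Cache.NoSlidingExpiration);
                return id;
            }

            // print/download actions fetch the same data by id (null if expired)
            public static List<RowDto> Get(string id)
            {
                return (List<RowDto>)HttpRuntime.Cache["snapshot:" + id];
            }
        }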

    Read the article

  • Reading/Writing DataTables to and from an OleDb Database LINQ

    - by jsmith
    My current project is to take information from an OleDb database and .CSV files and place it all into a larger OleDb database. I have currently read in all the information I need from both the .CSV files and the OleDb database into DataTables. Where it is getting hairy is writing all of the information back to another OleDb database. Right now my current method is to do something like this:

        OleDbTransaction myTransaction = null;

        try
        {
            OleDbConnection conn = new OleDbConnection("PROVIDER=Microsoft.Jet.OLEDB.4.0;" +
                "Data Source=" + Database);
            conn.Open();

            OleDbCommand command = conn.CreateCommand();
            string strSQL;
            command.Transaction = myTransaction;

            strSQL = "Insert into TABLE " +
                "(FirstName, LastName) values ('" + FirstName + "', '" + LastName + "')";

            command.CommandType = CommandType.Text;
            command.CommandText = strSQL;
            command.ExecuteNonQuery();

            conn.Close();
        }
        catch (Exception)
        {
            // If invalid data is entered, roll back the transaction
            myTransaction.Rollback();
        }

    Of course, this is very basic and I'm using an SQL command to commit my transactions to a connection. My problem is I could do this, but I have about 200 fields that need to be inserted over several tables. I'm willing to do the legwork if that's the only way to go. But I feel like there is an easier method. Is there anything in LINQ that could help me out with this?
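    LINQ to SQL only targets SQL Server, so for an OleDb destination the closest built-in shortcut is probably a data adapter, which can write a whole DataTable back per destination table. A hedged sketch (table and column names are examples; rows must be in the Added state for Update() to emit INSERTs - rows that came from a Fill() can be marked with DataRow.SetAdded()):

        using System.Data;
        using System.Data.OleDb;

        using (var conn = new OleDbConnection(
            "PROVIDER=Microsoft.Jet.OLEDB.4.0;Data Source=" + Database))
        {
            conn.Open();

            // the SELECT defines the column mapping for this one destination table
            var adapter = new OleDbDataAdapter("SELECT FirstName, LastName FROM People", conn);
            var builder = new OleDbCommandBuilder(adapter);  // generates the InsertCommand

            adapter.Update(peopleTable);  // inserts every Added row of the DataTable
        }

    One adapter per destination table still means several blocks, but the 200 fields stop being hand-built INSERT strings (and the string concatenation above is also an injection risk that the adapter's parameterized commands avoid).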

    Read the article

  • Lookup table size reduction

    - by Ryan
    Hello: I have an application in which I have to store a couple of million integers in a lookup table. Obviously I cannot store that amount of data in memory, and in my requirements I am very limited: I have to store the data in an embedded system, so space is tight. I would like to ask about recommended methods for reducing the size of the lookup table. I cannot use function approximation such as neural networks; the values need to be in a table. The range of the integers is not known at the moment. When I say integers, I mean 32-bit values. Basically the idea is to use some compression method to reduce the amount of memory, but without losing much precision. This thing needs to run in hardware, so the computation overhead cannot be very high. In my algorithm I have to access one value of the table, do some operations with it, and afterwards update the value. In the end, what I should have is a function to which I pass an index and get a value back, and another function to write a value into the table. I found one method called tile coding, http://www.cs.ualberta.ca/~sutton/book/8/node6.html, which is based on several lookup tables. Does anyone know any other method? Thanks.

    Read the article

  • How can I make this SQL query more efficient? PHP.

    - by Alan Grant
    Hi all, I have a system whereby a user can view categories that they've subscribed to individually, and also those that are available in the region they belong in by default. So, the tables are as follows: Categories, UsersCategories, RegionsCategories. I'm querying the db for all the categories within their region, and also all the individual categories that they've subscribed to. My query is as follows:

        SELECT *
        FROM (categories c)
        LEFT JOIN users_categories uc ON uc.category_id = c.id
        LEFT JOIN regions_categories rc ON rc.category_id = c.id
        WHERE (rc.region_id = ? OR uc.user_id = ?)

    At least I believe that's the query; I'm creating it using Cake's ORM layer, so the exact one is:

        $conditions = array(
            array(
                "OR" => array(
                    'RegionsCategories.region_id' => $region_id,
                    'UsersCategories.user_id' => $user_id
                )
            ));
        $this->find('all', $conditions);

    This turns out to be incredibly slow (sometimes around 20 seconds or so; each table has around 5,000 rows). Is my design at fault here? How can I retrieve both the user's individual categories and those within their region, all in one query, without it taking ages? Thanks!
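    An OR spanning columns from two different joined tables often defeats index use. A hedged rewrite as a UNION of two separately indexable queries (assuming indexes on regions_categories(region_id, category_id) and users_categories(user_id, category_id)):

        SELECT c.*
        FROM categories c
        JOIN regions_categories rc ON rc.category_id = c.id
        WHERE rc.region_id = ?
        UNION
        SELECT c.*
        FROM categories c
        JOIN users_categories uc ON uc.category_id = c.id
        WHERE uc.user_id = ?;

    The UNION also removes the duplicate rows the original LEFT JOIN pair can produce when a category matches on both the region and the subscription.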

    Read the article

  • Best practice - logging events (general) and changes (database)

    - by b0x0rz
    Need help with logging all activities on a site as well as database changes.

    Requirements:
    - should be in the database
    - should be easily searchable by initiator (user name / session id), event (activity type) and event parameters

    I can think of a database design, but it either involves a lot of tables (one per event), so I can log each of the parameters of an event in a separate field, OR it involves one table with generic fields (7 int/numeric and 7 text types) and logs everything in one table, with an event type field determining which parameter got written where (and hoping that I don't need more than 7 fields of a certain type, or 8 or 9 or whatever number I choose)...

    Example entries (the usual things):

    - [username] login failed @datetime
    - [username] login successful @datetime
    - [username] changed password, estimated security of password [low/ok/high/perfect] @datetime
    - [username] clicked result [result number] [result id] after searching for [search string] and got [number of results] @datetime
    - [username] changed profile name from [old name] to [new name] @datetime
    - [username] verified name with [credit card type] credit card @datetime
    - database table [table name] purged of old entries @datetime via automated process
    - etc...

    So, has anyone dealt with this before? Any best practices / links you can share? I've seen it done with the generic solution mentioned above, but somehow that goes against what I learned from database design. As you can see, the sheer number of events that need to be trackable (each user will be able to see this info) is giving me headaches, BUT I do LOVE the one-event-per-table solution more than the generic one. Any thoughts?

    edit: Also, is there maybe an authoritative list of such (likely) events somewhere? Thnx.

    Stack Overflow says: the question you're asking appears subjective and is likely to be closed. My answer: it probably is subjective, but it is directly related to an issue I have with designing a database / writing my code, so I'd welcome any help. Also, I tried narrowing the ideas down to 2, so hopefully one of these will prevail, unless there already is an established solution for these kinds of things.
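    A common middle ground between the two designs, sketched only as an assumption-based example: one narrow event table plus one name/value parameter table. The schema stays fixed (no per-event tables, no "7 of each type" guesswork) and remains searchable by initiator, event type and parameters:

        CREATE TABLE event (
            id         INT NOT NULL PRIMARY KEY,  -- auto-increment in your DB's dialect
            user_name  VARCHAR(50),               -- initiator
            session_id VARCHAR(50),
            event_type VARCHAR(50) NOT NULL,      -- 'login_failed', 'profile_name_changed', ...
            created_at DATETIME NOT NULL
        );

        CREATE TABLE event_param (
            event_id INT NOT NULL REFERENCES event(id),
            name     VARCHAR(50) NOT NULL,        -- 'old_name', 'search_string', ...
            value    VARCHAR(255) NOT NULL,
            PRIMARY KEY (event_id, name)
        );

    "[username] changed profile name from [old name] to [new name]" then becomes one event row plus two event_param rows (old_name, new_name), and searching by a parameter is a join instead of guessing which generic column was used.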

    Read the article

  • LinqToSQL not updating database

    - by codegarten
    Hi. I created a database and dbml in Visual Studio 2010 using its wizards. Everything was working fine until I checked the tables' data (also in Visual Studio Server Explorer) and none of my updates were there.

        using (var context = new CenasDataContext())
        {
            context.Log = Console.Out;
            context.Cenas.InsertOnSubmit(new Cena() { id = 1 });
            context.SubmitChanges();
        }

    This is the code I am using to update my database. At this point my database has one table with one field (PK) named ID. This is the log from the execution (I printed the context log to the console):

        INSERT INTO [dbo].Cenas VALUES (@p0)
        -- @p0: Input Int (Size = -1; Prec = 0; Scale = 0) [1]
        -- Context: SqlProvider(Sql2008) Model: AttributedMetaModel Build: 4.0.30319.1

    The problem I'm having is that these updates are not persisted in the database. I mean, when I query my database (Visual Studio Server Explorer - New Query) I see the table is empty, every time. I am using a SQL Server database file (.mdf).

    Read the article

  • Performance impact when using XML columns in a table with MS SQL 2008

    - by Sam Dahan
    I am using a simple table with 6 columns, 3 of which are of XML type, not schema-constrained. When the table reaches a size of around 120,000 or 150,000 rows, I see a dramatic performance cost on any query against the table. For comparison, I have another table, which grows in size at about the same rate but contains only scalar types (int, datetime, a few float columns). That table performs perfectly fine even after 200,000 rows. And by the way, I am not using XQuery on the XML columns; I am only using regular SQL query statements. Some specifics: both tables contain a DateTime field called SampleTime, and a statement like (it's in a stored procedure, but I show you the actual statement)

        SELECT MAX(sampleTime) SampleTime
        FROM dbo.MyRecords
        WHERE PlacementID = @somenumber

    takes 0 seconds on the table without XML columns, and anything from 13 to 20 seconds on the table with XML columns. That depends on which drive I set my database on; at the moment it sits on a different spindle (not C:) and it takes 13 seconds. Has anyone seen this behavior before, or have any hint at what I am doing wrong? I tried this with SQL Server 2008 Express and the full-blown SQL Server 2008; that made no difference. Oh, one last detail: I am doing this from a C# application, .NET 3.5, using SqlConnection, SqlReader, etc. I'd appreciate some insight into that, thanks! Sam
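    One thing worth ruling out first (an assumption - the table's indexes aren't listed): without an index on (PlacementID, SampleTime), that MAX query has to scan the table, and scanning rows that drag large XML columns along is far more expensive than scanning the narrow scalar table. A covering index lets it seek instead:

        CREATE NONCLUSTERED INDEX IX_MyRecords_Placement_SampleTime
            ON dbo.MyRecords (PlacementID, SampleTime);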

    Read the article

  • How to reference using Entity Framework and Asp.Net Mvc 2

    - by Picflight
    Tables CREATE TABLE [dbo].[Users]( [UserId] [int] IDENTITY(1,1) NOT NULL, [UserName] [varchar](50) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [Email] [varchar](255) COLLATE SQL_Latin1_General_CP1_CI_AS NULL, [BirthDate] [smalldatetime] NULL, [CountryId] [int] NULL, CONSTRAINT [PK_Users] PRIMARY KEY CLUSTERED ([UserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] CREATE TABLE [dbo].[TeamMember]( [UserId] [int] NOT NULL, [TeamMemberUserId] [int] NOT NULL, [CreateDate] [smalldatetime] NOT NULL CONSTRAINT [DF_TeamMember_CreateDate] DEFAULT (getdate()), CONSTRAINT [PK_TeamMember] PRIMARY KEY CLUSTERED ([UserId] ASC, [TeamMemberUserId] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY] ) ON [PRIMARY] dbo.TeamMember has both UserId and TeamMemberUserId as the index key. My goal is to show a list of Users on my View. In the list I want to flag, or highlight the Users that are Team Members of the LoggedIn user. My ViewModel public class UserViewModel { public int UserId { get; private set; } public string UserName { get; private set; } public bool HighLight { get; private set; } public UserViewModel(Users users, bool highlight) { this.UserId = users.UserId; this.UserName = users.UserName; this.HighLight = highlight; } } View <%@ Page Title="" Language="C#" MasterPageFile="~/Views/Shared/Site.Master" Inherits="System.Web.Mvc.ViewPage<MvcPaging.IPagedList<MyProject.Mvc.Models.UserViewModel>>" %> <% foreach (var item in Model) { %> <%= item.UserId %> <%= item.UserName %> <%if (item.HighLight) { %> Team Member <% } else { %> Not Team Member <% } %> How do I toggle the TeamMember or Not If I add dbo.TeamMember to the EDM, there are no relationships on this table, how will I wire it to Users object? So I am comparing the LoggedIn UserId with this list(SELECT TeamMemberUserId FROM TeamMember WHERE UserId = @LoggedInUserId)

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording. This table currently has 500,000 records. I need to add a lot of new inputs that relates to activityrecording table. The relation of activityrecording with these new input fields is 1 to 0,1. So, what's going to happen on screen is when user fills the ActivityRecording data, he will then be taken to a new page and this page will show a form based on the user's input (from a dropdown named service) in activityrecording. There will 6 different kinds of form (each form will have 7-8 inputs which includes textareas of size 5kb, textboxes and checkboxes). So, for one activityrecording user will fill one out of 6 forms. There are two ways I know (there could be more), I can design the data structure: Add all the inputs from all these 6 forms into the activityrecording table. So, columns belonging to 5 of these forms will be null in this table, only columns belonging to one of the forms will have values The other way would be add 6 new tables (one for each form) and add 6 foreign key columns to activityrecording table. So, out of 6 foreign keys, 5 will be null and one will actually point to a table Which approach is a better data structure design? Please take into consideration that number of rows in this table are 500,000 and are expected to grow at a faster rate now.

    Read the article

  • Moving information between databases

    - by williamjones
    I'm on Postgres, and have two databases on the same machine, and I'd like to move some data from database Source to database Dest.

    Database Source:
    - Table User has a primary key
    - Table Comments has a primary key
    - Table UserComments is a join table with two foreign keys for User and Comments

    Dest looks just like Source in structure, but already has information in the User and Comments tables that needs to be retained. I'm thinking I'll probably have to do this in a few steps.

    Step 1: dump Source using the Postgres COPY command.
    Step 2: in Dest, add a temporary second_key column to both User and Comments, and a new SecondUserComments join table.
    Step 3: import the dumped file into Dest using COPY again, with the keys input into the second_key columns.
    Step 4: add rows to UserComments in Dest based on the contents of SecondUserComments, only using the real primary keys this time. Could this be done with a SQL command, or would I need a script?
    Step 5: delete the SecondUserComments table and remove the second_key columns.

    Does this sound like the best way to do this, or is there a better way I'm overlooking?
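    Step 4 can be a single INSERT ... SELECT; a hedged sketch assuming the obvious names (id for the real primary keys, and SecondUserComments still holding Source's old key pair):

        INSERT INTO user_comments (user_id, comment_id)
        SELECT u.id, c.id
        FROM second_user_comments s
        JOIN users    u ON u.second_key = s.user_id
        JOIN comments c ON c.second_key = s.comment_id;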

    Read the article

  • World's Most Challenging MySQL SQL Query (at least I think so...)

    - by keruilin
    Whoever answers this question can claim credit for solving the world's most challenging SQL query, according to yours truly. Working with 3 tables: users, badges, awards. Relationships: user has many awards; award belongs to user; badge has many awards; award belongs to badge. So badge_id and user_id are foreign keys in the awards table. The business logic at work here is that every time a user wins a badge, he/she receives it as an award. A user can be awarded the same badge multiple times. Each badge is assigned a designated point value (point_value is a field in the badges table). For example, BadgeA can be worth 500 points, BadgeB 1000 points, and so on. As a further example, let's say UserX won BadgeA 10 times and BadgeB 5 times. BadgeA being worth 500 points, and BadgeB being worth 1000 points, UserX has accumulated a total of 10,000 points ((10 x 500) + (5 x 1000)). The end game here is to return a list of the top 50 users who have accumulated the most badge points. Can you do it?
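    Claiming no prize, a hedged attempt under the schema as described (a name column on users is assumed for display):

        SELECT u.id, u.name, SUM(b.point_value) AS total_points
        FROM users u
        JOIN awards a ON a.user_id = u.id
        JOIN badges b ON b.id = a.badge_id
        GROUP BY u.id, u.name
        ORDER BY total_points DESC
        LIMIT 50;

    Each award row joins to its badge's point_value, so a user awarded the same badge multiple times is counted once per award - matching the UserX example (10 x 500 + 5 x 1000 = 10,000).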

    Read the article

  • SQL Server INSERT ... SELECT Statement won't parse

    - by Jim Barnett
    I am getting the following error message with SQL Server 2005:

        Msg 120, Level 15, State 1, Procedure usp_AttributeActivitiesForDateRange, Line 18
        The select list for the INSERT statement contains fewer items than the insert list.
        The number of SELECT values must match the number of INSERT columns.

    I have copied and pasted the select list and insert list into Excel and verified there are the same number of items in each list. Both tables have an additional primary key field which is not listed in either the insert statement or the select list. I am not sure if that is relevant, but suspicious it may be. Here is the source for my stored procedure:

        CREATE PROCEDURE [dbo].[usp_AttributeActivitiesForDateRange]
        (
            @dtmFrom DATETIME,
            @dtmTo DATETIME
        )
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @dtmToWithTime DATETIME
            SET @dtmToWithTime = DATEADD(hh, 23, DATEADD(mi, 59, DATEADD(s, 59, @dtmTo)));

            -- Get uncontested DC activities
            INSERT INTO AttributedDoubleClickActivities
                ([Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto],
                 [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID],
                 [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword],
                 [Local-User-ID], [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue],
                 [Transaction-ID], [Other-Data], Ordinal, [Click-Time], [Event-ID])
            SELECT [Time], [User-ID], [IP], [Advertiser-ID], [Buy-ID], [Ad-ID], [Ad-Jumpto],
                   [Creative-ID], [Creative-Version], [Creative-Size-ID], [Site-ID], [Page-ID],
                   [Country-ID], [State Province], [Areacode], [OS-ID], [Domain-ID], [Keyword],
                   [Local-User-ID] [Activity-Type], [Activity-Sub-Type], [Quantity], [Revenue],
                   [Transaction-ID], [Other-Data], REPLACE(Ordinal, '?', '') AS Ordinal,
                   [Click-Time], [Event-ID]
            FROM Activity_Reports
            WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
              AND REPLACE(Ordinal, '?', '') IN (SELECT REPLACE(Ordinal, '?', '')
                                                FROM Activity_Reports
                                                WHERE [Time] BETWEEN @dtmFrom AND @dtmTo
                                                EXCEPT
                                                SELECT CONVERT(VARCHAR, TripID)
                                                FROM VisualSciencesActivities
                                                WHERE [Time] BETWEEN @dtmFrom AND @dtmTo);
        END
        GO

    Read the article
