Search Results

Search found 49224 results on 1969 pages for 'database mirroring sql se'.


  • Database storage for high sample rate data in web app

    - by Jim
    I've got multiple sensors feeding data to my web app. Each channel is 5 samples per second, and the data gets uploaded bundled together in 1-minute JSON messages (containing 300 samples). The data will be graphed using flot at multiple zoom levels, from 1 day down to 1 minute. I'm using Amazon SimpleDB and I'm currently storing the data in the 1-minute chunks in which I receive it. This works well for high zoom levels, but for full days there will simply be too many rows to retrieve. The idea I've currently got is that every hour I can crawl through the data, collect 300 samples covering the last hour, and store them in an hourly domain (table, if you like). Does this sound like a reasonable solution? How have others implemented the same sort of system?

    Read the article

  • Arrays in database tables and normalization

    - by Ivan Petrov
    Hi! Is it smart to keep arrays in table columns? More precisely, I am thinking of the following schema, which to my understanding violates normalization:

        create table Permissions(
            GroupID int not null default(-1),
            CategoryID int not null default(-1),
            Permissions varchar(max) not null default(''),
            constraint PK_GroupCategory primary key clustered(GroupID, CategoryID)
        );

    and this:

        create table Permissions(
            GroupID int not null default(-1),
            CategoryID int not null default(-1),
            PermissionID int not null default(-1),
            constraint PK_GroupCategory primary key clustered(GroupID, CategoryID)
        );

    UPD: Forgot to mention that, in the scope of this concrete question, we will consider that "fetch rows that have permission X" won't be performed; instead, all the lookups will be made by GroupID and CategoryID only. Thoughts? Thanks in advance!
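
    For reference, a minimal sketch of the fully normalized shape of the second schema, in which a group/category pair can hold several permissions because PermissionID is part of the key; the table name here is invented for illustration:

        -- Sketch only: normalized variant where one (group, category) pair
        -- can carry many permission rows.
        create table GroupCategoryPermission(
            GroupID int not null,
            CategoryID int not null,
            PermissionID int not null,
            constraint PK_GroupCategoryPermission
                primary key clustered(GroupID, CategoryID, PermissionID)
        );

        -- Lookups by GroupID and CategoryID alone still hit the leading
        -- columns of the clustered key:
        -- select PermissionID from GroupCategoryPermission
        --  where GroupID = @g and CategoryID = @c;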

    Read the article

  • Advice Please: SQL Server Identity vs Unique Identifier keys when using Entity Framework

    - by c.batt
    I'm in the process of designing a fairly complex system. One of our primary concerns is supporting SQL Server peer-to-peer replication. The idea is to support several geographically separated nodes. A secondary concern has been using a modern ORM in the middle tier. Our first choice has always been Entity Framework, mainly because the developers like to work with it. (They love the LINQ support.) So here's the problem: with peer-to-peer replication in mind, I settled on using uniqueidentifier with a default value of newsequentialid() for the primary key of every table. This seemed to provide a good balance between avoiding key collisions and reducing index fragmentation. However, it turns out that the current version of Entity Framework has a very strange limitation: if an entity's key column is a uniqueidentifier (GUID), then it cannot be configured to use the default value (newsequentialid()) provided by the database. The application layer must generate the GUID and populate the key value. So here's the debate:

    1. Abandon Entity Framework and use another ORM:
       - use NHibernate and give up LINQ support
       - use linq2sql and give up future support (not to mention get bound to SQL Server on the DB side)
    2. Abandon GUIDs and go with another PK strategy
    3. Devise a method to generate sequential GUIDs (COMBs?) at the application layer

    I'm leaning towards option 1 with linq2sql (my developers really like linq2[stuff]) and option 3. That's mainly because I'm somewhat ignorant of alternate key strategies that support the replication scheme we're aiming for while also keeping things sane from a developer's perspective. Any insight or opinion would be greatly appreciated.
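
    For reference, a minimal sketch of the database-side setup being described, i.e. the sequential default that the current EF version reportedly cannot consume; the table is just illustrative:

        -- Sketch: uniqueidentifier primary key defaulted by the database.
        -- Option 3 above would instead have the application layer generate a
        -- comparable (COMB-style) value and insert it explicitly.
        create table Orders(
            OrderID uniqueidentifier not null
                constraint DF_Orders_OrderID default newsequentialid(),
            CreatedOn datetime not null default getdate(),
            constraint PK_Orders primary key clustered(OrderID)
        );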

    Read the article

  • Database design: Using hundreds of fields for little values

    - by user964260
    I'm planning to develop a PHP web app that will mainly be used by registered users (sessions). While thinking about the DB design, I was contemplating that, in order to give the best user experience possible, there would be lots of options for the user to activate, deactivate, specify, etc. For example:

    - Options for each layout element, dialog box, dashboard, grid, etc.
    - Color, size, stay visible/invisible, don't ask again, show every time, advanced mode, simple mode, etc.

    This would end up as 100s of fields, ranging from simple yes/no flags to 1-to-N values, for each user. So, is having a field for each of these options the way to go? Or how do CRMs, CMSs and other web apps store lots of 1-2 character values? Do they group them in text fields separated by a special character and then "explode" them into an array at runtime? Thank you.
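
    One common alternative to hundreds of per-option columns is a key/value preferences table; a minimal sketch (table and column names are invented for illustration):

        -- Sketch: one row per user per setting instead of one column per setting.
        create table UserPreference(
            UserID int not null,
            PrefKey varchar(100) not null,   -- e.g. 'dashboard.grid.color'
            PrefValue varchar(50) not null,  -- short values such as 'Y', 'N', '3', 'blue'
            constraint PK_UserPreference primary key clustered(UserID, PrefKey)
        );

        -- All of one user's settings come back in a single indexed lookup:
        -- select PrefKey, PrefValue from UserPreference where UserID = @user;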

    Read the article

  • php java in memory database

    - by msaif
    I need to load data into memory as an array in PHP. But in PHP, if I write $array = array("1","2"); in test.php, then this $array variable is initialized on every request: if test.php is requested 100 times by clicking the browser refresh button 100 times, the $array assignment executes 100 times. I need the $array variable to be built only one time, on the first request, and subsequent requests to test.php must not rebuild it but simply reuse that memory location. How can I do that in PHP? In a Java servlet this is easy: put the one-time initialization in the servlet's init() lifecycle method, and subsequent requests only run service(), which keeps using that memory. All I want is to initialize $array once and reuse that memory location on subsequent requests in PHP. Is there any possibility of doing this in PHP?

    Read the article

  • SQL Server 2008 spatial index and CPU utilization with MapGuide Open Source 2.1

    - by Antonio de la Peña
    I have a SQL Server table with hundreds of thousands of geometry type parcels. I have made indexes on them, trying different combinations of density and objects-per-cell settings. So far I'm settling for LOW, LOW, MEDIUM, MEDIUM and 16 objects per cell, and I made a SP that sets the bounding box according to the extents of the entities in the table. There is an incredible performance boost: queries that took almost minutes without an index now take less than a second, and it gets faster when the zoom is closer and thus fewer objects are displayed. Yet the CPU utilization gets to 100% when querying for features, even when the queries themselves are fast. I'm worried this will not fly in a production environment. I am using MapGuide Open Source 2.1 for this project, but I am positive the CPU load is caused by SQL Server. I wonder if my indexes are set properly. I haven't found any clear documentation on how to properly set them up. Every article I've read basically says "it depends..." but nothing specific. Do you have any recommendations for me, including books, articles? Thank you.
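
    For reference, a minimal sketch of the kind of spatial index being described, using the grid densities and cells-per-object setting from the question; the table, column and bounding-box values are placeholders:

        -- Sketch: geometry grid index with LOW/LOW/MEDIUM/MEDIUM densities and
        -- 16 cells per object; BOUNDING_BOX should come from the real parcel extents.
        create spatial index SIX_Parcels_Shape
            on dbo.Parcels(Shape)
            using geometry_grid
            with (
                bounding_box = (xmin = 0, ymin = 0, xmax = 500000, ymax = 500000),
                grids = (low, low, medium, medium),
                cells_per_object = 16
            );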

    Read the article

  • Two separate tm structs mirroring each other

    - by BSchlinker
    Here is my current situation:

    - I have two tm structs, both set to the current time.
    - I make a change to the hour in one of the structs.
    - The change is occurring in the other struct, magically...

    How do I prevent this from occurring? I need to be able to compare and know the number of seconds between two different times -- the current time and a time in the future. I've been using difftime and mktime to determine this. I recognize that I don't technically need two tm structs (the other struct could just be a time_t loaded with raw time), but I'm still interested in understanding why this occurs.

        void Tracker::monitor(char* buffer){
            // time handling
            time_t systemtime, scheduletime, currenttime;
            struct tm * dispatchtime;
            struct tm * uiuctime;
            double remainingtime;

            // let's get two structs operating with current time
            dispatchtime = dispatchtime_tm();
            uiuctime = uiuctime_tm();

            // set the scheduled parameters
            dispatchtime->tm_hour = 5;
            dispatchtime->tm_min = 05;
            dispatchtime->tm_sec = 14;
            uiuctime->tm_hour = 0;

            // both of these will now print the same time! (0:05:14)
            // what's linking them??

            // print the scheduled time
            printf ("Current Time : %2d:%02d:%02d\n", uiuctime->tm_hour, uiuctime->tm_min, uiuctime->tm_sec);
            printf ("Scheduled Time : %2d:%02d:%02d\n", dispatchtime->tm_hour, dispatchtime->tm_min, dispatchtime->tm_sec);
        }

        struct tm* Tracker::uiuctime_tm(){
            time_t uiucTime;
            struct tm *ts_uiuc;

            // give currentTime the current time
            time(&uiucTime);

            // change the time zone to UIUC
            putenv("TZ=CST6CDT");
            tzset();

            // get the localtime for the tz selected
            ts_uiuc = localtime(&uiucTime);

            // set back the current timezone
            unsetenv("TZ");
            tzset();

            // set back our results
            return ts_uiuc;
        }

        struct tm* Tracker::dispatchtime_tm(){
            time_t currentTime;
            struct tm *ts_dispatch;

            // give currentTime the current time
            time(&currentTime);

            // get the localtime for the tz selected
            ts_dispatch = localtime(&currentTime);

            // set back our results
            return ts_dispatch;
        }

    Read the article

  • How to use a variable to specify filegroup in MSSQL

    - by gt
    I want to alter a table to add a constraint during an upgrade on an MSSQL database. This table is normally indexed on a filegroup called 'MY_INDEX', but may also be on a database without this filegroup. In that case I want the indexing to be done on the 'PRIMARY' filegroup. I tried the following code to achieve this:

        DECLARE @fgName AS VARCHAR(10)

        SET @fgName = CASE WHEN EXISTS(SELECT groupname FROM sysfilegroups
                                       WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        ALTER TABLE [dbo].[mytable]
        ADD CONSTRAINT [PK_mytable] PRIMARY KEY
        (
            [myGuid] ASC
        ) ON @fgName -- fails: 'incorrect syntax'

    However, the last line fails, as it appears a filegroup cannot be specified by variable. Is this possible?
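
    One workaround often suggested for this limitation is to build the DDL as a string and run it with dynamic SQL; a minimal sketch reusing the names from the question:

        -- Sketch: a filegroup cannot be a variable in DDL, so splice the
        -- (quoted) filegroup name into a dynamically executed statement.
        DECLARE @fgName sysname, @sql nvarchar(max)

        SET @fgName = CASE WHEN EXISTS(SELECT groupname FROM sysfilegroups
                                       WHERE groupname = 'MY_INDEX')
                           THEN QUOTENAME('MY_INDEX')
                           ELSE QUOTENAME('PRIMARY')
                      END

        SET @sql = N'ALTER TABLE [dbo].[mytable] '
                 + N'ADD CONSTRAINT [PK_mytable] PRIMARY KEY ([myGuid] ASC) ON ' + @fgName

        EXEC sp_executesql @sql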

    Read the article

  • SQL Server Import table keeping default values

    - by Chrissi
    I am importing a table from one database to another in SQL Server 2008 by right-clicking the target database and choosing Tasks > Import Data... When I import the table I get the column names and types and all the data fine, but I lose the primary key, the identity specifications and all the default values that were set in the source table. So now I have to set all the default values for each column again manually. Is there any way to bring the default values over with the import, or even afterwards with a query? I am VERY new to this and flailing in the dark, so forgive me if this is a really stupid question...
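
    If the defaults are only needed after the fact, they can be read off the source table's catalog views and re-created on the imported copy; a rough sketch, with dbo.MyTable standing in for the real table name:

        -- Sketch: list the default constraints on the source table (run in the
        -- source database); each row can be turned into an ALTER TABLE on the target.
        SELECT dc.name       AS constraint_name,
               c.name        AS column_name,
               dc.definition AS default_definition
        FROM   sys.default_constraints AS dc
        JOIN   sys.columns AS c
               ON  c.object_id = dc.parent_object_id
               AND c.column_id = dc.parent_column_id
        WHERE  dc.parent_object_id = OBJECT_ID('dbo.MyTable');

        -- e.g.:
        -- ALTER TABLE dbo.MyTable ADD CONSTRAINT DF_MyTable_Col DEFAULT (0) FOR Col;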

    Read the article

  • database structure

    - by jindalsyogesh
    I have a table named ActivityRecording. This table currently has 500,000 records. I need to add a lot of new inputs that relate to the ActivityRecording table. The relation of ActivityRecording to these new input fields is 1 to 0..1. What's going to happen on screen is that when the user fills in the ActivityRecording data, he will then be taken to a new page, and this page will show a form based on the user's input (from a dropdown named service) in ActivityRecording. There will be 6 different kinds of form (each form will have 7-8 inputs, including textareas of around 5 KB, textboxes and checkboxes). So, for one ActivityRecording, the user will fill in one out of the 6 forms. There are two ways I know of (there could be more) to design the data structure:

    1. Add all the inputs from all 6 forms into the ActivityRecording table. Columns belonging to 5 of the forms will be null in this table; only the columns belonging to one form will have values.
    2. Add 6 new tables (one for each form) and add 6 foreign key columns to the ActivityRecording table. Out of the 6 foreign keys, 5 will be null and one will actually point to a table.

    Which approach is the better data structure design? Please take into consideration that the number of rows in this table is 500,000 and is expected to grow at a faster rate now.
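
    For concreteness, a minimal sketch of the second option as described (one table per form, each referenced from ActivityRecording through a nullable foreign key); the form-specific column names are invented:

        -- Sketch of option 2: only one of the six foreign key columns
        -- on ActivityRecording will be non-null for any given row.
        create table ServiceFormA(
            ServiceFormAID int identity primary key,
            Notes varchar(max) null,
            IsApproved bit not null default(0)
            -- ... the remaining form-specific inputs ...
        );

        alter table ActivityRecording
            add ServiceFormAID int null
                constraint FK_ActivityRecording_ServiceFormA
                references ServiceFormA(ServiceFormAID);

        -- ...repeated for ServiceFormB through ServiceFormF.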

    Read the article

  • Entity Framework 4 and SQL Compact 4: How to generate database?

    - by David Veeneman
    I am developing an app with Entity Framework 4 and SQL Compact 4, using a Model First approach. I have created my EDM, and now I want to generate a SQL Compact 4.0 database to act as a data store for the model. I bring up the Generate Database Wizard and click the New Connection button to create a connection for the generated file. The Choose Data Source dialog appears, but SQL Compact 4.0 is not listed among the available data sources. I am running VS 2010 SP1 (beta) and I have installed the VS 2010 Tools for SQL Compact 4.0. I can create a SQL Compact 4.0 data connection from the Server Explorer; it is only in the Generate Database Wizard that the 4.0 option doesn't appear. BTW, my SQL Compact 4.0 installation does include System.Data.SqlServerCe.Entity.dll, so I should have the SQL Compact components I need. Am I doing something incorrectly, or is this a bug? Does anyone have a fix or a workaround? Thanks for your help.

    Read the article

  • SQL statement supposed to return 2 distinct rows, but only 1 is returned

    - by jello
    I have an SQL statement that is supposed to return 2 rows: the first with psychological_id = 1, and the second with psychological_id = 2. Here is the SQL statement:

        select * from psychological where patient_id = 12 and symptom = 'delire';

    But with the following code, with which I populate an array list with what is supposed to be 2 different rows, two rows exist, but with the same values: those of the second row.

        OneSymptomClass oneSymp = new OneSymptomClass();
        ArrayList oneSympAll = new ArrayList();

        string connStrArrayList = "Data Source=.\\SQLEXPRESS;AttachDbFilename=|DataDirectory|\\PatientMonitoringDatabase.mdf; " +
                                  "Initial Catalog=PatientMonitoringDatabase; " +
                                  "Integrated Security=True";

        string queryStrArrayList = "select * from psychological where patient_id = " + patientID.patient_id +
                                   " and symptom = '" + SymptomComboBoxes[tag].SelectedItem + "';";

        using (var conn = new SqlConnection(connStrArrayList))
        using (var cmd = new SqlCommand(queryStrArrayList, conn))
        {
            conn.Open();
            using (SqlDataReader rdr = cmd.ExecuteReader())
            {
                while (rdr.Read())
                {
                    oneSymp.psychological_id = Convert.ToInt32(rdr["psychological_id"]);
                    oneSymp.patient_history_date_psy = (DateTime)rdr["patient_history_date_psy"];
                    oneSymp.strength = Convert.ToInt32(rdr["strength"]);
                    oneSymp.psy_start_date = (DateTime)rdr["psy_start_date"];
                    oneSymp.psy_end_date = (DateTime)rdr["psy_end_date"];
                    oneSympAll.Add(oneSymp);
                }
            }
            conn.Close();
        }

        OneSymptomClass testSymp = oneSympAll[0] as OneSymptomClass;
        MessageBox.Show(testSymp.psychological_id.ToString());

    The message box outputs "2", while it's supposed to output "1". Anyone got an idea what's going on?

    Read the article

  • 'Out of Memory' exception in SQL Server 2005 XML column

    - by Raghuraman
    Hi all, I am developing a Windows Forms application and am using a SQL Server 2005 database as my backend. I have an XML column in my database. I am using an UltraWinGrid control in my application. I obtain the XML of the dataset which is bound to my UltraWinGrid control and pass that as a parameter value to the stored procedure, where I insert this value into the XML column I specified. The columns in my grid are dynamic, so there can be any number of columns in the grid. I got an 'out of memory' exception in the dataset.GetXml() statement, because of the large number of columns I believe. So what I did was use the dataset.WriteXml() method to store all the XML contents in an XML file, load the XML file into an XmlDocument object, and then pass an XmlNodeReader as the value of the stored procedure parameter. Now, while executing the stored procedure, I am getting the same 'out of memory' exception. How can I resolve this issue?

    Read the article

  • How to implement Auto_Increment per User, on the same table?

    - by Jonas
    I would like to have multiple users sharing the same tables in the database, but with one auto_increment value per user. I will use an embedded database, JavaDB, and as far as I know it doesn't support this functionality. How can I implement it? Should I implement a trigger on inserts that looks up the user's last inserted row and then adds one, or is there a better alternative? Or is it better to implement this in the application code? Or is this just a bad idea? I think this is easier to maintain than creating new tables for every user. Example:

        table

        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        | 1  | 1           | 2       | 3454 |
        | 2  | 2           | 2       | 6567 |
        | 3  | 1           | 3       | 6788 |
        | 4  | 3           | 2       | 1133 |
        | 5  | 4           | 2       | 4534 |
        | 6  | 2           | 3       | 4366 |
        | 7  | 3           | 3       | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 3

        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        | 3  | 1           | 3       | 6788 |
        | 6  | 2           | 3       | 4366 |
        | 7  | 3           | 3       | 7887 |
        +----+-------------+---------+------+

        SELECT * FROM table WHERE USER_ID = 2

        +----+-------------+---------+------+
        | ID | ID_PER_USER | USER_ID | DATA |
        +----+-------------+---------+------+
        | 1  | 1           | 2       | 3454 |
        | 2  | 2           | 2       | 6567 |
        | 4  | 3           | 2       | 1133 |
        | 5  | 4           | 2       | 4534 |
        +----+-------------+---------+------+
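
    A minimal sketch of the insert-time approach in plain SQL (a per-user MAX + 1 computed in the insert itself); "user_rows" stands in for the real table name, and under concurrent inserts for the same user this needs a transaction plus either stronger locking or a retry on a unique-key violation:

        -- Sketch: compute the next per-user sequence value at insert time.
        INSERT INTO user_rows (ID_PER_USER, USER_ID, DATA)
        SELECT COALESCE(MAX(ID_PER_USER), 0) + 1, 3, 7887
        FROM   user_rows
        WHERE  USER_ID = 3;

        -- A unique constraint keeps the per-user numbering honest:
        -- ALTER TABLE user_rows ADD CONSTRAINT uq_user_seq UNIQUE (USER_ID, ID_PER_USER);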

    Read the article

  • Rails: Auto-Detecting Database Adapter

    - by Dex
    The new version of the ar-extensions gem requires that you load the appropriate adapter yourself. On my development side I use MySQL; however, Heroku uses PostgreSQL. For example, on my development side I need to do this:

        require 'ar-extensions/adapters/mysql'
        require 'ar-extensions/import/mysql'

    How can I auto-detect which adapter to use?

    Read the article

  • Synchronizing in SQL Replication works when manually syncing, but not automatically

    - by Dominic Zukiewicz
    I'm using SQL Server 2005 to create a replicated copy of the main databases, so that the reports can point to the replicated copy instead of locking out our main databases. I have set up the 3 databases as publications and then 3 subscribers, moving the transactions over to the subscribers -- instantaneously, I hope! What seems to be happening is that, when using the "Insert Tracer" function, replication from publisher to distributor takes < 2 seconds, but replicating to the subscribers can take over 7 minutes (and these are local databases on a SAN). This could be for 2 reasons:

    1. The SQL statements used to query the database are obtaining locks which are stopping the transactions from updating the subscribers.
    2. The subscribers are just too busy for replication to apply the changes.

    What troubles me more is that although the Replication Monitor / Insert Tracer show these statistics, if you use "View Subscription Details" and then click Start, it will sync within seconds. My goal would be to have the data syncing (ideally) continuously, or every minute; perhaps I should reduce the batch size of the transactions? What am I doing wrong? [Note that the -Continuous flag is set!]

    Read the article

  • Ruby on Rails and database associations

    - by Marco
    Hi to all, I'm new to the Ruby world, and there is something unclear to me about defining associations between models. The question is: where is the association saved? For example, if I create a Customer model by executing:

        generate model Customer name:string age:integer

    and then I create an Order model:

        generate model Order description:text quantity:integer

    and then I set the association in the following way:

        class Customer < ActiveRecord::Base
          has_many :orders
        end

        class Order < ActiveRecord::Base
          belongs_to :customer
        end

    I think something is missing here, for example the foreign key between the two entities. How does it handle the associations created with the keywords "has_many" and "belongs_to"? Thanks

    Read the article

  • Database structure for versioning and multiple languages

    - by phobia
    Hi, I'm wondering how best to solve the issue of content existing in multiple versions and multiple languages. An image of my current structure can be seen here: http://i46.tinypic.com/72fx3k.png Each piece of content can only have one active version in each language, and that is the part I'm not sure how best to model. Right now I have an active column on the contentversions table, which means that for each change of active version I have to run an update to set active=false on all versions and then an update to set active=true for the piece of content in question. Any thoughts? :)
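
    One way to keep the "one active version per content per language" rule in the database itself is a filtered unique index; a small sketch in SQL Server 2008+ syntax (the question doesn't name the DBMS, and the table/column names below just follow the description):

        -- Sketch: at most one row per (content, language) may be active.
        create unique index UX_ContentVersions_Active
            on dbo.ContentVersions(ContentID, LanguageID)
            where active = 1;

        -- Switching versions then stays a two-statement update inside one transaction:
        -- UPDATE dbo.ContentVersions SET active = 0 WHERE ContentID = @c AND LanguageID = @l;
        -- UPDATE dbo.ContentVersions SET active = 1 WHERE ContentVersionID = @newVersion;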

    Read the article

  • DataSet v/s Database

    - by Hemanshu Bhojak
    While designing applications, it is very good practice to have all the business logic in one place. So why, then, do we sometimes put business logic in stored procs? Can we fetch all the data from the DB, store it in a DataSet and then process it there? What would the performance of the app be in this scenario?

    Read the article
