Search Results

Search found 63598 results on 2544 pages for 'sql add on'.

Page 680 of 2544

  • How to reuse results with a schema for end of day stock-data

    - by Vishalrix
    I am creating a database schema to be used for technical analysis like top volume gainers, top price gainers etc. I have checked answers to questions here, like the design question. Having taken the hint from boe100's answer there, I have a schema modeled pretty much on it, thusly:

      Symbol - char 6 //primary
      Date   - date //primary
      Open   - decimal 18, 4
      High   - decimal 18, 4
      Low    - decimal 18, 4
      Close  - decimal 18, 4
      Volume - int

    Right now this table containing End Of Day (EOD) data will be about 3 million rows for 3 years. Later, when I get/need more data, it could be 20 million rows. The front end will be asking requests like "give me the top price gainers on date X over Y days". That request is one of the simpler ones, and as such is not too costly, time-wise, I assume. But a request like "give me top volume gainers for the last 10 days, with the previous 100 days acting as baseline" could prove 10-100 times costlier. The result of such a request would be a float which signifies how many times the volume has grown, etc. One option I have is adding a column for each such result. And if the user asks for volume gain in 10 days over 20 days, that would require another table. The total such tables could easily cross 100, especially if I start using other results as tables, like MACD-10, MACD-100, each of which will require its own column. Is this a feasible solution? Another option is that I keep the results in cached HTML files and present them to the user. I don't have much experience in web development, so to me it looks messy; but I could be wrong (ofc!). Is that an option too? Let me add that I am/will be using mod_perl to present the response to the user, with much of the work on the MySQL database being done using Perl. I would like to have a response time of 1-2 seconds.
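
    One way to avoid materializing a column (or table) per indicator is to compute the ratio on demand. A minimal sketch of the "top volume gainers, last 10 days vs. 100-day baseline" request, assuming the schema above lives in a table named eod (a hypothetical name) and using a fixed anchor date for illustration:

      SELECT recent.Symbol,
             recent.avg_vol / baseline.avg_vol AS volume_gain
      FROM (SELECT Symbol, AVG(Volume) AS avg_vol
            FROM eod
            WHERE Date >  DATE_SUB('2010-04-01', INTERVAL 10 DAY)
              AND Date <= '2010-04-01'
            GROUP BY Symbol) AS recent
      JOIN (SELECT Symbol, AVG(Volume) AS avg_vol
            FROM eod
            WHERE Date >  DATE_SUB('2010-04-01', INTERVAL 110 DAY)
              AND Date <= DATE_SUB('2010-04-01', INTERVAL 10 DAY)
            GROUP BY Symbol) AS baseline ON baseline.Symbol = recent.Symbol
      WHERE baseline.avg_vol > 0          -- avoid division by zero for thinly traded symbols
      ORDER BY volume_gain DESC
      LIMIT 20;

    With a composite primary key on (Symbol, Date), both subqueries become per-symbol range scans, which may well fit a 1-2 second budget at 3-20 million rows; that has to be benchmarked, not assumed.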

  • NULL-keys for key/value table

    - by user72185
    (Using Oracle) I have a table with key/value pairs like this:

      create table MESSAGE_INDEX (
        KEY        VARCHAR2(256)  not null,
        VALUE      VARCHAR2(4000) not null,
        MESSAGE_ID NUMBER         not null
      )

    I now want to find all the messages where key = 'someKey' and value is 'val1', 'val2' or 'val3' - OR value is null, in which case there will be no entry in the table at all. This is to save space; there would be a large number of keys with null values if I stored them all. I think this works:

      SELECT message_id
      FROM message_index idx
      WHERE ((key = 'someKey' AND value IN ('val1', 'val2', 'val3'))
         OR NOT EXISTS (SELECT 1 FROM message_index
                        WHERE key = 'someKey' AND idx.message_id = message_id))

    But it is extremely slow: it takes 8 seconds with 700K records in message_index, and there will be many more records and more search criteria when moving outside of my test environment. The primary key is key, value, message_id:

      add constraint PK_KEY_VALUE primary key (KEY, VALUE, MESSAGE_ID)

    And I added another index for message_id, to speed up searching for missing keys:

      create index IDX_MESSAGE_ID on MESSAGE_INDEX (MESSAGE_ID)

    I will be doing several of these key/value lookups in every search, not just one as shown above. So far I am doing them nested, where the output ids of one level are the input to the next. E.g.:

      SELECT message_id from message_index
      WHERE (key/value compare)
        AND message_id IN (SELECT ... and so on)

    What can I do to speed this up?
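
    One rewrite worth trying (a sketch against the DDL above, not a guaranteed win): drive from the distinct message ids and anti-join the 'someKey' rows once, instead of re-evaluating a correlated NOT EXISTS for every row:

      SELECT DISTINCT m.message_id
      FROM (SELECT DISTINCT message_id FROM message_index) m
      LEFT JOIN message_index k
             ON k.message_id = m.message_id
            AND k.key = 'someKey'
      WHERE k.message_id IS NULL                     -- no 'someKey' entry at all (the "null" case)
         OR k.value IN ('val1', 'val2', 'val3');     -- or one of the wanted values

    Whether this beats the original depends on which index the optimizer picks for the join; comparing the two execution plans is the only way to know.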

  • PostgreSQL, Foreign Keys, Insert speed & Django

    - by Miles
    A few days ago, I ran into an unexpected performance problem with a pretty standard Django setup. For an upcoming feature, we have to regenerate a table hourly, containing about 100k rows of data, 9 MB on disk, 10 MB of indexes according to pgAdmin. The problem is that inserting them by whatever method literally takes ages, up to 3 minutes of 100% disk-busy time. That's not something you want on a production site. It doesn't matter if the inserts were in a transaction, issued via plain INSERT, multi-row INSERT, COPY FROM or even INSERT INTO t1 SELECT * FROM t2. After noticing this isn't Django's fault, I followed a trial-and-error route, and hey, the problem disappeared after dropping all foreign keys! Instead of 3 minutes, the INSERT INTO ... SELECT took less than a second to execute, which isn't too surprising for a table <= 20 MB on disk. What is weird is that PostgreSQL manages to slow down inserts by 180x just by using 3 foreign keys.

    Oh, and disk activity was pure writing, as everything is cached in RAM; only writes go to the disks. It looks like PostgreSQL is working very hard to touch every row in the referred tables, as 3 MB/sec * 180 s is way more data than the 20 MB this new table takes on disk. There was no WAL for the 180 s case; I was testing in psql directly (in Django, add ~50% overhead for WAL logging). I tried @commit_on_success, same slowness, and I had even implemented multi-row insert and COPY FROM with psycopg2. That's another weird thing: how can 10 MB worth of inserts generate 10x 16 MB log segments?

    Table layout: id serial primary, a bunch of int32, and 3 foreign keys to:
      - a small table, 198 rows, 16 kB on disk
      - a large table, 1.2M rows, 59 data + 89 index MB on disk
      - a large table, 2.2M rows, 198 + 210 MB

    So, am I doomed to either drop the foreign keys manually, or use the table in a very un-Django way by defining and saving bla_id x3 and skip using models.ForeignKey? I'd love to hear about some magical antidote / pg setting to fix this.
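
    Given that the regeneration is an hourly, whole-table job, one way to operationalize the "dropping all foreign keys" finding is to drop and re-add the constraints around the bulk load, so each FK is validated in one pass per table rather than once per inserted row. A sketch only; the table and constraint names here are hypothetical:

      BEGIN;
      ALTER TABLE report DROP CONSTRAINT report_small_id_fkey;
      ALTER TABLE report DROP CONSTRAINT report_big1_id_fkey;
      ALTER TABLE report DROP CONSTRAINT report_big2_id_fkey;
      TRUNCATE report;
      INSERT INTO report SELECT * FROM report_staging;
      -- re-adding the constraints validates each one in a single scan
      ALTER TABLE report ADD CONSTRAINT report_small_id_fkey FOREIGN KEY (small_id) REFERENCES small_table (id);
      ALTER TABLE report ADD CONSTRAINT report_big1_id_fkey  FOREIGN KEY (big1_id)  REFERENCES big_table1 (id);
      ALTER TABLE report ADD CONSTRAINT report_big2_id_fkey  FOREIGN KEY (big2_id)  REFERENCES big_table2 (id);
      COMMIT;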

  • Passing in a lambda to a Where statement

    - by sonicblis
    I noticed today that if I do this:

      var items = context.items.Where(i => i.Property < 2);
      items = items.Where(i => i.Property > 4);

    then once I access the items var, it executes only the first line as the data call and then does the second call in memory. However, if I do this:

      var items = context.items.Where(i => i.Property < 2).Where(i => i.Property > 4);

    I get only one expression executed against the context that includes both where statements. I have a host of variables that I want to use to build the expression for the linq lambda, but their presence or absence changes the expression such that I'd have to have a ridiculous number of conditionals to satisfy all cases. I thought I could just add the Where() statements as in my first example above, but that doesn't end up as a single expression that contains all of the criteria. Therefore, I'm trying to create just the lambda itself, as such:

      //bogus syntax
      if (var1 == "something")
          var expression = Expression<Func<item, bool>>(i => i.Property == "Something");
      if (var2 == "somethingElse")
          expression = expression.Where(i => i.Property2 == "SomethingElse");

    And then pass that in to the Where of my context.Items to evaluate. A) Is this right, and B) if so, how do you do it?

  • Using application roles with DataReader

    - by Shahar
    I have an application that should use an application role from the database. I'm trying to make this work with queries that are actually run using SubSonic (2). To do this, I created my own DataProvider, which inherits from SubSonic's SqlDataProvider. It overrides the CreateConnection function and calls sp_setapprole to set the application role after the connection is created. This part works fine, and I'm able to get data using the application role.

    The problem comes when I try to unset the application role. I couldn't find any place in the code where my provider is called after the query is done, so I tried to add my own, by changing SubSonic code. The problem is that SubSonic uses a data reader: it loads data from the data reader, and then closes it. If I unset the application role before the data reader is closed, I get an error saying:

      There is already an open DataReader associated with this Command which must be closed first.

    If I unset the application role after the data reader is closed, I get an error saying:

      ExecuteNonQuery requires an open and available Connection. The connection's current state is closed.

    I can't seem to find a way to close the data reader without closing the connection.
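
    For reference, the documented T-SQL pattern for reverting an application role is cookie-based: sp_unsetapprole only accepts the cookie returned by sp_setapprole, and it has to run on a connection with no reader still open. A sketch (role name and password are placeholders):

      DECLARE @cookie varbinary(8000);
      EXEC sp_setapprole 'MyAppRole', 'MyPassword',
           @fCreateCookie = true, @cookie = @cookie OUTPUT;
      -- ... run queries under the application role ...
      EXEC sp_unsetapprole @cookie;

    So wherever the unset call ends up in the provider, the cookie captured in CreateConnection has to travel with the connection until the reader is fully closed.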

  • insert data to table based on another table C#

    - by user1017315
    I wrote code that takes some values from one table and inserts rows into another table based on those values, and I get this error:

      System.Data.OleDb.OleDbException (0x80040E10): No value given for one or more required parameters.

    Here's the code; I don't know what I've missed:

      string selectedItem = comboBox1.SelectedItem.ToString();
      Codons cdn = new Codons(selectedItem);
      string codon1;
      int index;

      if (this.i != this.counter)
      {
          //take from the DataBase the matching codonsCodon1 to codonsFullName
          codon1 = cdn.GetCodon1();

          //take the serialnumber of the last protein
          string connectionString = "Provider=Microsoft.ACE.OLEDB.12.0;" +
              "Data Source=C:\\Projects_2012\\Project_Noam\\Access\\myProject.accdb";
          OleDbConnection conn = new OleDbConnection(connectionString);
          conn.Open();
          string last = "SELECT proInfoSerialNum FROM tblProInfo WHERE proInfoScienceName = " + this.name;
          OleDbCommand getSerial = new OleDbCommand(last, conn);
          OleDbDataReader dr = getSerial.ExecuteReader();
          dr.Read();
          index = dr.GetInt32(0);

          //add the amino acid to tblOrderAA
          using (OleDbConnection connection = new OleDbConnection(connectionString))
          {
              string insertCommand = "INSERT INTO tblOrderAA(orderAASerialPro, orderAACodon1) " +
                  " values (?, ?)";
              using (OleDbCommand command = new OleDbCommand(insertCommand, connection))
              {
                  connection.Open();
                  command.Parameters.AddWithValue("orderAASerialPro", index);
                  command.Parameters.AddWithValue("orderAACodon1", codon1);
                  command.ExecuteNonQuery();
              }
          }
      }

    EDIT: I put a message box after the line index = dr.GetInt32(0); to see where the problem is, and I get the error before that line; I never see the message box.

  • Why isn't the Cache invalidated after table update using the SqlCacheDependency?

    - by Jason
    I have been trying to get SqlCacheDependency working. I think I have everything set up correctly, but when I update the table, the item in the Cache isn't invalidated. Can you look at my code and see if I am missing anything? I enabled the Service Broker for the Sandbox database. I have placed the following code in the Global.asax file. I also restart IIS to make sure it is called.

      void Application_Start(object sender, EventArgs e)
      {
          SqlDependency.Start(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString);
      }

    I have placed this entry in the web.config file:

      <system.web>
        <caching>
          <sqlCacheDependency enabled="true" pollTime="10000">
            <databases>
              <add name="Sandbox" connectionStringName="SandboxConnectionString"/>
            </databases>
          </sqlCacheDependency>
        </caching>
      </system.web>

    I call this code to put the item into the cache:

      protected void CacheDataSetButton_Click(object sender, EventArgs e)
      {
          using (SqlConnection sqlConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["SandboxConnectionString"].ConnectionString))
          {
              using (SqlCommand sqlCommand = new SqlCommand("SELECT PetID, Name, Breed, Age, Sex, Fixed, Microchipped FROM dbo.Pets", sqlConnection))
              {
                  using (SqlDataAdapter sqlDataAdapter = new SqlDataAdapter(sqlCommand))
                  {
                      DataSet petsDataSet = new DataSet();
                      sqlDataAdapter.Fill(petsDataSet, "Pets");
                      SqlCacheDependency petsSqlCacheDependency = new SqlCacheDependency(sqlCommand);
                      Cache.Insert("Pets", petsDataSet, petsSqlCacheDependency, DateTime.Now.AddSeconds(10), Cache.NoSlidingExpiration);
                  }
              }
          }
      }

    Then I bind the GridView with this code:

      protected void BindGridViewButton_Click(object sender, EventArgs e)
      {
          if (Cache["Pets"] != null)
          {
              GridView1.DataSource = Cache["Pets"] as DataSet;
              GridView1.DataBind();
          }
      }

    Between attempts to DataBind the GridView, I change the table's values expecting it to invalidate the Cache["Pets"] item, but it seems to stay in the Cache indefinitely.
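
    One thing worth double-checking on the SQL Server side (a sketch, assuming the database really is named Sandbox): command-based SqlCacheDependency rides on Service Broker query notifications, so the broker must actually be enabled, which can be verified and forced like this:

      ALTER DATABASE Sandbox SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE;
      SELECT name, is_broker_enabled FROM sys.databases WHERE name = 'Sandbox';

    Note also that the <sqlCacheDependency> web.config section drives the older polling mechanism; the notification path shown in the code above only needs SqlDependency.Start plus a query that follows the notification rules (explicit column list and two-part table names, which the SELECT here already satisfies).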

  • Removing "Using temporary; Using filesort" from this MySQL select+join+group by query

    - by claytontstanley
    I have the following query:

      select t.Chunk as LeftChunk,
             t.ChunkHash as LeftChunkHash,
             q.Chunk as RightChunk,
             q.ChunkHash as RightChunkHash,
             count(t.ChunkHash) as ChunkCount
      from chunksubset as t
      join chunksubset as q on t.ID = q.ID
      group by LeftChunkHash, RightChunkHash

    And the following explain table:

      id  select_type  table    type    possible_keys                key          key_len  ref                      rows    Extra
      1   SIMPLE       subsets  ref     PRIMARY,IDIndex,SubsetIndex  SubsetIndex  767      const                    522014  Using where; Using temporary; Using filesort
      1   SIMPLE       subsets  eq_ref  PRIMARY,IDIndex,SubsetIndex  PRIMARY      771      sotero.subsets.Id,const  1       Using where; Using index
      1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12      Using where
      1   SIMPLE       c        ref     IDIndex                      IDIndex      4        sotero.subsets.Id        12

    Note the "Using temporary; Using filesort". When this query is run, I quickly run out of RAM (presumably because of the temp table), then the HDD kicks in, and the query slows to a halt. I thought it might be an index issue, so I started adding a few that sort of made sense:

      Table   Non_unique  Key_name                   Seq_in_index  Column_name  Collation  Cardinality  Sub_part  Packed  Null  Index_type
      chunks  0           PRIMARY                    1             ChunkId      A          17796190     NULL      NULL          BTREE
      chunks  1           ChunkHashIndex             1             ChunkHash    A          243783       NULL      NULL          BTREE
      chunks  1           IDIndex                    1             Id           A          1483015      NULL      NULL          BTREE
      chunks  1           ChunkIndex                 1             Chunk        A          243783       NULL      NULL          BTREE
      chunks  1           ChunkTypeIndex             1             ChunkType    A          2            NULL      NULL          BTREE
      chunks  1           chunkHashByChunkIDIndex    1             ChunkHash    A          243783       NULL      NULL          BTREE
      chunks  1           chunkHashByChunkIDIndex    2             ChunkId      A          17796190     NULL      NULL          BTREE
      chunks  1           chunkHashByChunkTypeIndex  1             ChunkHash    A          243783       NULL      NULL          BTREE
      chunks  1           chunkHashByChunkTypeIndex  2             ChunkType    A          261708       NULL      NULL          BTREE
      chunks  1           chunkHashByIDIndex         1             ChunkHash    A          243783       NULL      NULL          BTREE
      chunks  1           chunkHashByIDIndex         2             Id           A          17796190     NULL      NULL          BTREE

    But it is still using the temporary table. The db engine is MyISAM. How can I get rid of the "Using temporary; Using filesort" in this query? Just changing to InnoDB without explaining the underlying cause is not a particularly satisfying answer. Besides, if the solution is to just add the proper index, then that's much easier than migrating to another db engine.
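
    A guess, not a tested fix: if chunksubset is a view over chunks joined by Id, a composite index that leads with the join column and covers the grouped and selected columns can sometimes let MySQL resolve the GROUP BY from the index instead of a temporary table:

      ALTER TABLE chunks
          ADD INDEX chunkHashByIdChunk (Id, ChunkHash, Chunk);

    The existing ChunkHash-led indexes don't start with Id, and IDIndex alone doesn't cover ChunkHash/Chunk, so the t.ID = q.ID join forces row lookups before grouping.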

  • Compare values in serialized column in Doctrine with Query Builder

    - by ReynierPM
    I'm building a FormType for a Symfony2 project, but I need a query builder on the field since I need to compare some values with the ones stored in the DB and show the results. This is what I have:

      ....
      ->add('servicio', 'entity', array(
          'mapped' => false,
          'class' => 'ComunBundle:TipoServicio',
          'property' => 'nombre',
          'required' => true,
          'label' => false,
          'expanded' => true,
          'multiple' => true,
          'query_builder' => function (EntityRepository $er) {
              return $er->createQueryBuilder('ts')
                  ->where('ts.tipo_usuario = (:tipo)')
                  ->setParameter('tipo', 1);
          }
      ))
      ....

    But tipo_usuario in the DB table is stored as serialized text, for example:

      record1: value1 | a:1:{i:0;s:1:"1";}
      record2: value2 | a:4:{i:0;s:1:"1";i:1;s:1:"2";i:2;s:1:"3";i:3;s:1:"4";}

    I'll have two different forms (I don't know how to pass the Request to a form). The first one should show only the first record, and the second one the first and second records, for example:

      First form will show:
        checkbox: value1
      Second form will show:
        checkbox: value1
        checkbox: value2

    How can I achieve this? Any help?
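
    For what it's worth, a PHP-serialized array can't be compared with plain equality in DQL at all; the usual (and admittedly fragile) workaround is a LIKE over the serialized text. A raw-SQL sketch, with a hypothetical table name tipo_servicio; the pattern s:1:"1"; is exactly how the string "1" appears inside the serialized value:

      SELECT *
      FROM tipo_servicio
      WHERE tipo_usuario LIKE '%s:1:"1";%';

    In the query_builder closure that would become ->where('ts.tipo_usuario LIKE :tipo') with the pattern as the parameter. Storing the types in a join table instead of a serialized column would turn this filter into an ordinary join.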

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract different types of databases that I'll be needing. On the relational part I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

      CREATE TABLE Contact (
        ID   varchar(15),
        NAME varchar(30)
      );

      CREATE TABLE Address (
        ID         varchar(15),
        CONTACT_ID varchar(15),
        NAME       varchar(50)
      );

    I use code to generate system-specific alphanumeric unique IDs fitting 15 chars, in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above; all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since all is kept in the Identity Map in memory.

    Question: What is a good approach to address this (avoiding unnecessary db round trips)? Some ideas: retrieve the last ID. I can do a call to the database to retrieve the last Id, like:

      SELECT Auto_increment FROM information_schema.tables WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
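
    For reference, each engine has a session-safe way to get the generated key at insert time, which avoids the race in reading information_schema (another session can insert between your INSERT and that SELECT). A sketch, assuming auto-increment/sequence-backed IDs in this scenario; the sequence name is hypothetical:

      -- MySQL: scoped to the current connection
      INSERT INTO Contact (NAME) VALUES ('Ada');
      SELECT LAST_INSERT_ID();

      -- Oracle: fetch the key from the sequence first, then insert with it
      SELECT contact_seq.NEXTVAL FROM dual;

      -- PostgreSQL: return the generated key from the insert itself
      INSERT INTO Contact (NAME) VALUES ('Ada') RETURNING ID;

    Any of these still forces the parent INSERT to run before the children are completed, so the unit of work would flush parents first and patch the CONTACT_IDs inside the same transaction.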

  • Sorting in dataview not happening properly (C# 3.0)

    - by Newbie
    I have the following:

      DataTable dt = new DataTable();
      dt.Columns.Add("col1", typeof(string));
      dt.Columns.Add("col2", typeof(string));

      dt.Rows.Add("1", "r1");
      dt.Rows.Add("1", "r2");
      dt.Rows.Add("1", "r3");
      dt.Rows.Add("1", "r4");
      dt.Rows.Add("2", "r1");
      dt.Rows.Add("2", "r2");
      dt.Rows.Add("2", "r3");
      dt.Rows.Add("2", "r4");
      dt.Rows.Add("3", "r1");
      dt.Rows.Add("3", "r2");
      dt.Rows.Add("3", "r3");
      dt.Rows.Add("3", "r4");
      dt.Rows.Add("1", "r1");
      dt.Rows.Add("1", "r2");

      DataView dv = new DataView(dt);
      dv.Sort = "col1";
      DataTable dResult = dv.Table;

    I am trying to sort the DataTable with the help of a DataView, so that the result will be:

      1 r1
      1 r2
      ...
      2 r1
      2 r2
      ...
      3 r1
      3 r2
      ...

    meaning all 1's first, followed by the 2's and 3's. I even tried dt.DefaultView.Sort = "col1", but no luck. The sorted result is visible only through the DataView, not in the resulting DataTable (dResult). I am using C# 3.0. Please help. Thanks

  • Moving from NHibernate to FluentNHibernate: assembly error (related to versions)?

    - by goober
    Not sure where to start, but I had gotten the most recent version of NHibernate, successfully mapped the most simple of business objects, etc. When trying to move to FluentNHibernate and do the same thing, I got this error message on build:

      System.IO.FileLoadException: Could not load file or assembly 'NHibernate, Version=2.1.0.4000, Culture=neutral, PublicKeyToken=aa95f207798dfdb4' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference.

    Background: I'm new to Hibernate, NHibernate, and FluentNHibernate -- but not to .NET, C#, etc.

    Database: I have a database table called Category:
      - (PK) CategoryID (type: int), unique, auto-incrementing
      - UserID (type: uniqueidentifier) -- given the value of the user Guid in the ASP.NET database
      - Title (type: varchar(50)) -- the title of the category

    Components involved:
      - a SessionProviderClass which creates the mapping to the database
      - a Category class which has all the virtual methods for FluentNHibernate to override
      - a CategoryMap : ClassMap class, which does the fluent mappings for the entity
      - a CategoryRepository class that contains the methods to add and save a category
      - the TestCatAdd.aspx file which uses the CategoryRepository class

    Would be happy to post code for any of those, but I'm not sure that it's necessary, as I think the issue is that somewhere there's a version conflict between what FluentNHibernate references and the NHibernate I have installed from before. Thanks in advance for any help you can give!

  • All symbols after & stripped

    - by user300413
    My query:

      mysql::getInstance()->update('requests',
          array('response' => mysql_real_escape_string($_POST['status'])),
          array('secret' => $_POST['secret']));

    If I want to add a string with an "&" symbol, all symbols after "&" are stripped. Example string:

      !"?;%:?*()_+!@#$%^&*()_+

    In the database I see only:

      !"?;%:?*()_+!@#$%^

    How to fix this? The update function, if anyone needs it:

      function update($table, $updateList, $whereConditions)
      {
          $updateQuery = '';
          foreach ($updateList as $key => $newValue) {
              if (!is_numeric($newValue)) {
                  $newValue = "'" . $newValue . "'";
              }
              if (strlen($updateQuery) == 0) {
                  $updateQuery .= '`' . $key . '` = ' . $newValue;
              } else {
                  $updateQuery .= ', `' . $key . '` = ' . $newValue;
              }
          }
          return $this->query('UPDATE ' . $table . ' SET ' . $updateQuery . $this->buildWhereClause($whereConditions));
      }

  • what's the performance difference between int and varchar for primary keys

    - by user568576
    I need to create a primary key scheme for a system that will need peer-to-peer replication. So I'm planning to combine a unique system ID and a sequential number in some way to come up with unique IDs. I want to make sure I'll never run out of IDs, so I'm thinking about using a varchar field, since I could always add another character if I start running out. But I've read that integers are better optimized for this. So I have some questions:

    1) Are integers really better optimized? And if they are, how much of a performance difference is there between varchars and integers? I'm going to use Firebird for now, but I may switch later, or possibly support multiple DBs. So I'm looking for generalizations, if that's possible.

    2) If integers are significantly better optimized, why is that? And is it likely that varchars will catch up in the future, so eventually it won't matter anyway?

    My varchar keys won't have any meaning, except for the unique system ID part, but I may want to obscure that somehow. Also, I plan to efficiently use all the bits of each character. I don't, for example, plan to code the integer 123 as the character string "123". So I don't think varchars will require more space than integers.

  • trouble connecting to MySql DB (PHP)

    - by user332817
    Hi, I have the following PHP code to connect to my DB:

      <?php
      ob_start();
      $host="localhost";      // Host name
      $username="root";       // Mysql username
      $password="";           // Mysql password
      $db_name="test";        // Database name
      $tbl_name="members";    // Table name

      // Connect to server and select database.
      mysql_connect("$host", "$username", "$password") or die("cannot connect");
      ?>

    However, I get the following errors:

      Warning: mysql_connect() [function.mysql-connect]: [2002] A connection attempt failed because the connected party did not (trying to connect via tcp://localhost:3306) in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

      Warning: mysql_connect() [function.mysql-connect]: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond. in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

      Fatal error: Maximum execution time of 30 seconds exceeded in C:\Program Files (x86)\EasyPHP-5.3.2i\www\checklogin.php on line 11

    I am able to add a DB/tables via phpMyAdmin, but I can't connect using PHP. Here is a screenshot of my phpMyAdmin page: http://img294.imageshack.us/img294/1589/sqls.jpg. Any help would be appreciated; thanks in advance.

  • How To Join Tables from Two Different Contexts with LINQ2SQL?

    - by RSolberg
    I have 2 data contexts in my application (different databases) and need to be able to query a table in context A with a right join on a table in context B. How do I go about doing this in LINQ2SQL?

    Why? We are using a SaaS product for tracking our time, projects, etc., and would like to send new service requests to this product to prevent our team from duplicating data entry.

    Context A: This db stores service request information. It is a third-party DB and we are not able to make changes to the structure of this DB, as it could have unintended non-supportable consequences downstream.

    Context B: This db stores the "log" data of service requests that have been processed. My team and I have full control over this DB's structure, etc. Unprocessed service requests should find their way into this DB, and another process will identify it as not being processed and send the record to the SaaS product.

    This is the query that I am looking to modify. I was able to do a !list.Contains(c.swHDCaseId) initially, but this cannot handle more than 2100 items. Is there a way to add a join to the other context?

      var query = (from c in contextA.Cases
                   where monitoredInboxList.Contains(c.INBOXES.inboxName)
                   select new
                   {
                       //setup fields here...
                   });

  • Return multiple IDs from a function and use the result in a query

    - by NewK
    I have this function that returns me all children of a tree node:

      CREATE OR REPLACE FUNCTION fn_category_get_childs_v2(id_pai integer)
        RETURNS integer[] AS
      $BODY$
      DECLARE
        ids_filhos integer array;
      BEGIN
        SELECT array (
          SELECT category_id FROM category WHERE category_id IN (
            (WITH RECURSIVE parent AS (
              SELECT category_id, parent_id FROM category WHERE category_id = id_pai
              UNION ALL
              SELECT t.category_id, t.parent_id
              FROM parent
              INNER JOIN category t ON parent.category_id = t.parent_id
            )
            SELECT category_id FROM parent WHERE category_id <> id_pai)
          )
        ) INTO ids_filhos;
        RETURN ids_filhos;
      END;

    and I would like to use it in a select statement like this:

      select * from teste1_elements
      where category_id in (select * from fn_category_get_childs_v2(12))

    I've also tried this way, with the same result:

      select * from teste1_elements
      where category_id = any(select * from fn_category_get_childs_v2(12))

    But I get the following error:

      ERROR: operator does not exist: integer = integer[]
      LINE 1: select * from teste1_elements where category_id in (select *...
      HINT: No operator matches the given name and argument type(s). You might need to add explicit type casts.

    The function returns an integer array; is that the problem? SELECT * from fn_category_get_childs_v2(12) retrieves the following array (integer[]):

      '{30,32,34,20,19,18,17,16,15,14}'
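
    The IN (SELECT * FROM fn(...)) form fails because the subquery yields a single row containing an integer[], not a set of integers. Two sketches that compare the scalar against the array's elements, using the names above:

      -- apply ANY directly to the array value
      SELECT * FROM teste1_elements
      WHERE category_id = ANY (fn_category_get_childs_v2(12));

      -- or explicitly unnest the array into rows
      SELECT * FROM teste1_elements
      WHERE category_id IN (SELECT unnest(fn_category_get_childs_v2(12)));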

  • Round date to 10 minutes interval

    - by Peter Lang
    I have a DATE column that I want to round to the next-lower 10-minute interval in a query (see example below). I managed to do it by truncating the seconds and then subtracting the last digit of the minutes.

      WITH test_data AS (
        SELECT TO_DATE('2010-01-01 10:00:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:05:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:09:59', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2010-01-01 10:10:00', 'YYYY-MM-DD HH24:MI:SS') d FROM dual UNION
        SELECT TO_DATE('2099-01-01 10:00:33', 'YYYY-MM-DD HH24:MI:SS') d FROM dual
      )
      -- #end of test-data
      SELECT d, TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
      FROM test_data

    And here is the result:

      01.01.2010 10:00:00    01.01.2010 10:00:00
      01.01.2010 10:05:00    01.01.2010 10:00:00
      01.01.2010 10:09:59    01.01.2010 10:00:00
      01.01.2010 10:10:00    01.01.2010 10:10:00
      01.01.2099 10:00:33    01.01.2099 10:00:00

    Works as expected, but is there a better way?

    EDIT: I was curious about performance, so I did the following test with 500,000 rows and (not really) random dates. I am going to add the results as comments to the provided solutions.

      DECLARE
        t TIMESTAMP := SYSTIMESTAMP;
      BEGIN
        FOR i IN (
          WITH test_data AS (
            SELECT SYSDATE + ROWNUM / 5000 d FROM dual CONNECT BY ROWNUM <= 500000
          )
          SELECT TRUNC(d, 'MI') - MOD(TO_CHAR(d, 'MI'), 10) / (24 * 60)
          FROM test_data
        ) LOOP
          NULL;
        END LOOP;
        dbms_output.put_line( SYSTIMESTAMP - t );
      END;

    This approach took 03.24 s.
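
    One purely arithmetic alternative (a sketch with the same truncation semantics; 144 is the number of 10-minute intervals in a day) avoids the date-to-string-to-number round trip:

      SELECT d, TRUNC(d) + FLOOR((d - TRUNC(d)) * 144) / 144
      FROM test_data;

    Since d - TRUNC(d) is the elapsed fraction of the day, flooring it at 1/144 resolution lands on the previous 10-minute boundary.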

  • Advice on Linq to SQL mapping object design

    - by fearofawhackplanet
    I hope the title and following text are clear; I'm not very familiar with the correct terms, so please correct me if I get anything wrong. I'm using the Linq ORM for the first time and am wondering how to address the following. Say I have two DB tables:

      User     Phone
      ----     -----
      Id       Id
      Name     UserId
               Model

    The Linq code generator produces a bunch of entity classes. I then write my own classes and interfaces which wrap these Linq classes:

      class DatabaseUser : IUser
      {
          public DatabaseUser(User user)
          {
              _user = user;
          }

          public Guid Id
          {
              get { return _user.Id; }
          }
          ... etc
      }

    So far so good. Now it's easy enough to find a user's phones from Phones.Where(p => p.User == user), but surely consumers of the API shouldn't need to be writing their own Linq queries to get at data, so I should wrap this query in a function or property somewhere.

    So the question is: in this example, would you add a Phones property to IUser or not? In other words, should my interface specifically be modelling my database objects (in which case Phones doesn't belong in IUser), or are they actually simply providing a set of functions and properties which are conceptually associated with a User (in which case it does)? There seem to be drawbacks to both views, but I'm wondering if there is a standard approach to the problem. Or just any general words of wisdom you could share. My first thought was to use extension methods, but in fact that doesn't work in this case.

  • How can I pull an image and data from a Database?

    - by user1851377
    I am trying to pull data from a database using C#.net and use a foreach loop to make it visible on a page. Every time I run the code I only get one item showing up, when I know that there are at least 7 items in the DB. I have placed the C# code below:

      SqlConnection oConnection = new SqlConnection(ConfigurationManager.ConnectionStrings["HomeGrownEnergyConnectionString"].ToString());
      string sqlEnergy = "Select * from Product p where p.ProductTypeId=3";
      SqlCommand oCmd = new SqlCommand(sqlEnergy, oConnection);
      DataTable dtenergy = new DataTable();
      SqlDataAdapter oDa = new SqlDataAdapter(oCmd);

      try
      {
          oConnection.Open();
          oDa.Fill(dtenergy);
      }
      catch (Exception ex)
      {
          lblnodata.Text = ex.Message;
          return;
      }
      finally
      {
          oConnection.Close();
      }

      DataTableReader results = dtenergy.CreateDataReader();
      if (results.HasRows)
      {
          results.Read();
          foreach (DataRow result in dtenergy.Rows)
          {
              byte[] imgProd = result["ThumnailLocation"] as byte[];
              ID.Text = result["ProductID"].ToString();
              Name.Text = result["Name"].ToString();
              price.Text = FormatPriceColumn(result["Price"].ToString());
          }
      }

    Here is the code for the asp.net page:

      <div>
        <asp:Image ID="imgProd" CssClass="ProdImg" runat="server" />
        <asp:Label runat="server" ID="ID" />
        <asp:Label runat="server" ID="Name" />
        <asp:Label runat="server" ID="price" />
        <asp:TextBox ID="txtQty" MaxLength="3" runat="server" Width="30px" />
        <asp:Button runat="server" ID="Addtocart" Text="Add To Cart" CommandName="AddToCart" ItemStyle-CssClass="btnCol" />

    If someone could please help me, that would be great. Thanks.

  • Indexing table with duplicates MySQL/MSSQL with millions of records

    - by Tesnep
    I need help with indexing in MySQL. I have a table in MySQL with the following columns:

      ID  Store_ID  Feature_ID  Order_ID  Viewed_Date  Deal_ID  IsTrial

    The ID is auto-generated. Store_ID goes from 1 to 8. Feature_ID goes from 1 to, let's say, 100. Viewed_Date is the date and time at which the row is inserted. IsTrial is either 0 or 1. You can ignore Order_ID and Deal_ID in this discussion.

    There are millions of rows in the table, and we have a reporting backend that needs the number of views in a certain period (or overall) where trial is 0, for a particular store id and a particular feature. The query takes the form of:

      select count(viewed_date)
      from theTable
      where viewed_date between '2009-12-01' and '2010-12-31'
        and store_id = '2'
        and feature_id = '12'
        and Istrial = 0

    In MSSQL you can have a filtered index to use for IsTrial. Is there anything similar to this in MySQL? Also, Store_ID and Feature_ID have a lot of duplicate data. I created an index using Store_ID and Feature_ID. Although this seems to have decreased the search period, I need better improvement than this. Right now I have more than 4 million rows. To search for a particular query like the one above, it looks at 3.5 million rows in order to give me the count of 500k rows.

    P.S. I forgot to add the viewed_date filter to the query originally; it is included now.
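
    MySQL has no filtered indexes, but a composite index with the equality columns first and the range column last serves the same query shape: the exact matches on store, feature and trial flag narrow the index range, and the BETWEEN then scans only that slice. A sketch using the names from the query above (not a tested fix):

      ALTER TABLE theTable
          ADD INDEX ix_store_feature_trial_viewed (store_id, feature_id, Istrial, viewed_date);

    Because viewed_date is the last column of the index, the count can be answered from the index alone, without touching the table rows.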

  • connecting to mysql from excel: ODBC driver does not support the requested properties.

    - by every_answer_gets_a_point
    I am trying to add data to MySQL from Excel. I am getting the above error on this line:

      rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic

    Here is my code:

      Dim oConn As ADODB.Connection

      Private Sub ConnectDB()
          Set oConn = New ADODB.Connection
          oConn.Open "DRIVER={MySQL ODBC 5.1 Driver};" & _
              "SERVER=localhost;" & _
              "DATABASE=employees;" & _
              "USER=root;" & _
              "PASSWORD=some_pass;" & _
              "Option=3"
      End Sub

      Function esc(txt As String)
          esc = Trim(Replace(txt, "'", "\'"))
      End Function

      Private Sub InsertData()
          Dim rs As ADODB.Recordset
          Set rs = New ADODB.Recordset
          ConnectDB
          With wsBooks
              For rowCursor = 2 To 11
                  strSQL = "INSERT INTO tutorial (author, title, price) " & _
                      "VALUES ('" & esc(.Cells(rowCursor, 1)) & "', " & _
                      "'" & esc(.Cells(rowCursor, 2)) & "', " & _
                      esc(.Cells(rowCursor, 3)) & ")"
                  rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic
              Next
          End With
      End Sub

    What's wrong with rs.Open strSQL, oConn, adOpenDynamic, adLockOptimistic? Why am I getting the ODBC error?

  • How would I associate a "Note" class to 4+ classes without creating a lookup table for each association?

    - by Gthompson83
    I'm creating a project tasklist application. I have Project, Section, Task and Issue classes, and would like to use one class to be able to add simple notes to any object instance of those classes. The Task and Issue tables both use a standard identity field as a primary key. The Section table has a two-field primary key. The Project table has a single int primary key defined by the user. Is there a way to associate the Note class with each of these without using a separate lookup table for each class? I'm not so sure my original idea is a decent way to implement this. I considered the following (each variable mapping to a field in the notes table):

      Private _NoteId As Integer
      Private _ProjectId As Integer
      Private _SectionId As Integer
      Private _SectionId2 As Integer
      Private _TaskId As Integer
      Private _IssueId As Integer
      Private _Note As String
      Private _UserId As Guid

    Then I would be able to write separate methods (getProjectNotes, getTaskNotes) to get the notes attached to each class. I started writing this a few weeks ago but got pulled away before I could finish. When revisiting this code today, my first thought was "this is retarded". Thoughts on drawbacks to this design?
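
    One schema-level alternative to both the per-class lookup tables and the one-nullable-column-per-parent design is a single polymorphic association: each note records which kind of parent it belongs to plus the parent's key. A sketch only, with hypothetical names and T-SQL types (the Guid suggests SQL Server); sections would use both key columns, everything else just the first:

      CREATE TABLE Notes (
          NoteId     int IDENTITY PRIMARY KEY,
          ParentType varchar(10) NOT NULL,     -- 'project', 'section', 'task' or 'issue'
          ParentKey1 int         NOT NULL,     -- first (or only) key field of the parent
          ParentKey2 int         NULL,         -- second key field, used only by sections
          Note       varchar(max) NOT NULL,
          UserId     uniqueidentifier NOT NULL
      );

    The trade-off is that the database can no longer enforce referential integrity on ParentKey1/ParentKey2; that guarantee moves into the per-class getProjectNotes/getTaskNotes methods.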

  • Two references to the same domain/entity model

    - by Sbossb
    Problem: I want to save the attributes of a model that have changed when a user edits them. Here's what I want to do:

      1. Retrieve edited view model
      2. Get domain model and map back updated values
      3. Call the update method on the repository
      4. Get the "old" domain model and compare the values of the fields
      5. Store the changed values (in JSON) into a table

    However, I am having trouble with step number 4. It seems that the Entity Framework doesn't want to hit the database again to get the model with the old values; it just returns the same entity I have.

    Attempted solutions: I have tried using the Find() and the SingleOrDefault() methods, but they just return the model I currently have.

    Example code:

      private string ArchiveChanges(T updatedEntity)
      {
          //Here is the problem!
          //oldEntity is the same as updatedEntity
          T oldEntity = DbSet.SingleOrDefault(x => x.ID == updatedEntity.ID);

          Dictionary<string, object> changed = new Dictionary<string, object>();

          foreach (var propertyInfo in typeof(T).GetProperties())
          {
              var property = typeof(T).GetProperty(propertyInfo.Name);

              //Get the old value and the new value from the models
              var newValue = property.GetValue(updatedEntity, null);
              var oldValue = property.GetValue(oldEntity, null);

              //Check to see if the values are equal
              if (!object.Equals(newValue, oldValue))
              {
                  //Values have changed ... log it
                  changed.Add(propertyInfo.Name, newValue);
              }
          }

          var ser = new System.Web.Script.Serialization.JavaScriptSerializer();
          return ser.Serialize(changed);
      }

      public override void Update(T entityToUpdate)
      {
          //Do something with this
          string json = ArchiveChanges(entityToUpdate);

          entityToUpdate.AuditInfo.Updated = DateTime.Now;
          entityToUpdate.AuditInfo.UpdatedBy = Thread.CurrentPrincipal.Identity.Name;
          base.Update(entityToUpdate);
      }

  • in haskell, why do I need to specify type constraints, why can't the compiler figure them out?

    - by Steve
    Consider the function, add a b = a + b This works: *Main> add 1 2 3 However, if I add a type signature specifying that I want to add things of the same type: add :: a -> a -> a add a b = a + b I get an error: test.hs:3:10: Could not deduce (Num a) from the context () arising from a use of `+' at test.hs:3:10-14 Possible fix: add (Num a) to the context of the type signature for `add' In the expression: a + b In the definition of `add': add a b = a + b So GHC clearly can deduce that I need the Num type constraint, since it just told me: add :: Num a => a -> a -> a add a b = a + b Works. Why does GHC require me to add the type constraint? If I'm doing generic programming, why can't it just work for anything that knows how to use the + operator? In C++ template programming, you can do this easily: #include <string> #include <cstdio> using namespace std; template<typename T> T add(T a, T b) { return a + b; } int main() { printf("%d, %f, %s\n", add(1, 2), add(1.0, 3.4), add(string("foo"), string("bar")).c_str()); return 0; } The compiler figures out the types of the arguments to add and generates a version of the function for that type. There seems to be a fundamental difference in Haskell's approach, can you describe it, and discuss the trade-offs? It seems to me like it would be resolved if GHC simply filled in the type constraint for me, since it obviously decided it was needed. Still, why the type constraint at all? Why not just compile successfully as long as the function is only used in a valid context where the arguments are in Num? Thank you.
