Search Results

Search found 9929 results on 398 pages for 'azure tables'.


  • How to use SQLAlchemy to dump an SQL file from query expressions to bulk-insert into a DBMS?

    - by Mahmoud Abdelkader
    Please bear with me as I explain the problem and how I tried to solve it; my question on how to improve it is at the end.

    I have a 100,000-line CSV file from an offline batch job that I needed to insert into the database as its proper models. Ordinarily, if this were a fairly straightforward load, it could be done trivially by just munging the CSV file to fit a schema, but I had to do some external processing that requires querying, and it's just much more convenient to use SQLAlchemy to generate the data I want. The data I want here is 3 models that represent 3 pre-existing tables in the database, and each subsequent model depends on the previous model. For example:

        Model C --> Foreign Key --> Model B --> Foreign Key --> Model A

    So the models must be inserted in the order A, B, and C. I came up with a producer/consumer approach:

      - instantiate a multiprocessing.Process which contains a threadpool of 50 persister threads that have a threadlocal connection to a database
      - read a line from the file using the csv DictReader
      - enqueue the dictionary to the process, where each thread creates the appropriate models by querying the right values and persists the models in the appropriate order

    This was faster than a non-threaded read/persist, but way slower than bulk-loading a file into the database. The job finished persisting after about 45 minutes. For fun, I decided to write it in SQL statements, and it took 5 minutes. Writing the SQL statements took me a couple of hours, though.

    So my question is: could I have used a faster method to insert rows using SQLAlchemy? As I understand it, SQLAlchemy is not designed for bulk insert operations, so this is less than ideal. This leads to my question: is there a way to generate the SQL statements using SQLAlchemy, throw them in a file, and then just bulk-load them into the database? I know about str(model_object), but it does not show the interpolated values. I would appreciate any guidance on how to do this faster. Thanks!
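
    A hedged sketch of the dump-to-file idea: newer SQLAlchemy releases can render a statement with its parameter values inlined via literal_binds, which gets around the str(model_object) limitation. The engine URL, table objects, and row iterables below are placeholders, not the asker's actual schema:

        # sketch: render INSERT statements with literal values, then bulk-load the file
        from sqlalchemy import create_engine

        engine = create_engine("postgresql://user:pass@localhost/db")  # hypothetical DSN

        def dump_inserts(table, rows, out):
            for row in rows:
                stmt = table.insert().values(**row)
                # literal_binds inlines the values instead of emitting placeholders
                compiled = stmt.compile(dialect=engine.dialect,
                                        compile_kwargs={"literal_binds": True})
                out.write(str(compiled) + ";\n")

        with open("bulk.sql", "w") as out:
            dump_inserts(model_a_table, a_rows, out)  # respect the A, B, C order
            dump_inserts(model_b_table, b_rows, out)
            dump_inserts(model_c_table, c_rows, out)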

    Read the article

  • How to read a database record with a DataReader and add it to a DataTable

    - by Olga
    Hello, I have some data in an Oracle database table (around 4 million records) which I want to transform and store in an MSSQL database using ADO.NET. So far I used (for much smaller tables) a DataAdapter to read the data out of the Oracle database and add the DataTable to a DataSet for further processing. When I tried this with my huge table, an OutOfMemoryException was thrown. (I assume this is because I cannot load the whole table into memory.) :)

    Now I am looking for a good way to perform this extract/transfer/load without storing the whole table in memory. I would like to use a DataReader and read the single data records into a DataTable. If there are about 100k rows in it, I would like to process them and clear the DataTable afterwards (to have free memory again). Now I would like to know how to add a single data record as a row to a DataTable with ADO.NET, and how to completely clear the DataTable out of memory. My code so far:

        Dim dt As New DataTable
        Dim count As Int32
        count = 0

        ' reads data records from oracle database table
        While rdr.Read()
            ' read n records and add them to a dataTable
            While count < 10000
                dt.Rows.Add(????)
                count = count + 1
            End While
            ' transform data in the dataTable, and insert it to the destination
            ' flush the dataTable after insertion
            count = 0
        End While

    Thank you very much for your response!
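
    The chunked pattern the question is after, sketched in Python with generic DB-API connections to keep it language-neutral (src_conn and dst_conn are assumed open connections, the table and columns are hypothetical, and the ? placeholders assume a qmark-style driver such as pyodbc):

        # stream rows in fixed-size batches so only one chunk is in memory at a time
        BATCH = 10000

        src = src_conn.cursor()
        dst = dst_conn.cursor()
        src.execute("SELECT id, name, hire_date FROM employees")

        while True:
            rows = src.fetchmany(BATCH)          # pull at most BATCH rows
            if not rows:
                break
            transformed = [transform(r) for r in rows]   # transform() is a placeholder
            dst.executemany(
                "INSERT INTO employees_copy (id, name, hire_date) VALUES (?, ?, ?)",
                transformed)
            dst_conn.commit()                    # release the batch before the next read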

    Read the article

  • MySQL & PHP - select/option lists and showing data to users that still allows me to generate queries

    - by Andrew Heath
    Sorry for the unclear title, an example will clear things up:

        TABLE: Scenario_victories
        ID  scenid   timestamp            userid  side  playdate
        1   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        2   RtBr001  2010-03-15 17:13:36  7       1     2010-03-10
        3   RtBr001  2010-03-15 17:13:51  7       2     2010-03-10

    ID and timestamp are auto-inserted by the database when the other 4 fields are added. The first thing to note is that a user can record multiple playings of the same scenario (scenid) on the same date (playdate), possibly with the same outcome (side = winner). Hence the need for the unique ID, and timestamps for good measure.

    Now, on their user page, I'm displaying their recorded play history in a <select><option>... list form with 2 buttons at the end - Delete Record and Go to Scenario. My script takes the scenid and, after hitting a few other tables, returns with something more user-friendly like:

        (playdate)   (from scenid)             (from side)
        #########################################################
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Germany, Hungary won #
        # 2010-03-10 Road to Berlin #1 -- Soviet Union won     #
        #########################################################
        [Delete Record] [Go To Scenario]

    In HTML:

        <select name="history" size=3>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Germany, Hungary won</option>
            <option>2010-03-10 Road to Berlin #1 -- Soviet Union won</option>
        </select>

    Now, if you were to highlight the first record and click Go to Scenario, there is enough information there for me to parse it and produce the exact scenario you want to see. However, if you were to select Delete Record, there is not - I have the playdate and I can parse the scenid and side from what's listed, but in this example all three records would have the same result. I appear to have painted myself into a corner. Does anyone have a suggestion as to how I can get some unique identifying data (ID and/or timestamp) to ride along on this form without showing it to the user? PHP-only please, I must be NoScript compliant!

    Read the article

  • Strange error when filling a data adapter.

    - by Tim C
    I am receiving the following error in my code (C#, .NET 3.5, VS2008) when I try to connect to an Excel sheet and fill an OleDbDataAdapter with the results of a query. First the error:

        Attempted to read or write protected memory. This is often an indication that other memory is corrupt.

    And here is the code, which is honestly pretty simple:

        var excelFileName = string.Format("c:/Metadata_Tool.xlsm");
        var connectionString = string.Format(
            "Provider=Microsoft.ACE.OLEDB.12.0; Data Source={0}; Extended Properties=Excel 12.0;HDR=YES;",
            excelFileName);

        var adapter = new OleDbDataAdapter("Select * FROM [Video Tagging XML]", connectionString);
        var ds = new DataSet();
        adapter.Fill(ds, "VTX");
        DataTable data = ds.Tables["VTX"];

        foreach (DataRow myRow in data.Rows)
        {
            foreach (DataColumn myColumn in data.Columns)
            {
                Console.Write("\t{0}", myRow[myColumn]);
            }
            Console.WriteLine();
        }
        Console.ReadLine();

    I get the error on the line adapter.Fill(ds, "VTX");. I did find a Microsoft forum post saying to turn on JIT optimization in VS2008 from the Tools/Options/Debug/General menu, but this did not seem to help. Any help would be greatly appreciated, thanks!

    Read the article

  • Hibernate bit array to entity mapping

    - by teabot
    I am trying to map a normalized Java model to a legacy database schema using Hibernate 3.5. One particular table encodes foreign keys in a one-to-many relationship as a bit array column. Consider tables 'person' and 'club' that describe people's affiliations to clubs:

        person:                club:
        .----.------.          .----.---------.---------.-----------------.
        | id | name |          | id | name    | members | binary(members) |
        |----+------|          |----+---------+---------+-----------------|
        | 1  | Bob  |          | 10 | Cricket | 0       | 000             |
        | 2  | Joe  |          | 11 | Tennis  | 5       | 101             |
        | 3  | Sue  |          | 12 | Cooking | 7       | 111             |
        '----'------'          | 13 | Golf    | 4       | 100             |
                               '----'---------'---------'-----------------'

    So hopefully it is clear that person.id is used as the bit index in the bit array club.members. In this example the members column tells us that: no one is a member of Cricket, Bob/Sue - Tennis, Bob/Sue/Joe - Cooking, and Sue - Golf. In my Java domain I'd like to declare this with entities like so:

        class Person {
            private int id;
            private String name;
            ...
        }

        class Club {
            private Set<Person> members;
            private int id;
            private String name;
            ...
        }

    I am assuming that I must use a UserType implementation, but have been unable to find any examples where the items described by the user type are references to entities - not literal field values - or composites thereof. Additionally, I am aware that I'll have to consider how the person entities are fetched when a club instance is loaded. Can anyone tell me how I can tame this legacy schema with Hibernate?

    Read the article

  • Understanding evaluation of expressions containing '++' and '->' operators in C.

    - by Leif Ericson
    Consider this example:

        struct {
            int num;
        } s, *ps;

        s.num = 0;
        ps = &s;
        ++ps->num;
        printf("%d", s.num); /* Prints 1 */

    It prints 1. So I understand that it is because, according to operator precedence, -> is higher than ++, so the value ps->num (which is 0) is fetched first and then the ++ operator operates on it, incrementing it to 1.

        struct {
            int num;
        } s, *ps;

        s.num = 0;
        ps = &s;
        ps++->num;
        printf("%d", s.num); /* Prints 0 */

    In this example I get 0 and I don't understand why; the explanation of the first example should be the same for this example. But it seems that this expression is evaluated as follows: first, the operator ++ operates, and it operates on ps, so it increments it to the next struct. Only then does -> operate, and it does nothing because it just fetches the num field of the next struct and does nothing with it. But this contradicts the precedence of operators, which says that -> has higher precedence than ++. Can someone explain this behavior?

    Edit: After reading two answers which refer to C++ precedence tables indicating that the prefix ++/-- operators have lower precedence than ->, I did some googling and came up with this link that states that this rule also applies to C itself. It fits exactly and fully explains this behavior, but I must add that the table in this link contradicts a table in my own copy of K&R ANSI C. So if you have suggestions as to which source is correct, I would like to know. Thanks.

    Read the article

  • Full text index requires dropping and recreating - why?

    - by Amjid Qureshi
    Hi all,

    So I've got a web app running on .NET 3.5 connected to a SQL 2005 box. We do scheduled releases every 2 weeks. About 14 tables out of 250 are full text indexed. After not every release, but a few too many, the indexes crap out. They seem to have data in there, but when we try to search them from the front end or SQL enterprise we get timeouts/hangs.

    We have a script that disables the indexes, drops them, deletes the catalog and then recreates the indexes. This fixes the problem 99 times out of 100, and the one other time, we run the script again and it all works. We have tried just rebuilding the fulltext index, but that doesn't fix the issue.

    My question is: why do we have to do this? What can we do to sort the index out? Here is a bit of the script:

        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti
                   WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            ALTER FULLTEXT INDEX ON [dbo].[Address] DISABLE
        GO

        IF EXISTS (SELECT * FROM sys.fulltext_indexes fti
                   WHERE fti.object_id = OBJECT_ID(N'[dbo].[Address]'))
            DROP FULLTEXT INDEX ON [dbo].[Address]
        GO

        IF EXISTS (SELECT * FROM sysfulltextcatalogs ftc
                   WHERE ftc.name = N'DbName.FullTextCatalog')
            DROP FULLTEXT CATALOG [DbName.FullTextCatalog]
        GO

        -- may need this line if we get an error
        BACKUP LOG SMS2 WITH TRUNCATE_ONLY

        CREATE FULLTEXT CATALOG [DbName.FullTextCatalog]
            ON FILEGROUP [FullTextCatalogs]
            IN PATH N'F:\Data'
            AS DEFAULT
            AUTHORIZATION [dbo]

        CREATE FULLTEXT INDEX ON [Address](CommonPlace LANGUAGE 'ENGLISH')
            KEY INDEX PK_Address ON [DbName.FullTextCatalog]
            WITH CHANGE_TRACKING AUTO
        GO

    Read the article

  • Duplicate column name by JPA with @ElementCollection and @Inheritance

    - by gerry
    I've created the following scenario:

        @javax.persistence.Entity
        @Inheritance(strategy = InheritanceType.TABLE_PER_CLASS)
        public class MyEntity implements Serializable {
            @Id
            @GeneratedValue
            protected Long id;
            ...
            @ElementCollection
            @CollectionTable(name="ENTITY_PARAMS")
            @MapKeyColumn(name = "ENTITY_KEY")
            @Column(name = "ENTITY_VALUE")
            protected Map<String, String> parameters;
            ...
        }

    As well as:

        @javax.persistence.Entity
        public class Sensor extends MyEntity {
            @Id
            @GeneratedValue
            protected Long id;
            ...
            // so here "protected Map<String, String> parameters;" is inherited !!!!
            ...
        }

    Running this example, no tables are created and I get the following message:

        WARNING: Got SQLException executing statement "CREATE TABLE ENTITY_PARAMS (Entity_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255), ENTITY_KEY VARCHAR(255), Sensor_ID BIGINT NOT NULL, ENTITY_VALUE VARCHAR(255))": com.mysql.jdbc.exceptions.jdbc4.MySQLSyntaxErrorException: Duplicate column name 'ENTITY_VALUE'

    I also tried overriding the attributes on the Sensor class...

        @AttributeOverrides({
            @AttributeOverride(name = "ENTITY_KEY", column = @Column(name = "SENSOR_KEY")),
            @AttributeOverride(name = "ENTITY_VALUE", column = @Column(name = "SENSOR_VALUE"))
        })

    ... but the same error. Can anybody help me?

    Read the article

  • Several Small, Specific, MySQL Query Cache Questions

    - by Robbie
    I've looked all over the web and in the questions asked here about MySQL caching, and most of them seem very non-specific about a couple of questions that I have about performance and MySQL query caching. Specifically I want answers to these questions; assume for all of them that I have the query cache enabled and it is of type 2, or "DEMAND":

    1. Is the query cache per table, per database, or per server? Meaning, if I have the cache size set to X and have T tables and D databases, will I be caching TX, DX, or X amount of data?

    2. If I have table T1, which I regularly use the SQL_CACHE hint on for SELECT queries, and table T2, which I never do, when I query T2 with a SELECT query will it check through the cache first before performing the query? (Note: I don't want to use SQL_NO_CACHE for all T2 queries.)

    3. Assume the same situation as in question 2. If I alter (INSERT, DELETE) table T2, will any processing be done on the cache?

    4. For answers to 2 and 3, is this processing time negligible if T2 is constantly being altered and is the target of a majority of my SELECT queries?

    Read the article

  • Converting table columns to key value pairs

    - by TomD1
    I am writing a PL/SQL procedure that loads some data from Schema A into Schema B. They are both very different schemas and I can't change the structure of Schema B. Columns in various tables in Schema A (joined together in a view) need to be inserted into Schema B as key=value pairs in 2 columns of a table, each on a separate row. For example, an employee's first name might be present as employee.firstname in Schema A, but would need to be entered in Schema B as:

        id=>1, key=>'A123', value=>'Smith'

    There are almost 100 keys, with the potential for more to be added in future. This means I don't really want to hardcode any of these keys. Sample code:

        create table schema_a_employees (
            emp_id    number(8,0),
            firstname varchar2(50),
            surname   varchar2(50)
        );

        insert into schema_a_employees values ( 1, 'James', 'Smith' );
        insert into schema_a_employees values ( 2, 'Fred', 'Jones' );

        create table schema_b_values (
            emp_id    number(8,0),
            the_key   varchar2(5),
            the_value varchar2(200)
        );

    I thought an elegant solution would most likely involve a lookup table to determine what value to insert for each key, and wouldn't involve effectively hardcoding dozens of similar statements like...

        insert into schema_b_values ( 1, 'A123', v_firstname );
        insert into schema_b_values ( 1, 'B123', v_surname );

    What I'd like to be able to do is have a local lookup table in Schema A that lists all the keys from Schema B, along with a column that gives the name of the column in the table in Schema A that should be used to populate it. E.g. key "A123" in Schema B should be populated with the value of the column "firstname" in Schema A:

        create table schema_a_lookup (
            the_key              varchar2(5),
            the_local_field_name varchar2(50)
        );

        insert into schema_a_lookup values ( 'A123', 'firstname' );
        insert into schema_a_lookup values ( 'B123', 'surname' );

    But I'm not sure how I could dynamically use values from the lookup table to tell Oracle which columns to use. So my question is: is there an elegant solution to populate the schema_b_values table with the data from schema_a_employees without hardcoding for every possible key (i.e. A123, B123, etc)? Cheers.
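
    To illustrate the shape of the lookup-driven unpivot, here is a sketch in Python over a DB-API connection (in pure PL/SQL the same idea would use dynamic SQL via EXECUTE IMMEDIATE). Table and column names follow the sample code; conn is an assumed open connection and the :1 placeholders assume a cx_Oracle-style driver:

        # load the key -> source-column mapping, then emit one key/value row per key
        cur = conn.cursor()
        cur.execute("SELECT the_key, the_local_field_name FROM schema_a_lookup")
        key_to_column = dict(cur.fetchall())   # e.g. {'A123': 'firstname', ...}

        cur.execute("SELECT * FROM schema_a_employees")
        col_names = [d[0].lower() for d in cur.description]
        rows = cur.fetchall()

        ins = conn.cursor()
        for row in rows:
            rec = dict(zip(col_names, row))
            for key, column in key_to_column.items():
                ins.execute(
                    "INSERT INTO schema_b_values (emp_id, the_key, the_value) "
                    "VALUES (:1, :2, :3)",
                    (rec["emp_id"], key, rec[column]))
        conn.commit()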

    Read the article

  • Sorting GridView Formed With Data Set

    - by nani
    The following code, for sorting a GridView formed with a DataSet, comes from: http://www.highoncoding.com/Articles/176_Sorting_GridView_Manually_.aspx

    But it is not displaying any output. There is no problem with the SQL connection. I am unable to trace the error, please help me. Thank you.

        public partial class _Default : System.Web.UI.Page
        {
            private const string ASCENDING = " ASC";
            private const string DESCENDING = " DESC";

            private DataSet GetData()
            {
                SqlConnection cnn = new SqlConnection("Server=localhost;Database=Northwind;Trusted_Connection=True;");
                SqlDataAdapter da = new SqlDataAdapter("SELECT TOP 5 firstname,lastname,hiredate FROM EMPLOYEES", cnn);
                DataSet ds = new DataSet();
                da.Fill(ds);
                return ds;
            }

            public SortDirection GridViewSortDirection
            {
                get
                {
                    if (ViewState["sortDirection"] == null)
                        ViewState["sortDirection"] = SortDirection.Ascending;
                    return (SortDirection)ViewState["sortDirection"];
                }
                set { ViewState["sortDirection"] = value; }
            }

            protected void GridView1_Sorting(object sender, GridViewSortEventArgs e)
            {
                string sortExpression = e.SortExpression;

                if (GridViewSortDirection == SortDirection.Ascending)
                {
                    GridViewSortDirection = SortDirection.Descending;
                    SortGridView(sortExpression, DESCENDING);
                }
                else
                {
                    GridViewSortDirection = SortDirection.Ascending;
                    SortGridView(sortExpression, ASCENDING);
                }
            }

            private void SortGridView(string sortExpression, string direction)
            {
                // You can cache the DataTable for improving performance
                DataTable dt = GetData().Tables[0];
                DataView dv = new DataView(dt);
                dv.Sort = sortExpression + direction;
                GridView1.DataSource = dv;
                GridView1.DataBind();
            }
        }

    The aspx page:

        <asp:GridView ID="GridView1" runat="server" AllowSorting="True"
            OnSorting="GridView1_Sorting">
        </asp:GridView>

    Read the article

  • SQL Where Clause Against View

    - by Adam Carr
    I have a view (actually, it's a table-valued function, but the observed behavior is the same in both) that inner joins and left outer joins several other tables. When I query this view with a where clause similar to

        SELECT * FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'

    ... the query is very slow, but if I do the following...

        SELECT * FROM [v_MyView] WHERE [ID] IN
        (
            SELECT [ID] FROM [v_MyView] WHERE [Name] LIKE '%Doe, John%'
        )

    it is MUCH faster. The first query takes at least 2 minutes to return, if not longer, whereas the second query returns in less than 5 seconds. Any suggestions on how I can improve this? If I run the whole command as one SQL statement (without the use of a view) it is very fast as well.

    I believe this happens because of how a view should behave as a table: if a view has OUTER JOINs, GROUP BYs or TOP ##, the results could differ depending on whether the where clause is interpreted before or after the view executes. My question is: why wouldn't SQL optimize my first query into something as efficient as my second query?

    Read the article

  • Why is ipdb not displaying stdout after my input?

    - by BryanWheelock
    I can't figure out why ipdb is not displaying stdout properly. I'm trying to debug why a test is failing, so I attempt to use the ipdb debugger. For some reason my input seems to be accepted, but the stdout is not displayed until I (c)ontinue. Is this something broken in ipdb? It makes it very difficult to debug a program.

    Below is an example ipdb session. Notice how I attempt to display the values of the attributes with:

        user.is_authenticated()
        user_profile.reputation
        user.is_superuser

    The results are not displayed until 'begin captured stdout':

        In [13]: !python manage.py test
        Creating test database...
        <SNIP: loading-tables output removed>
        nosetests ...E..
        /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
        user.is_authenticated()
        user_profile.reputation
        user.is_superuser
        c
        F
        /Users/Bryan/work/APP/forum/auth.py(93)can_retag_questions()
             92     import ipdb; ipdb.set_trace()
        ---> 93     return user.is_authenticated() and (
             94         RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
        c
        .....EE......
        FAIL: test_can_retag_questions (APP.forum.tests.test_views.AuthorizationFunctionsTestCase)
        Traceback (most recent call last):
          File "/Users/Bryan/work/APP/../APP/forum/tests/test_views.py", line 71, in test_can_retag_questions
            self.assertTrue(auth.can_retag_questions(user))
        AssertionError:
        -------------------- begin captured stdout << ---------------------
        ipdb> True
        ipdb> 4001
        ipdb> False
        ipdb>
        --------------------- end captured stdout << ----------------------
        Ran 20 tests in 78.032s
        FAILED (errors=3, failures=1)
        Destroying test database...
        In [14]:

    Here is the actual function I'm trying to test:

        def can_retag_questions(user):
            """Determines if a User can retag Questions."""
            user_profile = user.get_profile()
            import ipdb; ipdb.set_trace()
            return user.is_authenticated() and (
                RETAG_OTHER_QUESTIONS <= user_profile.reputation < EDIT_OTHER_POSTS or
                user.is_superuser)

    I've also tried to use pdb, but that doesn't display anything. I see my test progress ...., and then nothing, and it is not responsive to keyboard input. Is this a problem with readline?
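
    The "begin captured stdout" banner suggests the test runner (nose) is capturing stdout, which would explain the delayed output. A hedged sketch of running the suite with capture disabled, assuming nose is the runner behind manage.py test:

        # nose's -s / --nocapture flag leaves stdout attached to the terminal,
        # so the ipdb prompt echoes immediately instead of after (c)ontinue
        import nose

        nose.run(argv=["nosetests", "-s"])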

    Read the article

  • How to update a DB table with a DataSet

    - by Paul
    I am a beginner with ADO.NET, and I am trying to update a table with a DataSet. On the client side I have a DataSet with one table. I send this DataSet to the service side (it is an ASP.NET Web Service). On the service side I try to update the table in the database, but it doesn't work.

        public bool Update(DataSet ds)
        {
            SqlConnection conn = null;
            SqlDataAdapter da = null;
            SqlCommand cmd = null;
            try
            {
                string sql = "UPDATE * FROM Tab1";
                string connStr = WebConfigurationManager.ConnectionStrings["Employees"].ConnectionString;
                conn = new SqlConnection(connStr);
                conn.Open();
                cmd = new SqlCommand(sql, conn);
                da = new SqlDataAdapter(sql, conn);
                da.UpdateCommand = cmd;
                da.Update(ds.Tables[0]);
                return true;
            }
            catch (Exception ex)
            {
                throw ex;
            }
            finally
            {
                if (conn != null) conn.Close();
                if (da != null) da.Dispose();
            }
        }

    Where can the problem be?

    Read the article

  • How to assign .net membership roles to individual database records

    - by mdresser
    I'm developing a system where we want to restrict the availability of information displayed to users based on their roles. E.g. I have a table called EventType (ID, EventTypeDescription) which contains the following records:

        1, 'Basic Event'
        2, 'Intermediate Event'
        3, 'Admin Event'

    What I need to achieve is to filter the records returned based on the username (and hence role) of the logged-in user. E.g. if an advanced user is logged in they will see all the event types; if the standard user is logged in they will only see the basic event type, etc.

    Ideally I'd like to do this in a way which can be easily extended to other tables as necessary, so I'd like to avoid simply adding a 'Roles' field to each table where the data is user context sensitive. One idea I'm thinking of is to create some kind of permissions table like:

        PermissionsTable
        (
            ID,
            Aspnet_RoleId,
            TableName,
            PrimaryKeyValue
        )

    This has the obvious drawback of having to use the table name to switch which table to join onto.

    Edit: In the absence of any better suggestions, I'm going to go with the last idea I mentioned, but instead of having a TableName field, I'm going to normalise the TableName out to its own table as follows:

        TableNames
        (
            ID,
            TableName
        )

        UserPermissionsTable
        (
            ID,
            Aspnet_UserId,
            TableID,
            PrimaryKeyValue
        )
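
    A sketch of how the normalised design could be queried, with the SQL embedded in Python over a DB-API connection (names follow the schema above; conn is an assumed open connection and the ? placeholder assumes a qmark-style driver):

        # return only the EventType rows the given user has been granted
        FILTERED_EVENT_TYPES = """
            SELECT et.ID, et.EventTypeDescription
            FROM   EventType et
            JOIN   UserPermissionsTable up ON up.PrimaryKeyValue = et.ID
            JOIN   TableNames tn           ON tn.ID = up.TableID
            WHERE  tn.TableName = 'EventType'
              AND  up.Aspnet_UserId = ?
        """

        def visible_event_types(conn, user_id):
            cur = conn.cursor()
            cur.execute(FILTERED_EVENT_TYPES, (user_id,))
            return cur.fetchall()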

    Read the article

  • Magento resource model for table with compound primary key

    - by sdek
    I am creating a custom module for a Magento ecommerce site, and the module will center around a new (i.e. custom) table that has a compound/composite primary key; that is, the table has two columns that make up the primary key. Does anybody know how to create your models/resource models based on a table with a compound key?

    To give a few more details, I have looked up several tutorials and also used the excellent moduleCreator script. But it seems like all the tutorials revolve around the table having a PK with just one column in it. Something like this:

        class <Namespace>_<Module>_Model_Mysql4_<Module> extends Mage_Core_Model_Mysql4_Abstract
        {
            public function _construct()
            {
                $this->_init('<module_alias>/<table_alias>', '<table_primary_key_id>');
            }
        }

    Also, I just noticed that looking at the database model, almost all tables have a single primary key. I understand this has much to do with the EAV-style DB structure, but is it still possible to use a table with a compound PK? I want to stick with the Magento framework/conventions if possible. Is it discouraged? Should I just change the structure of my custom table to have some dummy id column? I do have the ability to do that, but geez!

    (Another side note: it looks like the Zend Framework provides a way to base a class on a table with a compound primary key (see Example #20 on this page - about half-way down), so it seems that the Magento framework should also provide for it... I just don't see how.)

    Read the article

  • Delphi 2009 dbExpress and Interbase: Unicode migration steps and risks?

    - by mjustin
    Currently, our database uses Win1252 as the only character encoding. We will have to support Unicode in the database tables soon, which means we have to perform this migration for four databases and around 80 Delphi applications which run in-house in a 24/7 environment. Are there recommendations for database migrations to UTF-8 (or UNICODE_FSS) for Delphi applications? Some questions are listed below. Many thanks in advance for your answers!

      - Are there tools which help with the migration of the existing databases (sizes between 250 MB and 2 GB, no Blob fields), by dumping the data, recreating the database with UNICODE_FSS or UTF-8, and loading the data back?
      - Are there known problems with Delphi 2009, dbExpress and Interbase 7.5 related to Unicode character sets?
      - Would you recommend upgrading the databases to Interbase 2009 first? (This upgrade is planned but does not have a high priority.)
      - Can we simply migrate the database and Delphi will handle the Unicode character sets automatically, or will we have to change all character field types in every Datamodule (dfm and source code) too?
      - Which strategy would you recommend for working on the migration in parallel with the normal development and maintenance of the existing application? The application runs in-house, so development and database administration are done internally.

    Update: one problem I found now is that there are two different persistent field types for Unicode and non-Unicode character fields. For the existing database, dbExpress creates TStringField objects. For the Unicode database fields, dbExpress creates (or expects!) TWideStringField objects. This looks like a lot of work lies ahead. While we could try to avoid persistent fields (and add calculated fields at run time), of course we would prefer a solution which does not require so many changes in existing units and DFM files.

    Read the article

  • Setting database-agnostic default column timestamp using Hibernate

    - by unsquared
    I'm working on a Java project full of Hibernate (3.3.1) mapping files that have the following sort of declaration for most domain objects:

        <property name="dateCreated" generated="insert">
            <column name="date_created" default="getdate()" />
        </property>

    The problem here is that getdate() is an MSSQL-specific function, and when I'm using something like H2 to test subsections of the project, H2 screams that getdate() isn't a recognized function. Its own timestamping function is current_timestamp(). I'd like to be able to keep working with H2 for testing, and wanted to know whether there was a way of telling Hibernate "use this database's own mechanism for retrieving the current timestamp". With H2, I've come up with the following solution:

        CREATE ALIAS getdate AS $$
        java.util.Date now() {
            return new java.util.Date();
        }
        $$;
        CALL getdate();

    It works, but is obviously H2-specific. I've tried extending H2Dialect and registering the function getdate(), but that doesn't seem to be invoked when Hibernate is creating tables. Is it possible to abstract the idea of a default timestamp away from the specific database engine?

    Read the article

  • Rails - Logic for finding info from a :has_many :through needed!

    - by Jty.tan
    I have 3 relevant tables: User, Orders, and Viewables. The idea is that each User has many Orders, but at the same time, each User can view specific other Orders that belong to other Users. So Viewables has the attributes user_id and order_id, and Orders has a :has_many :Users, :through => :viewables.

    Is it possible to do a find through an Order's view? Something like:

        @viewable_orders = Orders.find(:all, :conditions => ["Viewable.user_id=?", 1])

    to get a list of Orders which are viewable by user_id=1. (This doesn't work, else I wouldn't be asking. :( )

    The idea is that I can do something like a sidebar where the current user (the logged-in one) can view a list of other people's orders that he can view. For example, three other Users who have some Orders that he can view should eventually be displayed like this:

        Jack (2)
            Basic Order (registry_id: 1)
            New Order (registry_id: 29)
        Amy (4)
            Short Order (registry_id: 12)
        Jill (5)
            Hardware Order (14)
            Pink Order (17)
            Software Order (76)

    (The numbers in brackets are the respective user_id or registry_id.)

    So to find the list of all of the orders that the current user can view (assuming the user_id of the current user is 1), I would do:

        @viewable_orders = Viewable.find(:all, :conditions => ["user_id=?", 1])

    And that would give me the collection of the above 6 registries. Now, the easiest way to do this is for me to just have a list like:

        Jill's Hardware Order
        Jill's Pink Order
        Amy's Short Order
        etc

    But that gets ugly for long lists. Thanks!

    Read the article

  • Reading a dbase file without the standard .dbf extension

    - by Nathan W
    I am trying to read a file which is just a dbase file but without the standard extension; the file is something like Test.Dat. I am using this block of code to try and read the file:

        string ConnectionString = @"Provider=Microsoft.Jet.OLEDB.4.0; Data Source=C:\Temp;Extended Properties=dBase III";

        OleDbConnection dBaseConnection = new OleDbConnection(ConnectionString);
        dBaseConnection.Open();

        OleDbDataAdapter oDataAdapter = new OleDbDataAdapter("SELECT * FROM Test", ConnectionString);
        DataSet oDataSet = new DataSet();
        oDataAdapter.Fill(oDataSet); // I get the error right here...
        DataTable oDataTable = oDataSet.Tables[0];

        foreach (DataRow dr in oDataTable.Rows)
        {
            Console.WriteLine(dr["Name"]);
        }

    Of course this crashes because it can't find the dbase file called Test, but if I rename the file to Test.dbf it works fine. I can't really rename the file all the time because a third-party application uses it as its file format. Does anyone know a way to read a dbase file without a standard extension in C#? Thanks.
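
    One common workaround (language-agnostic, sketched here in Python) is to copy the file to a temporary name with the extension the driver expects, read it, and discard the copy. The source path follows the question; read_dbf is a placeholder for whatever actually reads the table:

        # copy Test.Dat to Test.dbf in a scratch directory, then point the driver there
        import os
        import shutil
        import tempfile

        src = r"C:\Temp\Test.Dat"
        scratch = tempfile.mkdtemp()
        dst = os.path.join(scratch, "Test.dbf")   # a name the driver will accept
        shutil.copy2(src, dst)
        try:
            read_dbf(dst)          # hypothetical reader for the renamed copy
        finally:
            shutil.rmtree(scratch)  # clean up the temporary copy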

    Read the article

  • Execute SQL on CSV files via JDBC

    - by Markos Fragkakis
    Dear all, I need to apply an SQL query to CSV files (comma-separated text files). My SQL is predefined from another tool, and is not eligible to change. It may contain embedded selects and table aliases in the FROM part. For my task I have found two open-source (this is a project requirement) libraries that provide JDBC drivers: CsvJdbc XlSQL JBoss Teiid Create an Apache Derby DB, load all CSVs as tables and execute the query. These are the problems I encountered: it does not accept the syntax of the SQL (it uses internal selects and table aliases). Furthermore, it has not been maintained since 2004. I could not get it to work, as it has as dependency a SAX Parser that causes exception when parsing other documents. Similarly, no change since 2004. Have not checked if it supports the syntax, but seems like an overhead. It needs several entities defines (Virtual Databases, Bindings). From the mailing list they told me that last release supports runtime creation of required objects. Has anyone used it for such simple task (normally it can connect to several types of data, like CSV, XML or other DBS and create a virtual, unified one)? Can this even be done easily? From the 4 things I considered/tried, only 3 and 4 seem to me viable. Any advice on these, or any other way in which I can query my CSV files? Cheers

    Read the article

  • Corrupted MySQL table causes crash in mysql.h (C++)

    - by Francesco
    I've created a very simple MySQL class in C++, but when MySQL crashes, table indexes become corrupted, and all my C++ programs crash too, because they seem unable to recognize the corrupted table and allow me to handle the issue.

        Q_RES = mysql_real_query(MY_mysql, tmp_query.c_str(), (unsigned int) tmp_query.size());
        if (Q_RES != 0) {
            if (Q_RES == CR_COMMANDS_OUT_OF_SYNC)
                cout << "errorquery : CR_COMMANDS_OUT_OF_SYNC " << endl;
            if (Q_RES == CR_SERVER_GONE_ERROR)
                cout << "errorquery : CR_SERVER_GONE_ERROR " << endl;
            if (Q_RES == CR_SERVER_LOST)
                cout << "errorquery : CR_SERVER_LOST " << endl;

            LAST_ERROR = mysql_error(MY_mysql);
            if (n_retrycount < n_retry_limit) {
                // RETRY!
                n_retrycount++;
                sleep(1);
                cout << "SLEEP - query retry! " << endl;
                ping();
                return select_sql(tmp_query);
            }
            return false;
        }

        MY_result = mysql_store_result(MY_mysql);
        B_stored_results = true;

        cout << "b8" << endl;
        LAST_affected_rows = (mysql_num_rows(MY_result) + 1); // could return -1
        cout << "b8-1" << endl;

    The program terminates with a "segmentation fault" after printing "b8" and before "b8-1", and Q_RES shows no issue even if the table is corrupted. I would like to know if there is a way to recognize that the table has problems, so that I can then run a mysql repair or mysql check.

    Thanks, Francesco

    Read the article

  • How do you deal with denormalization / secondary indexes in database sharding?

    - by Continuation
    Say I have a "message" table with 2 secondary indexes:

      - recipient_id
      - sender_id

    I want to shard the "message" table by recipient_id. That way, to retrieve all messages sent to a certain recipient I only need to query one shard. But at the same time, I want to be able to make a query that asks for all messages sent by a certain sender, and I don't want to send that query to every single shard of the "message" table.

    One way to do this is to duplicate the data and have a "message_by_sender" table sharded by sender_id. The problem with that approach is that every time a message is sent, I need to insert it into both the "message" and "message_by_sender" tables. But what if, after inserting into "message", the insertion into "message_by_sender" fails? In that case the message exists in "message" but not in "message_by_sender".

    How do I make sure that if a message exists in "message" then it also exists in "message_by_sender", without resorting to 2-phase commit? This must be a very common issue for anyone who shards their databases. How do you deal with it?
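
    One widely used pattern for this dual-write problem (sketched below in Python, with in-memory dicts and a queue standing in for the shards and a durable message queue) is to write the primary copy, then enqueue an idempotent task that retries the secondary write until it succeeds, trading 2-phase commit for eventual consistency:

        import queue

        message_shard = {}            # stand-in for shards keyed by (recipient_id, msg_id)
        message_by_sender_shard = {}  # stand-in for shards keyed by (sender_id, msg_id)
        retry_queue = queue.Queue()   # a real system would use a durable queue

        def send_message(msg_id, sender_id, recipient_id, body):
            # 1) primary write: the source of truth
            message_shard[(recipient_id, msg_id)] = body
            # 2) enqueue the secondary write; durability makes it survive crashes
            retry_queue.put((msg_id, sender_id, body))

        def worker():
            # retries until the secondary write succeeds; keying by msg_id makes
            # the write idempotent, so replays after a failure are harmless
            while True:
                msg_id, sender_id, body = retry_queue.get()
                try:
                    message_by_sender_shard[(sender_id, msg_id)] = body
                except Exception:
                    retry_queue.put((msg_id, sender_id, body))  # try again later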

    Read the article

  • Top 10 collection completion - a monster in-query formula in MySQL?

    - by Andrew Heath
    I've got the following tables:

        User Basic Data (unique):           [userid] [name] [etc]
        User Collection (one to one):       [userid] [game]
        User Recorded Plays (many to many): [userid] [game] [scenario] [etc]
        Game Basic Data (unique):           [game] [total_scenarios]

    I would like to output a table that shows the collection play completion percentage for the Top 10 users, in descending order of %:

        Output Table
        [userid] [collection_completion]
        3        95%
        1        81%
        24       68%
        etc      etc

    In my mind, the calculation sequence for ONE user is:

      1. grab the user's total owned scenarios from User Collection joined with Game Basic Data and COUNT(gbd.total_scenarios)
      2. grab all recorded plays by COUNT(DISTINCT scenario) for that user
      3. divide all recorded plays by total owned scenarios

    So that's 2 queries and a little PHP massage at the end. For a list of users sorted by completion percentage, things get more complicated. I figure I could grab all users' collection totals in one query and all users' recorded plays in another, then do the calcs and sort the final array in PHP, but it seems like overkill to potentially be doing all that for 1000+ users when I only ever want the Top 10.

    Is there a wicked monster query in MySQL that could do all that and LIMIT 10? Or is sticking with PHP handling the bulk of the work the way to go in this case?
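
    A hedged sketch of what such a single query could look like, shown as it might be issued from Python (the table and column names are guesses from the descriptions above; conn is an assumed open MySQL connection). The two derived tables keep the per-game totals and the per-scenario plays from fanning out against each other:

        # one round trip: plays per user / owned scenarios per user, top 10
        TOP10_SQL = """
            SELECT owned.userid,
                   played.scenarios_played / owned.scenarios_owned AS completion
            FROM (
                SELECT c.userid, SUM(g.total_scenarios) AS scenarios_owned
                FROM   user_collection c
                JOIN   game_basic_data g ON g.game = c.game
                GROUP  BY c.userid
            ) AS owned
            JOIN (
                SELECT userid, COUNT(DISTINCT scenario) AS scenarios_played
                FROM   user_recorded_plays
                GROUP  BY userid
            ) AS played ON played.userid = owned.userid
            ORDER  BY completion DESC
            LIMIT  10
        """

        cur = conn.cursor()
        cur.execute(TOP10_SQL)
        for userid, completion in cur.fetchall():
            print(userid, f"{completion:.0%}")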

    Read the article

  • Parsing/Tokenizing a String Containing a SQL Command

    - by Alan Storm
    Are there any open-source libraries (any language, Python/PHP preferred) that will tokenize/parse an ANSI SQL string into its various components? That is, if I had the following string:

        SELECT a.foo, b.baz, a.bar
        FROM TABLE_A a
        LEFT JOIN TABLE_B b ON a.id = b.id
        WHERE baz = 'snafu';

    I'd get back a data structure/object something like:

        //fake PHPish
        $results['select-columns'] = Array[a.foo, b.baz, a.bar];
        $results['tables'] = Array[TABLE_A, TABLE_B];
        $results['table-aliases'] = Array[a => TABLE_A, b => TABLE_B];
        //etc...

    Restated, I'm looking for the code in a database package that teases the SQL command apart so that the engine knows what to do with it. Searching the internet turns up a lot of results on how to parse a string WITH SQL. That's not what I want. I realize I could dig through an open-source database's code to find what I want, but I was hoping for something a little more ready-made. (Although if you know where in the MySQL, PostgreSQL, or SQLite source to look, feel free to pass it along.) Thanks!
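
    One Python option worth checking is the sqlparse library (python-sqlparse), a non-validating SQL tokenizer/parser. A small sketch of what it exposes; note it yields a token tree rather than the ready-made dictionary above, so some walking is still needed:

        # pip install sqlparse
        import sqlparse

        sql = ("SELECT a.foo, b.baz, a.bar FROM TABLE_A a "
               "LEFT JOIN TABLE_B b ON a.id = b.id WHERE baz = 'snafu';")

        statement = sqlparse.parse(sql)[0]   # one Statement per semicolon
        print(statement.get_type())          # 'SELECT'
        for token in statement.tokens:
            if not token.is_whitespace:
                # each token carries its type and raw value; identifiers keep aliases
                print(token.ttype, repr(str(token)))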

    Read the article
