Search Results

Search found 10101 results on 405 pages for 'temporary tables'.

Page 356/405

  • Populating and Using Dynamic Classes in C#/.NET 4.0

    - by Bob
    In our application we're considering using dynamically generated classes to hold a lot of our data. The reason for doing this is that we have customers with tables that have different structures. So you could have a customer table called "DOG" (just making this up) that contains the columns "DOGID", "DOGNAME", "DOGTYPE", etc. Customer #2 could have the same table "DOG" with the columns "DOGID", "DOG_FIRST_NAME", "DOG_LAST_NAME", "DOG_BREED", and so on. We can't create classes for these at compile time as the customer can change the table schema at any time. At the moment I have code that can generate a "DOG" class at run-time using reflection. What I'm trying to figure out is how to populate this class from a DataTable (or some other .NET mechanism) without extreme performance penalties. We have one table that contains ~20 columns and ~50k rows. Doing a foreach over all of the rows and columns to create the collection takes about a minute, which is a little too long. Am I trying to come up with a solution that's too complex or am I on the right track? Has anyone else experienced a problem like this? Creating dynamic classes was the solution that a developer at Microsoft proposed. If we can just populate this collection and use it efficiently I think it could work.

    Read the article

  • Database schema for Product Properties

    - by Chemosh
    As so many people, I'm looking for a Products / Product Properties database schema. I'm using Ruby on Rails and (Thinking) Sphinx for faceted searches.
    Requirements:
    - Adding new product types and their options should not require a change to the database schema.
    - Support faceted searches using Sphinx.
    Solutions I've come across (see Bill Karwin's answer):
    - Option 1: Single Table Inheritance. Not an option really; the table would contain way too many columns.
    - Option 2: Class Table Inheritance. Ruby on Rails caches the database schema on start-up, which means a restart whenever a new type of product is introduced. If you have a sizeable product catalog this could mean hundreds of tables.
    - Option 3: Serialized LOB. Kills being able to do faceted searches without heavy application logic.
    - Option 4: Entity-Attribute-Value. For testing purposes, EAV worked fine. However it could quickly become a mess and a maintenance hell as you add more and more options (e.g. when an option increases the price or delivery time).
    What option should I go with? What other solutions are out there? Is there a silver bullet (ha) I overlooked?
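    A minimal SQL sketch of the EAV variant (option 4), kept here for reference; the table and column names are assumptions, since the question doesn't show a schema:

        -- Hypothetical EAV layout: one row per product, one row per (product, option) pair.
        CREATE TABLE products (
            id           INT AUTO_INCREMENT PRIMARY KEY,
            product_type VARCHAR(50)  NOT NULL,
            name         VARCHAR(255) NOT NULL
        );

        CREATE TABLE product_properties (
            product_id INT          NOT NULL,
            name       VARCHAR(50)  NOT NULL,   -- e.g. 'color', 'delivery_time'
            value      VARCHAR(255),
            PRIMARY KEY (product_id, name),
            FOREIGN KEY (product_id) REFERENCES products (id)
        );

        -- Faceting on a property then costs one join (or one Sphinx attribute) per facet:
        SELECT p.*
        FROM products p
        JOIN product_properties f
          ON f.product_id = p.id AND f.name = 'color' AND f.value = 'red';

    This also illustrates the maintenance concern raised above: every new facet is another join or another indexed Sphinx attribute, which is where EAV starts to hurt.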

    Read the article

  • entity framework insert bug

    - by tmfkmoney
    I found a previous question which seemed related but there's no resolution and it's 5 months old so I've opened my own version: http://stackoverflow.com/questions/1545583/entity-framework-inserting-new-entity-via-objectcontext-does-not-use-existing-e
    When I insert records into my database with the following it works fine for a while and then eventually it starts inserting null values in the referenced field. This typically happens after I do an update on my model from the database although not always after I do an update. I'm using a MySQL database for this. I have debugged the code and the values are being set properly before the save event. They're just not getting inserted properly. I can always fix this issue by re-creating the model without touching any of my code. I have to recreate the entire model, though. I can't just dump the relevant tables and re-add them. This makes me think it doesn't have anything to do with my code but something with the entity framework. Does anyone else have this problem and/or solved it?

        using (var db = new MyModel())
        {
            var stocks = from record in query
                         let ticker = record.Ticker
                         select new
                         {
                             company = db.Companies.FirstOrDefault(c => c.ticker == ticker),
                             price = Convert.ToDecimal(record.Price),
                             date_stamp = Convert.ToDateTime(record.DateTime)
                         };

            foreach (var stock in stocks)
            {
                if (stock.company != null)
                {
                    var price = new StockPrice
                    {
                        Company = stock.company,
                        price = stock.price,
                        date_stamp = stock.date_stamp
                    };
                    db.AddToStockPrices(price);
                }
            }
            db.SaveChanges();
        }

    Read the article

  • ASP.NET SqlDataSource update and create FK reference

    - by William
    The short version: I have a grid view bound to a data source which has a SelectCommand with a left join in it because the FK can be null. On Update I want to create a record in the FK table if the FK is null and then update the parent table with the new record's ID. Is this possible to do with just SqlDataSources? The detailed version: I have two tables: Company and Address. The column Company.AddressId can be null. On my ascx page I am using a SqlDataSource to select a left join of Company and Address and a GridView to display the results. By having the UpdateCommand and DeleteCommand of the SqlDataSource execute two statements separated by a semi-colon I am able to use the GridView's Edit and Delete functionality to update both tables simultaneously. The problem I have is when Company.AddressId is null. What I need to have happen is have the data source create a record in the Address table and then update the Company table with the new Address.ID, then proceed with the update as usual. I would like to do this with just data sources if possible for consistency/simplicity's sake. Is it possible to have my data source do this, or perhaps add a second data source to the page to handle some of this? Once I have that working I can probably figure out how to make it work with the InsertCommand as well, but if you are on a roll and have an answer for how to make that fly as well, feel free to provide it. Thanks.
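    A hedged T-SQL sketch of what the UpdateCommand could contain for the "Company.AddressId is null" case; the column and parameter names (ID, AddressId, Street, City, Name) are assumptions, since the real schema isn't shown:

        -- Create the address row when the company has none, then point the company at it;
        -- otherwise just update the existing address. All names are assumptions.
        IF (SELECT AddressId FROM Company WHERE ID = @CompanyID) IS NULL
        BEGIN
            INSERT INTO Address (Street, City) VALUES (@Street, @City);
            UPDATE Company SET AddressId = SCOPE_IDENTITY() WHERE ID = @CompanyID;
        END
        ELSE
        BEGIN
            UPDATE Address SET Street = @Street, City = @City
            WHERE ID = (SELECT AddressId FROM Company WHERE ID = @CompanyID);
        END;

        UPDATE Company SET Name = @Name WHERE ID = @CompanyID;

    Since SqlDataSource passes the command text through verbatim, a multi-statement batch like this keeps everything in the data source, at some cost in readability.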

    Read the article

  • Database abstraction/adapters for ruby

    - by Stiivi
    What database abstractions/adapters are you using in Ruby? I am mainly interested in data-oriented features, not in those with object mapping (like ActiveRecord or DataMapper). I am currently using Sequel. Are there any other options? I am mostly interested in:
    - a simple, clean and non-ambiguous API
    - data selection (obviously), filtering and aggregation
    - raw value selection without field mapping: SELECT col1, col2, col3 => [val1, val2, val3], not a hash of { :col1 => val1, ... }
    - an API that takes table schemas ('some_schema.some_table') into account in a consistent (and working) way; also reflection for this (get the schema from a table)
    - database reflection: get the list of table columns, their database storage types and perhaps the adapter's abstracted types
    - table creation and deletion
    - being able to work with other tables (insert, update) in a loop enumerating a selection from another table, without having to fetch all records from the table being enumerated
    The purpose is to manipulate data with unknown structure at the time of writing the code, which is the opposite of object mapping, where the structure, or most of it, is usually well known. I do not need the object mapping overhead. What are the options, including back-ends for object-mapping libraries?

    Read the article

  • Can't diagnose my MySQL root user problem

    - by George Crawford
    Hi all, I have a problem with the MySQL root user in my MySQL setup, and I just can't for the life of me work out how to fix it. It seems that I have somehow messed up the root user, and my access to databases is now very erratic. For reference, I'm using MAMP on OS X to provide the MySQL server. I'm not sure how much that matters though - I'd guess that whatever I've done will require a command-line fix to solve it. I can start MySQL using MAMP as usual, and access databases using the 'standard' users I have created for my PHP apps. However, the root user, which I use in my MySQL GUI client and also in phpMyAdmin, can only access the "information_schema" database, as well as two I have created manually and presumably (and mistakenly) left permissions wide open for. My 15 or so other databases cannot be accessed by the root user. When I load up phpMyAdmin, the home screen says: "Create new database: No Privileges". I certainly did at some stage change my root user's password using the MAMP dialog, but I don't remember if I did anything else which might have caused this problem. I've tried changing the password again, and there seems to be no change in the issue. I've also tried resetting the root password using the command line, including starting mysql manually with --skip-grant-tables and then flushing privs, but again, nothing seems to fix the issue. I've come to the end of my ideas, and would very much appreciate some step-by-step advice and diagnosis from one of the experts here! Many thanks for your help.
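    One hedged diagnostic sketch in SQL, assuming the server has been started with --skip-grant-tables so the statements run regardless of the broken account; the 'root'@'localhost' account name is an assumption (MAMP normally connects as root on localhost):

        -- Re-enable the grant tables first (needed under --skip-grant-tables
        -- before GRANT statements are accepted):
        FLUSH PRIVILEGES;

        -- See what the grant tables actually say about root:
        SELECT user, host FROM mysql.user WHERE user = 'root';
        SHOW GRANTS FOR 'root'@'localhost';

        -- If the global privileges are gone, restore them:
        GRANT ALL PRIVILEGES ON *.* TO 'root'@'localhost' WITH GRANT OPTION;
        FLUSH PRIVILEGES;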

    Read the article

  • SQL connection to database repeating

    - by user175084
    OK, now I am using the SQL database to get the values from different tables. So I make the connection and get the values like this:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();
        SqlCommand sqlCmd = new SqlCommand("SELECT * FROM Machines", connection);
        SqlDataAdapter sqlDa = new SqlDataAdapter(sqlCmd);
        sqlCmd.Parameters.AddWithValue("@node", node);
        sqlDa.Fill(dt);
        connection.Close();

    So this is one query on the page, and I am calling many other queries on the page. Do I need to open and close the connection every time? Also, if not, this portion is common to all of them:

        DataTable dt = new DataTable();
        SqlConnection connection = new SqlConnection();
        connection.ConnectionString = ConfigurationManager.ConnectionStrings["XYZConnectionString"].ConnectionString;
        connection.Open();

    Can I put it in one function and call that instead? The code would look cleaner. I tried doing that but I get errors like: "Connection does not exist in the current context". Any suggestions? Thanks.

    Read the article

  • Do all the HTML5 storage systems work together ?

    - by azera
    While there is a lot of good stuff in HTML5, one thing I don't get is the redundant storage mechanisms. First there are localStorage and sessionStorage, which are key-value stores; one is for one instance of the app ("one tab"), and the other works for all the instances of that application so they can share data. Both are saved when you close your browser and have a limited size (usually 5MB). That's great, and everything would be nice if we stopped there. But then there is the "Web SQL Database", which has the same security system as localStorage, the same size limit, the same everything, except it works like/is SQLite, with tables and SQL syntax and all of that. And the bummer is, they don't work on the same data at all! This is not two ways to access your data; this is really two storages for every HTML5 app out there (not created by default, yes, but still you see my point). What I would like to know is: is there a reason for both of these mechanisms to exist at the same time? Or did they just look at the SQL and NoSQL movements to pick the best and then go "screw it, let's add both!"? Why not implement local/session storage as a table inside the Web SQL DB?

    Read the article

  • php and mysql site design question

    - by Jacksta
    I am trying to build a website with MySQL and PHP. This is the first site I have attempted, so I want to write a little plan and get some feedback. The site allows users to add some text in a text field as a "comment". Once the comment has been entered into the site, it is added to the database, where it can be voted for by other users. When a new comment has been added to the database, it needs to create a new page, e.g. www.xxxxx.com/commentname or www.xxxxxx.com/?id=99981. There will be a list of "comments" in the database along with the number of votes for each comment. The home page will have two functions:
    1) Allow users to add a "comment"
    2) Display two tables, each with 20 rows, containing the most "popular comments" and "recent comments"
    Each comment will generate its own page where the comment will be displayed. Here users can read the comment and vote for the comment if they wish. Please help me out by explaining how to do the following:
    - Generate a new page whenever a comment is added to the database
    - Add a vote to the vote count in the comment database
    - Display the top 20 most popular comments as per number of votes
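    A minimal SQL sketch of the comment storage and the three queries involved; every name here is an assumption, since the question doesn't define a schema:

        -- Hypothetical comments table.
        CREATE TABLE comments (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            body       TEXT     NOT NULL,
            votes      INT      NOT NULL DEFAULT 0,
            created_at DATETIME NOT NULL
        );

        -- Home page, table 1: 20 most popular comments.
        SELECT id, body, votes FROM comments ORDER BY votes DESC, id LIMIT 20;

        -- Home page, table 2: 20 most recent comments.
        SELECT id, body, votes FROM comments ORDER BY created_at DESC, id LIMIT 20;

        -- Register a vote for one comment.
        UPDATE comments SET votes = votes + 1 WHERE id = 99981;

    The "new page per comment" part usually isn't a physical page at all: one PHP script (e.g. a hypothetical comment.php?id=99981, or a rewritten /commentname URL) looks the row up by id and renders it.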

    Read the article

  • ASP.NET MVC - Wrong redirecting, how to debug?

    - by Xorty
    I am stuck with a redirecting problem in an ASP.NET MVC project. I have mapped tables via LINQ to SQL and each has a unique ID as primary key. I am implementing the 'CREATE' functionality. Basically, after a new value is added into the SQL table (which means I pressed the Save button), I want to be redirected to the Details of this freshly added item. Here's a little code showing how I am doing it:

        [AcceptVerbs(HttpVerbs.Post), Authorize]
        public ActionResult Create(Item item)
        {
            ....
            return RedirectToAction("Details", new { id = item.ItemID });

    The trouble is, I am never redirected to the Details view (I have a Details.aspx view for items). When I check the Call Hierarchy in Visual Studio (2010 Pro), the hierarchy is indeed a little strange, like this:

        RedirectToAction(string, object)
            Calls To 'RedirectToAction'
                Create
                    Calls To Create (no results)
                    Calls From Create (methods of the created instance; from there I'll get back to 'RedirectToAction' and to 'Calls to Create' and 'Calls From Create' etc. etc. - loop)
                Edit
            Calls From 'RedirectToAction'
                Not supported

    I am looking for some tools, or more specifically the know-how (since VS probably has some tools), to debug this kind of situation.
    PS: routing is the default: "{controller}/{action}/{id}". Thanks

    Read the article

  • I have created a PHP script and I am lacking to extract the primary key, I have given flow below, pl

    - by Parth
    I am using a MySQL DB, working with Joomla. My requirement is tracking activity like insert/update/delete on any table and storing it in another audit table using triggers, i.e. I am doing auditing. The DB's table structure: a few tables don't have any PK nor an auto-increment key. The flow of my script is:
    - I fetch all tables from the DB.
    - I check whether the table has any trigger or not. If yes, it moves on to check the next table, and so on.
    - If it doesn't find any trigger, it creates the triggers for the table, such that:
      - it first checks whether the table has a primary key (for inserting into the tracking audit table for every change made); if it has a primary key, it uses it further in the creation of the trigger;
      - if it doesn't find any PK, it proceeds with creating the trigger without inserting any id in the audit table.
    Now my problem is: I need the PK every time so that I can record the id of the row in whichever table the insert/update/delete is performed on, so that I can later use this audit track table to replicate the changes in the production DB. But as I mentioned earlier, some tables don't have a PK/auto-increment column, so what should I do to get the particular id on which the change is done? Please guide me... GEEKS!!!
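    A minimal MySQL trigger sketch for the "table has a PK" case, with assumed names (jos_example as the audited table with PK column id, audit_log as the audit table); for tables without any PK, a common fallback is to log the old and new column values instead of a row id:

        -- Hypothetical audit table.
        CREATE TABLE audit_log (
            id         INT AUTO_INCREMENT PRIMARY KEY,
            table_name VARCHAR(64) NOT NULL,
            row_id     INT NULL,              -- NULL when the audited table has no PK
            action     VARCHAR(10) NOT NULL,
            changed_at DATETIME NOT NULL
        );

        DELIMITER //
        CREATE TRIGGER jos_example_after_update
        AFTER UPDATE ON jos_example
        FOR EACH ROW
        BEGIN
            INSERT INTO audit_log (table_name, row_id, action, changed_at)
            VALUES ('jos_example', NEW.id, 'UPDATE', NOW());
        END//
        DELIMITER ;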

    Read the article

  • SQL Grouping with multiple joins combining results incorrectly

    - by Matt
    Hi, I'm having trouble with my query combining records when it shouldn't. I have two tables, Authors and Publications, related by PublicationID in a many-to-many relationship: each author can have many publications and each publication has many authors. I want my query to return every publication for a set of authors and include the IDs of the other authors that have contributed to the publication, grouped into one field. (I am working with MySQL.) I have tried to picture it below:

        Table: authors
        AuthorID | PublicationID
        1        | 123
        1        | 456
        2        | 123
        2        | 789
        3        | 123
        3        | 456

        Table: publications
        PublicationID | PublicationName
        123           | A
        456           | B
        789           | C

    I want my result set to be the following:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,2,3
        1        | 456           | B               | 1,3
        2        | 123           | A               | 1,2,3
        2        | 789           | C               | 2
        3        | 123           | A               | 1,2,3
        3        | 456           | B               | 1,3

    This is my query:

        SELECT Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(TRIM(Author2.AuthorID) ORDER BY Author2.AuthorID ASC) AS 'AuthorsAll'
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID = "1" OR Author1.AuthorID = "2" OR Author1.AuthorID = "3"
        GROUP BY Author2.PublicationID

    But it returns the following instead:

        AuthorID | PublicationID | PublicationName | AllAuthors
        1        | 123           | A               | 1,1,1,2,2,2,3,3,3
        1        | 456           | B               | 1,1,3,3
        2        | 789           | C               | 2

    It does deliver the desired output when there is only one AuthorID in the WHERE statement. I have not been able to figure it out; does anyone know where I'm going wrong?
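    A hedged sketch of one likely fix: the posted query groups only by Author2.PublicationID, so the rows for the three different Author1 values collapse into a single group per publication; grouping by the requesting author as well (and de-duplicating the concatenation) gives one row per author/publication pair. Table and column names are taken from the question:

        SELECT Author1.AuthorID, Publications.PublicationID, Publications.PubName,
               GROUP_CONCAT(DISTINCT Author2.AuthorID ORDER BY Author2.AuthorID ASC) AS AuthorsAll
        FROM Authors AS Author1
        LEFT JOIN Authors AS Author2 ON Author1.PublicationID = Author2.PublicationID
        INNER JOIN Publications ON Author1.PublicationID = Publications.PublicationID
        WHERE Author1.AuthorID IN ('1', '2', '3')
        GROUP BY Author1.AuthorID, Publications.PublicationID;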

    Read the article

  • ASP.Net MVC - how can I easily serialize query results to a database?

    - by Mortanis
    I've been working on a little property search engine while I learn ASP.Net MVC. I've gotten the results from various property database tables and sorted them into a master generic property response. The search form is passed via Model Binding and works great. Now, I'd like to add pagination. I'm returning the chunk of properties for the current page with .Skip() and .Take(), and that's working great. I have a SearchResults model that has the paged result set and various other data like nextPage and prevPage. Except, I no longer have the original form of course to pass to /Results/2. Previously I'd have just hidden a copy of the form and done a POST each time, but it seems inelegant. I'd like to serialize the results to my MS SQL database and return a unique key for that results set - this also helps with a "Send this query to a friend!" link. Killing two birds with one stone. Is there an easy way to take an IQueryable result set that I have, serialize it, stick it into the DB, return a unique key and then reverse the process with said key? I'm using Linq to SQL currently on a MS SQL Express install, though in production it'll be on MS SQL 2008.

    Read the article

  • NSNotification vs. Delegate Protocols?

    - by jr
    I have an iPhone application which basically is getting information from an API (in XML, but maybe JSON eventually). The result objects are typically displayed in view controllers (tables mainly). Here is the architecture right now. I have NSOperation classes which fetch the different objects from the remote server. Each of these NSOperation classes will take a custom delegate, which will be sent the resulting objects as they are parsed, and then finally a method when no more results are available. So the protocol for the delegates will be something like:

        - (void)ObjectTypeResult:(ObjectType *)result;
        - (void)ObjectTypeNoMoreResults;

    I think the solution works well, but I do end up with a bunch of delegate protocols around, and then my view controllers have to implement all these delegate methods. I don't think it's that bad, but I'm always on the lookout for a better design. So I'm thinking about using NSNotifications to remove the use of the delegates. I could include the object in the userInfo part of the notification and just post objects as they are received, and then a final event when no more are available. Then I could just have one method in each view controller to receive all the data, even when using multiple objects in one controller. So, can someone share with me some pros/cons of each approach? Should I consider refactoring my code to use events rather than the delegates? Is one better than the other in certain situations? In my scenario I'm really not looking to receive notifications in multiple places, so maybe the protocol-based delegates are the way to go. Thanks!

    Read the article

  • Why is using a common-lookup table to restrict the status of an entity wrong?

    - by FreshCode
    According to Five Simple Database Design Errors You Should Avoid by Anith Sen, using a common-lookup table to store the possible statuses for an entity is a common mistake. Why is this wrong? I disagree that it's wrong, citing the example of jobs at a repair service with many possible statuses that generally have a natural flow, e.g.:
    - Booked In
    - Assigned to Technician
    - Diagnosing problem
    - Waiting for Client Confirmation
    - Repaired & Ready for Pickup
    - Repaired & Couriered
    - Irreparable & Ready for Pickup
    - Quote Rejected
    Arguably, some of these statuses can be normalised to tables like Couriered Items, Completed Jobs and Quotes (with Pending/Accepted/Rejected statuses), but that feels like unnecessary schema complication. Another common example would be order statuses that restrict the status of an order, e.g.:
    - Pending
    - Completed
    - Shipped
    - Cancelled
    - Refunded
    The status titles and descriptions are in one place for editing and are easy to scaffold as a drop-down with a foreign key for dynamic data applications. This has worked well for me in the past. If the business rules dictate the creation of a new order status, I can just add it to the OrderStatus table without rebuilding my code.
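    A minimal SQL sketch of the pattern being defended, with assumed names (OrderStatus as the lookup table, Orders as the entity it restricts):

        CREATE TABLE OrderStatus (
            StatusID    INT PRIMARY KEY,
            Title       VARCHAR(50)  NOT NULL,
            Description VARCHAR(255) NULL
        );

        CREATE TABLE Orders (
            OrderID  INT PRIMARY KEY,
            StatusID INT NOT NULL,
            FOREIGN KEY (StatusID) REFERENCES OrderStatus (StatusID)
        );

        -- A new business status is then a data change, not a schema change:
        INSERT INTO OrderStatus (StatusID, Title) VALUES (6, 'Awaiting Payment');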

    Read the article

  • Can I use foreign key restrictions to return meaningful UI errors with PHP

    - by Shane
    I want to start by saying that I am a big fan of using foreign keys, and have a tendency to use them even on small projects to keep my database from being filled with orphaned data. On larger projects I end up with gobs of keys which end up covering upwards of 8 - 10 layers of data. I want to know if anyone could suggest a graceful way of handling 'expected errors' from the MySQL database, in a way that lets me construct meaningful messages for the end user. I will explain 'expected errors' with an example. Let's say I have a set of tables used for basic discussions:
    - discussion
    - questions
    - responses
    - users
    Hierarchically they would probably look something like this:
    users
      discussion
        questions
          responses
    When I attempt to delete a user, the FKs will check discussions, and if any discussions exist the deletion is restricted; deleting a discussion checks questions, deleting questions checks responses. An 'expected error' in this case would be attempting to delete a user--unless they are newly created, I can anticipate that one or more foreign keys will fail, causing an error. What I WANT to do is to catch that error on deletion and be able to tell the end user something like 'We're sorry, but all discussions must be removed before you can delete this user...'. Now I know I can keep and maintain matching arrays in PHP and map specific errors to messages, but that is messy and prone to becoming stagnant, or I could manually run a set of selects prior to attempting the deletion, but then I am doing just as much work as without using FKs. Any help here would be greatly appreciated, or if I am just looking at this completely wrong then please let me know. On a side note, I generally use CodeIgniter for my application development, so if that would open up an avenue through that framework please consider that in your answers. Thanks in advance.
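    A small SQL sketch of one common approach: give each foreign key a meaningful constraint name, so that when MySQL rejects the DELETE (error 1451) the application can match the constraint name in the error message and substitute its own wording. All names here are assumptions:

        CREATE TABLE users (
            id INT AUTO_INCREMENT PRIMARY KEY
        ) ENGINE=InnoDB;

        CREATE TABLE discussions (
            id      INT AUTO_INCREMENT PRIMARY KEY,
            user_id INT NOT NULL,
            CONSTRAINT fk_discussion_user FOREIGN KEY (user_id)
                REFERENCES users (id) ON DELETE RESTRICT
        ) ENGINE=InnoDB;

        -- DELETE FROM users WHERE id = 1;
        -- fails with: ERROR 1451 ... Cannot delete or update a parent row:
        -- a foreign key constraint fails (`db`.`discussions`, CONSTRAINT `fk_discussion_user` ...)
        -- The application can map 'fk_discussion_user' to
        -- "We're sorry, but all discussions must be removed before you can delete this user."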

    Read the article

  • Foreign key relationships in Entity Framework

    - by Anders Svensson
    I'm trying to add an object created from Entity Data Model classes. I have a table called Users, which has turned into a User EDM class. And I also have a table Pages, which has become a Page EDM class. These tables have a foreign key relationship, so that each page is associated with many users. Now I want to be able to add a page, but I can't get how to do it. I get a NullReferenceException on Users below. I'm still rather confused by all this, so I'm sure it's a simple error, but I just can't see how to do it. Also, by the way, the compiler requires that I set PageID in the object initializer, even though this field is set to be an automatic id in the table. Am I doing it right just setting it to 0, expecting it to be updated automatically in the table when saved, or how should I do that? Any help appreciated! The method in question:

        private Page GetPage(User currentUser)
        {
            string url = _request.ServerVariables["url"].ToLower();
            var userPages = from p in _context.PageSet
                            where p.Users.UserID == currentUser.UserID
                            select p;
            var existingPage = userPages.FirstOrDefault(e => e.Url == url); //Could be combined with above, but hard to read?
            if (existingPage != null)
                return existingPage;

            Page page = new Page()
            {
                Count = 0,
                Url = _request.ServerVariables["url"].ToLower(),
                PageID = 0, //Only initial value, changed later?
            };
            _context.AddToPageSet(page);
            page.Users.UserID = currentUser.UserID; //Here's the problem...
            return page;
        }

    Read the article

  • Binding value for NSTableView, but tooltip gets set as well

    - by Mark
    I've set up an NSTableView in Interface Builder to be populated from an NSArray. Each value of the array represents one row in the table. The value is bound correctly, but as a side effect, the table cell's tooltip is set to the string representation of the bound object. In my case, the NSArray contains NSDictionary objects and the tooltip looks like it could be the [... description] output of that dictionary. Very ugly... I don't want the tooltip to be set at all. I have other tables that have plain NSString values bound to them and they don't have a tooltip set automatically. Is there some Interface Builder magic going on? I tried to start with a blank project - same problem. I should add that the table cell is a custom implementation of NSTextFieldCell that uses an NSButtonCell instance to draw an image and a label into the table. The values are retrieved from the dictionary bound as the value. Why is the tooltip set when I only bind the "value" attribute? Thanks in advance!

    Read the article

  • How to optimize a SQL query to make it faster

    - by user502083
    Hello everyone: I have a very simple, small database. Two of the tables are:
    - Node (Node_ID, Node_name, Node_Date): Node_ID is the primary key
    - Citation (Origin_Id, Target_Id): PRIMARY KEY (Origin_Id, Target_Id), each a FK into Node
    Now I write a query that first finds all citations whose Origin_Id has a specific date, and then I want to know the target dates of these records. I'm using sqlite in Python; the Node table has 3000 records and Citation has 9000 records, and my query looks like this in a function:

        def cited_years_list(self, date):
            c = self.cur
            try:
                c.execute("""select n.Node_Date, count(*) from Node n
                             INNER JOIN
                                 (select c.Origin_Id AS Origin_Id, c.Target_Id AS Target_Id,
                                         n.Node_Date AS Date from CITATION c
                                  INNER JOIN NODE n ON c.Origin_Id=n.Node_Id
                                  where CAST(n.Node_Date as INT)={0}) VW
                             ON VW.Target_Id=n.Node_Id
                             GROUP BY n.Node_Date;""".format(date))
                cited_years = c.fetchall()
                self.conn.commit()
                print('Cited Years are : \n ', str(cited_years))
            except Exception as e:
                print('Cited Years retrival failed ', e)
            return cited_years

    Then I call this function for some specific years, but it's crazy slowwwwwwwww :( (around 1 min for a specific year). Although my query works fine, it is slow. Would you please give me a suggestion to make it faster? I'd appreciate any idea about optimizing this query :) I should also mention that I have indices on Origin_Id and Target_Id, so the inner join should be pretty fast, but it's not!!!
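    A hedged, SQL-only sketch of the usual suspect: CAST(n.Node_Date as INT) in the WHERE clause keeps SQLite from using any index on Node_Date (and there may be no such index at all). Indexing the date column and filtering on it directly tends to help; column names come from the question, the index name is an assumption:

        -- Index the column the filter runs on:
        CREATE INDEX idx_node_date ON Node (Node_Date);

        -- Equivalent query, filtering the origin node's date without the CAST:
        SELECT n.Node_Date, COUNT(*)
        FROM Node n
        INNER JOIN Citation c ON c.Target_Id = n.Node_Id
        INNER JOIN Node origin ON c.Origin_Id = origin.Node_Id
        WHERE origin.Node_Date = :date
        GROUP BY n.Node_Date;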

    Read the article

  • how does MySQL implement the "group by"?

    - by user188916
    I read in the MySQL Reference Manual that when it can make use of an index, it just does an index scan; otherwise it will create tmp tables and do things like filesort. I also read in another article that the GROUP BY result is sorted by the group-by columns by default, and that if an "order by null" clause is added, it won't do the filesort. The difference can be seen in the EXPLAIN output. So my problem is: what is the difference between a GROUP BY clause with "order by null" and one without it? I tried to use profiling to see what MySQL does in the background, and only see results like this.

    Result for the GROUP BY clause without "order by null":

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000048 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000109 |
        | Sorting result       | 0.000023 |
        | Sending data         | 0.000027 |

    Result for the clause with "order by null" (note the missing "Sorting result" step):

        | preparing            | 0.000016 |
        | Creating tmp table   | 0.000052 |
        | executing            | 0.000009 |
        | Copying to tmp table | 0.000114 |
        | Sending data         | 0.000028 |

    So I guess that when "order by null" is added, MySQL does not use the filesort algorithm; maybe when it creates the tmp table it uses an index as well, and then uses the index to do the group-by operation; when completed, it just reads the result from the table rows and does not sort it. But my original opinion was that MySQL could use quicksort to sort the items and then do the group by, so the result would be sorted as well. Any opinion appreciated, thanks.
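    A small sketch of the two forms being compared, on an assumed table t(a, b) with an index on a; in MySQL 5.x the first form implies an ORDER BY on the grouping column, which the second form suppresses:

        -- Implicit sort on the grouping column (EXPLAIN may show "Using filesort"):
        SELECT a, COUNT(*) FROM t GROUP BY a;

        -- Same grouping, no implicit sort:
        SELECT a, COUNT(*) FROM t GROUP BY a ORDER BY NULL;

        -- Compare the plans:
        EXPLAIN SELECT a, COUNT(*) FROM t GROUP BY a;
        EXPLAIN SELECT a, COUNT(*) FROM t GROUP BY a ORDER BY NULL;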

    Read the article

  • Why does grails use hsqldb when I ask for mysql?

    - by John
    I'm following the racetrack example from Jason Rudolph's book at InfoQ, using grails-1.2.1. I got up to the part where I was to switch from hsqldb to mysql. I think I've deleted every reference to hsqldb in the DataSource.groovy file, but I get an exception and the stack trace shows it's still using hsqldb.

    DataSource.groovy:

        dataSource {
            boolean pooled = true
            String driverClassName = "com.mysql.jdbc.Driver"
            String url = "jdbc:mysql://localhost/dfpc2"
            String dbCreate = "create"
            String username = "dfpc2"
            String password = "dfpc2"
            dialect = org.hibernate.dialect.MySQL5InnoDBDialect
        }
        hibernate {
            cache.use_second_level_cache=true
            cache.use_query_cache=true
            cache.provider_class='net.sf.ehcache.hibernate.EhCacheProvider'
        }
        // environment specific settings
        environments {
            development { }
            test { }
            production { }
        }

    When I grails run-app, it all starts up with no errors. I can navigate to the home page. But when I click on one of the links, I get a stack trace:

        java.sql.SQLException: Table not found in statement [select this_.id as id0_0_, this_.version as version0_0_, this_.name as name0_0_, this_.variant as variant0_0_ from domainObject this_ limit ?]
            at org.hsqldb.jdbc.Util.throwError(Unknown Source)
            at org.hsqldb.jdbc.jdbcPreparedStatement.<init>(Unknown Source)
            at org.hsqldb.jdbc.jdbcConnection.prepareStatement(Unknown Source)
            at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy:13)
            at dfpc2.domainObjectController$_closure2.doCall(script1269434425504953491149.groovy)
            at java.lang.Thread.run(Thread.java:619)

    My MySQL database shows no tables created. (I don't think Grails is connected to mysql yet.) Things I've checked:
    - mysql-connector-java-5.1.6.jar is in the lib directory.
    - I've tried grails clean.
    - I tried putting the dataSource info in the development environment (I haven't graduated to test or prod yet), but it seemed to make no difference. The stdout shows I'm using the development env.
    - I've googled for solutions, but the only solution I've found is for when people don't change the test or production environments.

    Read the article

  • Managing libraries and imports in a programming language

    - by sub
    I've created an interpreter for a stupid programming language in C++, and the whole core structure is finished (tokenizer, parser, interpreter including symbol tables, core functions, etc.). Now I have a problem with creating and managing the function libraries for this interpreter (I'll explain what I mean by that later). Currently my core function handler is horrible:

        // Simplified version
        myLangResult SystemFunction( name, argc, argv )
        {
            if ( name == "print" )
            {
                if ( argc < 1 ) { Error('blah'); }
                cout << argv[ 0 ];
            }
            else if ( name == "input" )
            {
                if ( argc < 1 ) { Error('blah'); }
                string res;
                getline( cin, res );
                SetVariable( argv[ 0 ], res );
            }
            else if ( name == "exit" )
            {
                exit( 0 );
            }

    And now think of each else if being 10 times more complicated and there being 25 more system functions. Unmaintainable, feels horrible, is horrible. So I thought: how do I create some sort of libraries that contain all the functions and, when they are imported, initialize themselves and add their functions to the symbol table of the running interpreter? However, this is the point where I don't really know how to go on. What I want to achieve is that there is, e.g., an (extern?) string library for my language, e.g. string, and it is imported from within a program in that language, for example:

        import string
        myString = "abcde"
        print string.at( myString, 2 )   # output: c

    My problems:
    - How to separate the function libs from the core interpreter and load them?
    - How to get all their functions into a list and add it to the symbol table when needed?
    What I was thinking of doing: at the start of the interpreter, as all libraries are compiled with it, every single function calls something like RegisterFunction( string namespace, myLangResult (*functionPtr) ); which adds itself to a list. When import X is then called from within the language, the list built with RegisterFunction is added to the symbol table. Disadvantages that spring to mind: all libraries are directly in the interpreter core, size grows, and it will definitely slow it down.

    Read the article

  • nHibernate storage of an object with self referencing many children and many parents

    - by AdamC
    I have an object called MyItem that references children in the same item. How do I set up an nhibernate mapping file to store this item?

        public class MyItem
        {
            public virtual string Id { get; set; }
            public virtual string Name { get; set; }
            public virtual string Version { get; set; }
            public virtual IList<MyItem> Children { get; set; }
        }

    So roughly the hbm.xml would be:

        <class name="MyItem" table="tb_myitem">
          <id name="Id" column="id" type="String" length="32">
            <generator class="uuid.hex" />
          </id>
          <property name="Name" column="name" />
          <property name="Version" column="version" />
          <bag name="Children" cascade="all-delete-orphan" lazy="false">
            <key column="children_id" />
            <one-to-many class="MyItem" not-found="ignore"/>
          </bag>
        </class>

    This wouldn't work I don't think. Perhaps I need to create another class, say MyItemChildren, and use that as the Children member and then do the mapping in that class? This would mean having two tables. One table holds the MyItem and the other table holds references from my item. NOTE: A child item could have many parents.
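    Since a child item can have many parents, the underlying storage is a many-to-many from the table to itself, which needs a separate link table rather than a children_id column on tb_myitem; a minimal SQL sketch, with the link table name assumed:

        CREATE TABLE tb_myitem (
            id      VARCHAR(32) PRIMARY KEY,
            name    VARCHAR(255),
            version VARCHAR(50)
        );

        CREATE TABLE tb_myitem_children (
            parent_id VARCHAR(32) NOT NULL,
            child_id  VARCHAR(32) NOT NULL,
            PRIMARY KEY (parent_id, child_id),
            FOREIGN KEY (parent_id) REFERENCES tb_myitem (id),
            FOREIGN KEY (child_id)  REFERENCES tb_myitem (id)
        );

    On the mapping side this would correspond to a many-to-many collection over that link table instead of the one-to-many bag shown above.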

    Read the article

  • Combining two-part SQL query into one query

    - by user332523
    Hello, I have a SQL query that I'm currently solving by doing two queries. I am wondering if there is a way to do it in a single query that makes it more efficient. Consider two tables, Transactions and Transaction_Entries, defined below:

        Transactions
        - id
        - reference_number (varchar)

        Transaction_Entries
        - id
        - account_id
        - transaction_id (references the Transactions table)

    Notes: there are multiple transaction entries per transaction. Some transactions are related, and will have the same reference_number string. To get all transaction entries for account X, I would do:

        SELECT E.*, T.reference_number
        FROM Transaction_Entries E
        JOIN Transactions T ON (E.transaction_id = T.id)
        WHERE E.account_id = X

    The next part is the hard part. I want to find all related transactions, regardless of the account id. First I make a list of all the unique reference numbers I found in the previous result set. Then for each one, I query all the transactions that have that reference number. Assume that I hold all the rows from the previous query in PreviousResultSet:

        UniqueReferenceNumbers = GetUniqueReferenceNumbers(PreviousResultSet)   // in Java
        foreach R in UniqueReferenceNumbers                                     // in Java
            SELECT * FROM Transaction_Entries
            WHERE transaction_id IN (SELECT id FROM Transactions WHERE reference_number = R)

    Any suggestions how I can put this into a single efficient query?
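    A hedged sketch of one way to collapse the two steps into a single statement, self-joining through Transactions on reference_number; table and column names come from the question:

        -- All entries of any transaction that shares a reference_number with a
        -- transaction account X has entries in.
        SELECT DISTINCT E2.*, T2.reference_number
        FROM Transaction_Entries E1
        JOIN Transactions T1 ON E1.transaction_id = T1.id
        JOIN Transactions T2 ON T2.reference_number = T1.reference_number
        JOIN Transaction_Entries E2 ON E2.transaction_id = T2.id
        WHERE E1.account_id = X;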

    Read the article

  • Exit and rollback everything in script on error

    - by Jan W.
    Hey guys! I'm in a bit of a pickle here. I have a T-SQL script that does a lot of database structure adjustments, but it's not really safe to just let it go through when something fails. To make things clear: I am using MS SQL 2005, and it's NOT a stored procedure, just a script file (.sql). What I have is something in the following order:

        BEGIN TRANSACTION

        ALTER Stuff
        GO
        CREATE New Stuff
        GO
        DROP Old Stuff
        GO

        IF @@ERROR != 0
        BEGIN
            PRINT 'Errors Found ... Rolling back'
            ROLLBACK TRANSACTION
            RETURN
        END
        ELSE
            PRINT 'No Errors ... Committing changes'
            COMMIT TRANSACTION

    ...just to illustrate what I'm working with; I can't go into specifics now. The problem: when I introduce an error (to test if things get rolled back), I get a statement that the ROLLBACK TRANSACTION could not find a corresponding BEGIN TRANSACTION. This leads me to believe that something went REALLY wrong and the transaction was already killed. What I also noticed is that the script didn't fully quit on error and thus DID try to execute every statement after the error occurred. (I noticed this when new tables showed up when I wasn't expecting them, because it should have rolled back.) Any help in this department is welcome; if more specifics are needed, ask! Greetz
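    A hedged sketch of the usual SQL Server 2005 shape for this: @@ERROR only reflects the most recent statement, and an error in one batch does not stop the batches after the next GO, which is why statements keep executing after the failure. Wrapping the work in a single batch with XACT_ABORT and TRY/CATCH is one common fix (the statements inside are placeholders):

        SET XACT_ABORT ON;   -- any run-time error dooms the transaction and rolls it back
        BEGIN TRY
            BEGIN TRANSACTION;
            -- ALTER / CREATE / DROP statements go here (placeholders)
            COMMIT TRANSACTION;
            PRINT 'No Errors ... Committing changes';
        END TRY
        BEGIN CATCH
            IF @@TRANCOUNT > 0
                ROLLBACK TRANSACTION;
            PRINT 'Errors Found ... Rolling back';
        END CATCH;

    Note that TRY/CATCH cannot span GO separators, so the batch boundaries in the original script would have to go, or the script be run in SQLCMD mode with :on error exit.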

    Read the article
