Search Results

Search found 17634 results on 706 pages for 'django multi db'.

Page 633/706

  • Best practices for Magento deployment

    - by Spongeboy
    I am looking at setting up a deployment process for a highly customised Magento site, and was wondering how other people do this. I will be setting up dev, UAT and prod environments. All the Magento files will be in source control (SVN). At this stage I can't see any requirement for changing the DB, so the three databases will be maintained manually. Specifically:

    How do you apply Magento upgrades? (Individually in each environment, on dev and then roll out, or just give up on upgrades?)
    Which files/folders do you leave alone in each environment (e.g. magento/app/etc/local.xml)?
    Do you restrict developers to editing specific files/folders?
    Do you restrict theme designers to editing specific files/folders?
    How do you manage database changes?

    Theme designer files/folders - designers can be restricted to editing the following folders:

        app/design/frontend/your_interface/your_theme/layout/
        app/design/frontend/your_interface/your_theme/template/
        app/design/frontend/your_interface/your_theme/locale/
        skin/frontend/your_interface/your_theme/

    Extension developer files/folders - extension developers can edit the following folders/files:

        /app/code/local
        /app/etc/modules/<Namespace>_<Module>.xml

    Database environment management - as the store's base URL is stored in the database, you cannot just copy databases between environments. Options include:

    Overriding the base URL in PHP (there is a blog article on setting up dev and staging databases this way).
    Changing the base URL in the database after copying. (Where is this stored?)
    Doing a mysqldump or backup, then doing a replace on the URL in the SQL file.
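
    A minimal sketch of that last option in Python, assuming the dump fits in memory; the file names and URLs are illustrative. One caveat: a plain textual replace can corrupt PHP-serialized values (which embed string lengths) if the two URLs differ in length, so it is safest when they do not.

        # rewrite_base_url.py - naive base-URL swap in a SQL dump (illustrative)
        OLD_URL = "http://www.example.com/"   # assumed prod base URL
        NEW_URL = "http://dev.example.com/"   # assumed dev base URL

        with open("prod_dump.sql", "r", encoding="utf-8") as src:
            dump = src.read()

        # Magento keeps web/unsecure/base_url and web/secure/base_url in
        # core_config_data; a global replace also catches other occurrences.
        with open("dev_dump.sql", "w", encoding="utf-8") as dst:
            dst.write(dump.replace(OLD_URL, NEW_URL))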


  • RestSharp post object to WCF

    - by steve
    I'm having an issue posting an object to my WCF REST web service. On the WCF side I have the following:

        [WebInvoke(UriTemplate = "", Method = "POST")]
        public void Create(MyObject myObject)
        {
            // save some stuff to the db
        }

    When I'm debugging, this never gets hit - it does, however, get hit when I remove the parameter, so I'm guessing I've done something wrong on the RestSharp side of things. Here's my code for that part:

        var client = new RestClient(ApiBaseUri);
        var request = new RestRequest(Method.POST);
        request.RequestFormat = DataFormat.Xml;
        request.AddBody(myObject);
        var response = client.Execute(request);

    Am I doing this wrong? How can the WCF side see my object? What way should I be making the request? Or should I be handling it differently on the WCF side? Things I've tried:

        request.AddObject(myObject);

    and

        request.AddBody(request.XmlSerializer.Serialize(myObject));

    Any help and understanding of what could possibly be wrong would be much appreciated. Thanks.
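
    One way to narrow this down is to post raw XML with a separate tool and see whether Create fires at all. A sketch in Python - the URL, namespace and element names are assumptions; WCF's DataContractSerializer matches on the root element name and namespace:

        import requests

        # Hypothetical payload shape; adjust to the service's DataContract.
        xml = """<MyObject xmlns="http://schemas.datacontract.org/2004/07/MyApp">
          <Name>test</Name>
        </MyObject>"""

        resp = requests.post(
            "http://localhost:8080/service/",      # assumed ApiBaseUri
            data=xml.encode("utf-8"),
            headers={"Content-Type": "application/xml"},
        )
        print(resp.status_code, resp.text)

    If the raw post hits the method, the problem is on the client's serialization side; if not, the contract or binding is the issue.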


  • How to insert records in master/detail relationship

    - by croceldon
    I have two tables:

        OutputPackages (master)
        |PackageID|

        OutputItems (detail)
        |ItemID|PackageID|

    OutputItems has an index called 'idxPackage' set on the PackageID column. ItemID is set to auto-increment. Here's the code I'm using to insert masters/details into these tables:

        //fill packages table
        for i := 1 to 10 do
        begin
          Package := TfPackage(dlgSummary.fcPackageForms.Forms[i]);
          if Package.PackageLoaded then
          begin
            with tblOutputPackages do
            begin
              Insert;
              FieldByName('PackageID').AsInteger := Package.ourNum;
              FieldByName('Description').AsString := Package.Title;
              FieldByName('Total').AsCurrency := Package.Total;
              Post;
            end;

            //fill items table
            for ii := 1 to 10 do
            begin
              Item := TfPackagedItemEdit(Package.fc.Forms[ii]);
              if Item.Activated then
              begin
                with tblOutputItems do
                begin
                  Append;
                  FieldByName('PackageID').AsInteger := Package.ourNum;
                  FieldByName('Description').AsString := Item.Description;
                  FieldByName('Comment').AsString := Item.Comment;
                  FieldByName('Price').AsCurrency := Item.Price;
                  Post; //this causes the primary key exception
                end;
              end;
            end;
          end;
        end;

    This works fine as long as I don't mess with the MasterSource/MasterFields properties in the IDE. But once I set them and run this code, I get an error saying I've got a duplicate primary key 'ItemID'. I'm not sure what's going on - this is my first foray into master/detail, so something may be set up wrong. I'm using ComponentAce's Absolute Database for this project. How can I get this to insert properly?

    Update: I removed the primary key constraint in my DB, and I see that for some reason the auto-increment feature of the OutputItems table isn't working as I expected. Here's how the OutputItems table looks after running the above code:

        ItemID|PackageID|
        1     |1        |
        1     |1        |
        2     |2        |
        2     |2        |

    I still don't see why all the ItemID values aren't unique... Any ideas?
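
    For comparison, this is the usual master/detail insert pattern when the database generates the keys: omit the detail key entirely and read the master's generated key back after the master insert. A minimal sketch using SQLite from Python, with the schema simplified:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE OutputPackages (PackageID INTEGER PRIMARY KEY AUTOINCREMENT,
                                         Description TEXT);
            CREATE TABLE OutputItems    (ItemID INTEGER PRIMARY KEY AUTOINCREMENT,
                                         PackageID INTEGER REFERENCES OutputPackages,
                                         Description TEXT);
        """)

        for pkg in ("Package A", "Package B"):
            cur = con.execute("INSERT INTO OutputPackages (Description) VALUES (?)", (pkg,))
            package_id = cur.lastrowid          # key generated by the master insert
            for item in ("item 1", "item 2"):
                # ItemID is omitted, so every detail row gets a new, unique key
                con.execute("INSERT INTO OutputItems (PackageID, Description) VALUES (?, ?)",
                            (package_id, item))

        print(con.execute("SELECT ItemID, PackageID FROM OutputItems").fetchall())
        # -> [(1, 1), (2, 1), (3, 2), (4, 2)] : ItemIDs stay unique across packages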


  • Undefined method `add' on a Cucumber step that usually works

    - by Josiah Kiehl
    I have a path defined:

        when /the admin home\s?page/
          "/admin/"

    I have a scenario that is passing:

        Scenario: Let admins see the admin homepage
          Given "pojo" is logged in
          And "pojo" is an "admin"
          And I am on the admin home page
          Then I should see "Hi there."

    And I have a scenario that is failing:

        Scenario: Review flagged photo
          Given "pojo" is logged in
          And "pojo" is an "admin"
          ...bunch of steps that create stuff in the database...
          And I am on the admin home page
          Then ... the rest of the steps

    The step that fails in the second one is "And I am on the admin home page", which passes just fine in the first scenario. Here's the error I get:

        And I am on the admin home page    # features/step_definitions/web_steps.rb:18
          undefined method `add' for {}:Hash (NoMethodError)
          ./app/controllers/admin_controller.rb:13:in `index'
          ./app/controllers/admin_controller.rb:11:in `each'
          ./app/controllers/admin_controller.rb:11:in `index'
          /usr/lib/ruby/1.8/benchmark.rb:308:in `realtime'
          ./features/step_definitions/web_steps.rb:19:in `/^(?:|I )am on (.+)$/'
          features/admin.feature:52:in `And I am on the admin home page'

    This is very odd... why would it be fine in the first case, but not in the second, where the only difference is a bunch of steps that create records in the db?

    [edit] Here's the step that adds stuff to the database:

        Given /^there is a "([^\"]*)" with the following:$/ do |model, table|
          model.constantize.create!(table.rows_hash)
        end


  • Is it possible to execute a function in Mongo that accepts any parameters?

    - by joshua.clayton
    I'm looking to write a function to do a custom query on a collection in Mongo. The problem is, I want to reuse that function. My thought was this (obviously contrived):

        var awesome = function(count) {
            return function() {
                return this.size == parseInt(count);
            };
        }

    So then I could do something along the lines of:

        db.collection.find(awesome(5));

    However, I get this error:

        error: {
          "$err" : "error on invocation of $where function:
          JS Error: ReferenceError: count is not defined nofile_b:1"
        }

    So it looks like Mongo isn't honoring the closure scope, but I'm really not sure why. Any insight would be appreciated.

    To go into more depth about what I'd like to do: a collection of documents has lat/lng values, and I want to find all documents within a concave or convex polygon. I have the function written, but would ideally be able to reuse it, so I want to pass an array of points composing my polygon to the function I execute on Mongo's end. I've looked at Mongo's geospatial querying and it currently only supports circle and box queries - I need something more complex.
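
    The function is serialized to the server as source text, so its enclosing scope never travels with it. A common workaround is to bake the parameter into the $where string before sending it; a sketch with pymongo, where the database and collection names are illustrative:

        from pymongo import MongoClient

        def awesome(count):
            # substitute the value into the JavaScript source up front,
            # since the server never sees the client-side closure scope
            return {"$where": "this.size == %d" % int(count)}

        db = MongoClient().test
        for doc in db.collection.find(awesome(5)):
            print(doc)

    The int() call keeps the interpolation safe for numbers; arbitrary values would need proper escaping, and polygon points could be embedded the same way as a JSON array literal.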


  • Finding the right terminology for a dictionary table

    - by Karl Forner
    My concern is about what I currently call "dictionary tables": database tables containing a list of controlled vocabulary. Let's use an example. Suppose you have a table User containing the fields:

        user_id      : primary key
        first_name
        last_name
        user_type_id : foreign key to the UserType table

    and another table UserType with just two fields:

        user_type_id : primary key
        name         : the name/value of a particular type of user

    For instance, the UserType table may contain (1, Administrator), (2, PowerUser), (3, Normal)...

    My question is: what is the canonical term for a table like UserType that only contains a list of (distinct) words? I want to publish some code that helps manage this kind of table, but first I have to name them! Thanks for your help.

    Current state of thought: for now I feel Lookup Tables is a good term. It is also used with the same meaning in these posts:

    http://dbix-class.35028.n2.nabble.com/RFC-Component-for-Lookup-tables-td3504085.html
    http://tonyandrews.blogspot.de/2004/10/otlt-and-eav-two-big-design-mistakes.html
    Lookup Tables Best Practices: DB Tables... or Enumerations

    The only problem is that "lookup table" is also sometimes used to name a junction table.


  • When to Store Temporary Values in Hidden Field vs. Session vs. Database?

    - by viatropos
    I am trying to build a simple OpenID login panel similar to how Stack Overflow's works. The goal is:

    1. User clicks an OpenID/OAuth provider.
    2. OpenID/OAuth stuff happens; we end up with the result (already made that part).
    3. Then we want to confirm that the user actually wants to create a new account (vs. associating the account with another OpenID account).

    In Stack Overflow, they keep hidden fields on a form that looks like this:

        <form action="/users/openidconfirm" method="post">
          <p>This is an OpenID we haven't seen on Stack Overflow before:</p>
          <p class="openid-identifier">https://me.yahoo.com/a/some-hash</p>
          <p>Do you want to associate this OpenID with your Stack Overflow account?</p>
          <div>
            <input type="hidden" name="fkey" value="9792ab2zza1q2a4ac414casdfa137eafba7">
            <input type="hidden" name="s" value="c1a3q133-11fa-49r0-a7bz-da19849383218">
            <input type="submit" value="Associate OpenID">
            <input type="button" value="Cancel" onclick="window.location.href = 'http://stackoverflow.com/users/169992/viatropos?s=c1a3q133-11fa-49r0-a7bz-da19849383218'">
          </div>
        </form>

    My initial question is: what are those hashes fkey and s? Not that I really care what these specific hashes are, but it seems like what is happening is that they have processed the OpenID response, saved it to the DB in a temporary object or something, and generated these keys from it, because they don't look like OAuth keys to me.

    The main situation is: after I have processed the OpenID/OAuth responses, I don't yet want to create a new user/account until the user submits the "confirm" form. Should I store the keys and tokens temporarily in a "confirm" form like this? Or is there a better way? It seems that using a temp database object would be a lot of work to manage properly. Thanks for the help. Lance
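
    Those values look like a CSRF key plus a server-side session handle. One common pattern is to stash the verified OpenID result server-side and hand the browser only an opaque or signed token, so nothing sensitive rides in the form. A sketch of the signed-token variant in Python - the key and the token format are illustrative, not how Stack Overflow does it:

        import hashlib, hmac, secrets

        SECRET_KEY = b"server-side-secret"   # illustrative; keep out of source control

        def make_token(claimed_id):
            """Sign the verified OpenID identifier so the confirm form can't be tampered with."""
            nonce = secrets.token_hex(8)
            sig = hmac.new(SECRET_KEY, f"{nonce}:{claimed_id}".encode(), hashlib.sha256).hexdigest()
            return f"{nonce}:{sig}"

        def check_token(claimed_id, token):
            nonce, sig = token.split(":", 1)
            expected = hmac.new(SECRET_KEY, f"{nonce}:{claimed_id}".encode(), hashlib.sha256).hexdigest()
            return hmac.compare_digest(sig, expected)

        token = make_token("https://me.yahoo.com/a/some-hash")
        print(check_token("https://me.yahoo.com/a/some-hash", token))  # True

    On the confirm POST, the server re-verifies the token against the displayed identifier and only then creates or associates the account, so no account is created until the user confirms.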


  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the result set from a procedure call on a linked server, in which another procedure on yet another server is called. I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID) ' +
                          '( SELECT * FROM OPENQUERY(' + @targetServer + ', ' +
                          '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        exec ( @sqlCommand )

    Now in the called sproc2 I again call a third procedure sproc3 on another linked server, which returns my result set:

        SET @sqlCommand = 'EXEC ' + @targetServer + '.database.usr.sproc3 ' + @param
        exec ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391:

        The operation could not be performed because OLE DB provider "%ls" for linked
        server "%ls" was unable to begin a distributed transaction.

    I already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1: would there be some alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the result set works (I do this often in the application), but this special case with the temp table and OPENQUERY doesn't. Or is what I am trying to do just not possible? The Microsoft article states:

        Check the object you refer to on the destination server. If it is a view or a
        stored procedure, or causes an execution of a trigger, check whether it
        implicitly references another server. If so, the third server is the source of
        the problem. Run the query directly on the third server. If you cannot run the
        query directly on the third server, the problem is not actually with the linked
        server query. Resolve the underlying problem first.

    Is this my case?

    PS: I can't avoid the architecture with the three servers.


  • SSIS - Bulk Update at Database Field Level

    - by Adam
    Hello, here's our mission: receive files from clients. Each file contains anywhere from 1 to 1,000,000 records. Records are loaded to a staging area and business-rule validation is applied. Valid records are then pumped into an OLTP database in a batch fashion, with the following rules:

    If the record does not exist (we have a key, so this isn't an issue), create it.
    If the record exists, optionally update each database field. The decision is made based on one of three factors... I don't believe it's important what those factors are.

    Our main problem is finding an efficient method of optionally updating the data at a field level. This applies across ~12 different database tables, with anywhere from 10 to 150 fields in each table (the original DB design leaves much to be desired, but it is what it is). Our first attempt has been to introduce a table that mirrors the staging environment (1 field in staging for each system field) and contains a masking flag. The value of the masking flag represents the 3 factors. We've then put an UPDATE similar to this one in place:

        UPDATE OLTPTable1
        SET Field1 = CASE
                       WHEN Mask.Field1 = 0 THEN Staging.Field1
                       WHEN Mask.Field1 = 1 THEN COALESCE( Staging.Field1, OLTPTable1.Field1 )
                       WHEN Mask.Field1 = 2 THEN COALESCE( OLTPTable1.Field1, Staging.Field1 )
                     END
        ...

    As you can imagine, the performance is rather horrendous. Has anyone tackled a similar requirement? We're an MS shop using a Windows Service to launch SSIS packages that handle the data processing. Unfortunately, we're pretty much novices at this stuff.
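
    With ~12 tables of 10-150 fields each, one way to keep that statement maintainable (whatever the final performance fix turns out to be) is to generate it from a field list rather than hand-writing it. A sketch in Python; the table, key and field names are placeholders:

        def masked_update_sql(table, fields):
            """Build the field-level UPDATE ... CASE statement for one table."""
            clauses = ",\n    ".join(
                f"{f} = CASE Mask.{f}\n"
                f"          WHEN 0 THEN Staging.{f}\n"
                f"          WHEN 1 THEN COALESCE(Staging.{f}, {table}.{f})\n"
                f"          WHEN 2 THEN COALESCE({table}.{f}, Staging.{f})\n"
                f"        END"
                for f in fields)
            return (f"UPDATE {table}\nSET {clauses}\n"
                    f"FROM {table}\n"
                    f"JOIN Staging ON Staging.KeyID = {table}.KeyID\n"
                    f"JOIN Mask ON Mask.KeyID = {table}.KeyID;")

        print(masked_update_sql("OLTPTable1", ["Field1", "Field2"]))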


  • Creating search functionality with Laravel 4

    - by Mitch Glenn
    I am trying to create a way for users to search through all the products on a website. When they search for "burton snowboards", only the snowboards with the brand Burton should appear in the results; but if they search only "burton", then all products with the brand Burton should appear. This is what I have attempted to write, but it isn't working, for multiple reasons.

    Controller:

        public function search(){
            $input = Input::all();
            $v = Validator::make($input, Product::$rules);
            if($v->passes())
            {
                $searchTerms = explode(' ', $input);
                $searchTermBits = array();
                foreach ($searchTerms as $term) {
                    $term = trim($term);
                    if (!empty($term)){
                        $searchTermBits[] = "search LIKE '%$term%'";
                    }
                }
                $result = DB::table('products')
                    ->select('*')
                    ->whereRaw(". implode(' AND ', $searchTermBits) . ")
                    ->get();
                return View::make('layouts/search', compact('result'));
            }
            return Redirect::route('/');
        }

    I am trying to recreate the first solution given for this stackoverflow.com problem. The first problem I have identified is that I'm trying to explode the $input, but it's already an array, and I'm not sure how to go about fixing that. The way I have written the ->whereRaw(". implode(' AND ', $searchTermBits) . ") is surely not correct either. I'm not sure how to fix these problems though; any insights or solutions will be greatly appreciated.
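
    Language aside, the fix has two parts: split the raw search string (not the whole input array) into terms, and bind each LIKE value instead of interpolating it into the SQL. A sketch of that shape using SQLite from Python, with the table and column names assumed:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.execute("CREATE TABLE products (name TEXT)")
        con.executemany("INSERT INTO products VALUES (?)",
                        [("burton snowboard",), ("burton boots",), ("k2 snowboard",)])

        def search(term):
            words = [w.strip() for w in term.split() if w.strip()]
            if not words:
                return []
            # one AND-ed LIKE clause per word; all values bound, none interpolated
            where = " AND ".join("name LIKE ?" for _ in words)
            params = ["%" + w + "%" for w in words]
            return con.execute("SELECT name FROM products WHERE " + where, params).fetchall()

        print(search("burton snowboard"))  # [('burton snowboard',)]
        print(search("burton"))            # [('burton snowboard',), ('burton boots',)]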


  • Return Double from Boost thread

    - by Benedikt Wutzi
    Hi, I have a Boost thread which should return a double. The function looks like this:

        void analyser::findup(const double startwl, const double max, double &myret){
            this->data.begin();
            for(int i = (int)data.size(); i >= 0; i--){
                if(this->data[i].lambda > startwl){
                    if(this->data[i].db >= (max-30))
                    {
                        myret = this->data[i+1].lambda;
                        std::cout << "in thread " << myret << std::endl;
                        return;
                    }
                }
            }
        }

    This function is called by another function:

        void analyser::start_find_up(const double startwl, const double max){
            double tmp = -42.0;
            boost::thread up(&analyser::findup, *this, startwl, max, tmp);
            std::cout << "before join " << tmp << std::endl;
            up.join();
            std::cout << "after join " << tmp << std::endl;
        }

    Anyway, I've tried and googled almost everything, but I can't get it to return a value. The output looks like this right now:

        before join -42
        in thread 843.487
        after join -42

    Thanks for any help.


  • atk4 advanced CRUD?

    - by thindery
    I have the following tables:

        -- -----------------------------------------------------
        -- Table `product`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `product` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `productName` VARCHAR(255) NULL ,
          `s7location` VARCHAR(255) NULL ,
          PRIMARY KEY (`id`) )
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `pages`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `pages` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `productID` INT NULL ,
          `pageName` VARCHAR(255) NOT NULL ,
          `isBlank` TINYINT(1) NULL ,
          `pageOrder` INT(11) NULL ,
          `s7page` INT(11) NULL ,
          PRIMARY KEY (`id`) ,
          INDEX `productID` (`productID` ASC) ,
          CONSTRAINT `productID`
            FOREIGN KEY (`productID` )
            REFERENCES `product` (`id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

        -- -----------------------------------------------------
        -- Table `field`
        -- -----------------------------------------------------
        CREATE TABLE IF NOT EXISTS `field` (
          `id` INT NOT NULL AUTO_INCREMENT ,
          `pagesID` INT NULL ,
          `fieldName` VARCHAR(255) NOT NULL ,
          `fieldType` VARCHAR(255) NOT NULL ,
          `fieldDefaultValue` VARCHAR(255) NULL ,
          PRIMARY KEY (`id`) ,
          INDEX `id` (`pagesID` ASC) ,
          CONSTRAINT `pagesID`
            FOREIGN KEY (`pagesID` )
            REFERENCES `pages` (`id` )
            ON DELETE NO ACTION
            ON UPDATE NO ACTION)
        ENGINE = InnoDB;

    I have gotten CRUD to work on the 'product' table:

        //addproduct.php
        class page_addproduct extends Page {
            function init(){
                parent::init();
                $crud=$this->add('CRUD')->setModel('Product');
            }
        }

    This works, but I need it so that when a new product is created it basically allows me to add new rows to the pages and field tables. For example, the products in the tables are print products (like a greeting card) that have multiple pages to render. Page 1 may have 2 text fields that can be customized, page 2 may have 3 text fields, a slider to define text size, and a drop-down list to pick a color, and page 3 may have five text fields that can all be customized. All three pages (and all form elements, 12 in this example) are associated with 1 product. So when I create the product, could I add a button to create a page for that product, and then within the page add a button to add a new form element field? I'm still somewhat new to this, so my DB structure may not be ideal; I'd appreciate any suggestions and feedback! Could someone point me toward some information, tutorials, documentation, ideas, or suggestions on how I can implement this?


  • Complex LINQ paging algorithm

    - by sharepointmonkey
    We have a list of projects that may or may not have a collection of subprojects. Our report needs to contain all the projects except those that are the parent project of a subproject. I need to page this into pages of, say, 25 rows. But if subprojects appear on a page, then ALL the subprojects of that project must appear on the same page - so more than 25 items may appear if necessary. I've got as far as:

        var pagedProjects = db.Projects.Where(x => !x.SubProjects.Any())
                                       .Skip((pageNo - 1) * pageSize)
                                       .Take(pageSize);

    Obviously, this fails the second part of the requirements. As a further pain in the arse, I need to have a pager control on the report, so I'll need to be able to calculate the total number of pages. I could loop through the whole table of projects, but the performance would suffer. Can anybody come up with a paged solution?

    EDIT - I should probably mention that SubProjects joins back onto Projects via a self-referencing foreign key, so the whole lot comes back as an IQueryable<Project>.
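
    The grouping constraint is easier to see outside LINQ first. Treating each unit as either a standalone project or one parent's full set of subprojects, the paging algorithm (and the page count for the pager control) looks like this; a sketch in Python:

        def paginate(units, page_size=25):
            """units: list of lists; each inner list must stay on one page.
            Returns the list of pages."""
            pages, current = [], []
            for unit in units:
                if current and len(current) + len(unit) > page_size:
                    pages.append(current)
                    current = []
                current.extend(unit)   # a unit is never split, so a page may
            if current:                # exceed page_size when a unit is large
                pages.append(current)
            return pages

        units = [["p1"], ["p2-s1", "p2-s2", "p2-s3"], ["p3"], ["p4"]]
        pages = paginate(units, page_size=3)
        print(len(pages), pages)
        # -> 3 [['p1'], ['p2-s1', 'p2-s2', 'p2-s3'], ['p3', 'p4']]

    Because page boundaries depend on the group sizes, the page count can't come from a simple COUNT/Skip/Take; one option is to fetch just the group sizes (parent id + subproject count) and run this packing over them to number the pages cheaply.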


  • Long running operations (threads) in a web (asp.net) environment

    - by rrejc
    I have an ASP.NET (MVC) web site. As part of its functionality I will have to support some long running operations, for example:

    Initiated from a user: a user can upload an (XML) file to the server. On the server I need to extract the file, do some manipulation (insert into the db), etc. This can take from one minute to ten minutes (or even more - it depends on the file size). Of course I don't want to block the request while the import is running; I want to redirect the user to a progress page where they will have a chance to watch the status and errors, or even cancel the import. This operation will not be used frequently, but it may happen that two users try to import data at the same time, and it would be nice to run the imports in parallel. At first I was thinking of creating a new thread in IIS (in the controller action) and running the import there, but I am not sure whether creating worker threads on a web server is a good idea. Should I use Windows services or some other approach?

    Initiated from the system: I will have to periodically update a Lucene index with new data, and I will have to send mass emails (in the future). Should I implement this as a job in the site and run the job via Quartz.NET, or should I also create a Windows service or something?

    What are the best practices when it comes to running site "jobs"? Thanks!
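
    Whatever hosts the work (a Windows service, Quartz.NET, or a queue), the shape of the hand-off is the same: the request enqueues a job and returns a job id, a worker drains the queue, and the progress page polls the job's status. A minimal, language-neutral sketch of that shape in Python - in production the queue and worker would live outside the web process, not in an in-process thread as here:

        import queue, threading, time, uuid

        jobs = queue.Queue()
        status = {}                      # job_id -> "queued" / "running" / "done"

        def worker():
            while True:
                job_id, payload = jobs.get()
                status[job_id] = "running"
                time.sleep(1)            # stand-in for extracting/importing the XML file
                status[job_id] = "done"
                jobs.task_done()

        threading.Thread(target=worker, daemon=True).start()

        def submit(payload):             # called by the upload request handler
            job_id = uuid.uuid4().hex
            status[job_id] = "queued"
            jobs.put((job_id, payload))
            return job_id                # the progress page polls status[job_id]

        jid = submit("big-file.xml")
        time.sleep(1.5)
        print(status[jid])               # -> done

    Running more than one worker gives the parallel imports; cancellation becomes a flag the worker checks between batches.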


  • Good Starting Points for Optimizing Database Calls in Ruby on Rails?

    - by viatropos
    I have a menu in Rails which grabs a nested tree of Post models, each of which has a Slug model associated via a polymorphic association (using the friendly_id gem for slugs and awesome_nested_set for the tree). The database output in development looks like this (here's the full gist):

        SQL (0.4ms)  SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 39)
        CACHE (0.0ms)  SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 40 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        SQL (0.3ms)  SELECT COUNT(*) AS count_id FROM "posts" WHERE ("posts".parent_id = 40)
        CACHE (0.0ms)  SELECT "posts".* FROM "posts" WHERE ("posts"."id" = 13) LIMIT 1
        CACHE (0.0ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 13 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        Slug Load (0.4ms)  SELECT "slugs".* FROM "slugs" WHERE ("slugs".sluggable_id = 41 AND "slugs".sluggable_type = 'Post') ORDER BY id DESC LIMIT 1
        ...
        Rendered shared/_menu.html.haml (907.6ms)

    What are some quick things I should always do to optimize this from the start (easy things)? Some things I'm thinking about now:

    Can Rails 3 eager load the whole Post tree + associated Slugs in one DB call?
    Can I do that easily with named scopes or custom SQL? What is best practice in this situation?

    I'm not really thinking about memcached in this situation, as that can be applied to much more than just this.
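
    That log is the classic N+1 pattern: one slug query per post. Eager loading replaces it with one extra query for all the slugs at once - in Rails 3 that is what includes(:slug) on the relation does. The same idea shown runnably in SQLAlchemy, as an ORM-neutral illustration with simplified, non-polymorphic models:

        from sqlalchemy import ForeignKey, String, create_engine
        from sqlalchemy.orm import (DeclarativeBase, Mapped, Session,
                                    mapped_column, relationship, selectinload)

        class Base(DeclarativeBase):
            pass

        class Post(Base):
            __tablename__ = "posts"
            id: Mapped[int] = mapped_column(primary_key=True)
            title: Mapped[str] = mapped_column(String)
            slugs: Mapped[list["Slug"]] = relationship(back_populates="post")

        class Slug(Base):
            __tablename__ = "slugs"
            id: Mapped[int] = mapped_column(primary_key=True)
            name: Mapped[str] = mapped_column(String)
            post_id: Mapped[int] = mapped_column(ForeignKey("posts.id"))
            post: Mapped["Post"] = relationship(back_populates="slugs")

        engine = create_engine("sqlite://", echo=True)  # echo shows each SQL statement
        Base.metadata.create_all(engine)

        with Session(engine) as session:
            session.add(Post(title="home", slugs=[Slug(name="home")]))
            session.commit()
            # one SELECT for posts plus ONE for all their slugs, instead of one per post
            posts = session.query(Post).options(selectinload(Post.slugs)).all()
            print([(p.title, [s.name for s in p.slugs]) for p in posts])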


  • Ajax function partially fails when alert is removed

    - by YsoL8
    Hello. I have a problem in the following code:

        //query the db for image information
        function queryDB(parameters) {
            var parameters = "p=" + parameters;
            alert("hello");
            if (window.XMLHttpRequest) { // code for IE7+, Firefox, Chrome, Opera, Safari
                xmlhttp = new XMLHttpRequest();
            } else { // code for IE6, IE5
                xmlhttp = new ActiveXObject("Microsoft.XMLHTTP");
            }
            xmlhttp.onreadystatechange = function() {
                if (xmlhttp.readyState == 4 && xmlhttp.status == 200) {
                    // use the info
                    alert(xmlhttp.responseText);
                }
            }
            xmlhttp.open("POST", "js/imagelist.php", true);
            //Send the proper header information along with the request
            xmlhttp.setRequestHeader("Content-type", "application/x-www-form-urlencoded");
            xmlhttp.setRequestHeader("Content-length", parameters.length);
            xmlhttp.setRequestHeader("Connection", "close");
            xmlhttp.send(parameters);
        }

    When I remove the alert statement four lines down, I hit problems. This function is being called from a loop, and without the alert I only get results for the last value sent to the function; with it, I get everything I was expecting, and really I'm at a loss to know why. I've heard that this may be a timing issue, as I'm sending new requests before the old one is finished. I've also heard polling mentioned, but I can't find any information detailed enough. I'm new to asynchronous services and I'm not really aware of the issues.


  • python mongokit Connection() AssertionError

    - by zalew
    I just installed mongokit and can't figure out why I get an AssertionError.

    Python console:

        >>> from mongokit import Connection
        >>> c = Connection()
        Traceback (most recent call last):
          File "<stdin>", line 1, in <module>
          File "/usr/local/lib/python2.6/dist-packages/mongokit-0.5.3-py2.6.egg/mongokit/connection.py", line 35, in __init__
            super(Connection, self).__init__(*args, **kwargs)
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 169, in __init__
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 338, in __find_master
          File "build/bdist.linux-i686/egg/pymongo/connection.py", line 226, in __master
          File "build/bdist.linux-i686/egg/pymongo/database.py", line 220, in command
          File "build/bdist.linux-i686/egg/pymongo/collection.py", line 356, in find_one
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 485, in next
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 461, in _refresh
          File "build/bdist.linux-i686/egg/pymongo/cursor.py", line 429, in __send_message
          File "build/bdist.linux-i686/egg/pymongo/helpers.py", line 98, in _unpack_response
        AssertionError
        >>>

    MongoDB console:

        Wed Mar 31 10:27:34 connection accepted from 127.0.0.1:60480 #30
        Wed Mar 31 10:27:34 end connection 127.0.0.1:60480

    Versions: MongoDB 1.5, pymongo 1.5 (also tested 1.4), mongokit 0.5.3 (also 0.5.2).


  • Problem configuring JBoss to work with JNDI

    - by Spiderman
    I am trying to bind a connection to the DB using JNDI in my application that runs on JBoss. I did the following: I created the datasource file oracle-ds.xml and filled it with the relevant XML elements:

        <datasources>
          <local-tx-datasource>
            <jndi-name>bilby</jndi-name>
            ...
          </local-tx-datasource>
        </datasources>

    and put it in the folder \server\default\deploy. I added the relevant Oracle jar file, and then in my application I performed:

        JndiObjectFactoryBean factory = new JndiObjectFactoryBean();
        factory.setJndiName("bilby");
        try {
            factory.afterPropertiesSet();
            dataSource = factory.getObject();
        } catch(NamingException ne) {
            ne.printStackTrace();
        }

    This causes the error:

        javax.naming.NameNotFoundException: bilby not bound

    Then, in the output after this error occurred, I saw the line:

        18:37:56,560 INFO [ConnectionFactoryBindingService] Bound ConnectionManager 'jboss.jca:service=DataSourceBinding,name=bilby' to JNDI name 'java:bilby'

    So what is my configuration problem? I think it may be that JBoss first loads and runs the .war file of my application and only then loads the oracle-ds.xml that contains my data-source definition. The problem is that they are both located in the same folder. Is there a way to define the priority of loading them, or maybe this is not the problem at all? Any idea?


  • MySQL trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say that for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion].

    Table definition:

        CREATE TABLE table1(x int NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
          IF (condition) THEN
            SET NEW.x = NULL;
          END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
          IF (condition) THEN
            ROLLBACK;
          END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1 ROLLBACK;

    You are guaranteed that:

    1. Your DB will always be MySQL
    2. The table type will always be InnoDB
    3. That NOT NULL column will always stay the way it is

    Question: do you see anything objectionable in the 1st method?


  • How to handle update events on a ASP.NET GridView?

    - by Bogdan M
    Hello. This may sound silly, but I need to find out how to handle an update event from a GridView. First of all, I have a DataSet with a typed DataTable and a typed TableAdapter, based on a "select all" query, with auto-generated Insert, Update, and Delete methods. Then, in my .aspx page, I have an ObjectDataSource related to my typed TableAdapter for the Select, Insert, Update and Delete methods. Finally, I have a GridView bound to this ObjectDataSource, with the default Edit, Update and Cancel links.

    How should I implement the edit functionality? Should I have something like this?

        protected void GridView_RowEditing(object sender, GridViewEditEventArgs e)
        {
            using(MyTableAdapter ta = new MyTableAdapter())
            {
                ta.Update(...);
                TypedDataTable dt = ta.GetRecords();
                this.GridView.DataSource = dt;
                this.GridView.DataBind();
            }
        }

    In this scenario, I have the feeling that I write some changes to the DB and then retrieve and bind all the data, not only the modified parts. Is there any way to update only the DataSet, and have it update the database and the GridView in turn? I do not want to retrieve all the data after a CRUD operation is performed; I just want to retrieve the changes that were made. Thanks.

    PS: I'm using .NET 3.5 and VS 2008 with SP1.


  • ActiveRecord/sqlite3 column type lost in table view?

    - by duncan
    I have the following ActiveRecord test case that mimics my problem. I have a People table with one attribute being a date, and I create a view over that table, adding one column which is just that date plus 20 minutes:

        #!/usr/bin/env ruby
        %w|pp rubygems active_record irb active_support date|.each {|lib| require lib}

        ActiveRecord::Base.establish_connection(
          :adapter => "sqlite3",
          :database => "test.db"
        )

        ActiveRecord::Schema.define do
          create_table :people, :force => true do |t|
            t.column :name, :string
            t.column :born_at, :datetime
          end
          execute "create view clowns as select p.name, p.born_at, datetime(p.born_at, '+' || '20' || ' minutes') as twenty_after_born_at from people p;"
        end

        class Person < ActiveRecord::Base
          validates_presence_of :name
        end

        class Clown < ActiveRecord::Base
        end

        Person.create(:name => "John", :born_at => DateTime.now)

        pp Person.all.first.born_at.class
        pp Clown.all.first.born_at.class
        pp Clown.all.first.twenty_after_born_at.class

    The problem is, the output is:

        Time
        Time
        String

    when I expect the new datetime attribute of the view to also be a Time or DateTime in the Ruby world. Any ideas? I also tried:

        create view clowns as select p.name, p.born_at, CAST(datetime(p.born_at, '+' || '20' || ' minutes') as datetime) as twenty_after_born_at from people p;

    with the same result.
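
    The behaviour can be reproduced outside ActiveRecord: SQLite's datetime() function yields TEXT, and an expression column in a view carries no declared type for the adapter to map to a Ruby Time. A sketch using Python's sqlite3 module to show the same thing:

        import sqlite3

        con = sqlite3.connect(":memory:")
        con.executescript("""
            CREATE TABLE people (name TEXT, born_at DATETIME);
            INSERT INTO people VALUES ('John', '2010-05-01 12:00:00');
            CREATE VIEW clowns AS
                SELECT name, born_at,
                       datetime(born_at, '+20 minutes') AS twenty_after_born_at
                FROM people;
        """)

        row = con.execute("SELECT born_at, twenty_after_born_at FROM clowns").fetchone()
        print(row)  # ('2010-05-01 12:00:00', '2010-05-01 12:20:00') - both plain strings

        # Compare the declared types: the table column says DATETIME, while the
        # view's computed column reports none, so an ORM has nothing to cast with.
        print(con.execute("PRAGMA table_info(people)").fetchall())
        print(con.execute("PRAGMA table_info(clowns)").fetchall())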


  • Eager loading OneToMany in Hibernate with JPA2

    - by pihentagy
    I have a simple @OneToMany between Person and Pet entities:

        @OneToMany(mappedBy="owner", cascade=CascadeType.ALL, fetch=FetchType.EAGER)
        public Set<Pet> getPets() {
            return pets;
        }

    I would like to load all Persons with associated Pets, so I came up with this (inside a test class):

        @RunWith(SpringJUnit4ClassRunner.class)
        @ContextConfiguration
        public class AppTest {
            @Test
            @Rollback(false)
            @Transactional(readOnly = false)
            public void testApp() {
                CriteriaBuilder qb = em.getCriteriaBuilder();
                CriteriaQuery<Person> c = qb.createQuery(Person.class);
                Root<Person> p1 = c.from(Person.class);
                SetJoin<Person, Pet> join = p1.join(Person_.pets);
                TypedQuery<Person> q = em.createQuery(c);
                List<Person> persons = q.getResultList();
                for (Person p : persons) {
                    System.out.println(p.getName());
                    for (Pet pet : p.getPets()) {
                        System.out.println("\t" + pet.getNick());
                    }
                }
            }
        }

    However, turning the SQL logging on shows that it executes three queries (having two Persons in the DB):

        Hibernate: select person0_.id as id0_, person0_.name as name0_, person0_.sex as sex0_ from Person person0_ inner join Pet pets1_ on person0_.id=pets1_.owner_id
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_, pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_ where pets0_.owner_id=?
        Hibernate: select pets0_.owner_id as owner3_0_1_, pets0_.id as id1_, pets0_.id as id1_0_, pets0_.nick as nick1_0_, pets0_.owner_id as owner3_1_0_ from Pet pets0_ where pets0_.owner_id=?

    Any tips? Thanks, Gergo


  • Do I really need bindParam?

    - by sandelius
    Hi there! I'm trying to write a little PDO CRUD layer to learn some PDO, and I have a question about bindParam. Here's my update method right now:

        public static function update($conditions = array(), $data = array(), $table = '')
        {
            self::instance();

            // Late static bindings (PHP 5.3)
            $table = ($table === '') ? self::table() : $table;

            // Check which data array we want to use
            $values = (empty($data)) ? self::$_fields : $data;

            $sql = "UPDATE $table SET ";
            foreach ($values as $f => $v) {
                $sql .= "$f = ?, ";
            }

            // let's build the conditions
            self::build_conditions($conditions);

            // fix our WHERE, AND, OR, LIKE conditions
            $extra = self::$condition_string;

            // querystring
            $sql = rtrim($sql, ', ') . $extra;

            // let's merge the arrays into one
            $v_val = array_values($values);
            $c_val = array_values($conditions);
            $array = array_merge($v_val, self::$condition_array);

            $stmt = self::$db->prepare($sql);
            return $stmt->execute($array);
        }

    In my self::$condition_array I get all the right values for the ?s, so the query looks like this:

        UPDATE table SET this = ?, another = ? WHERE title = ? AND time = ?

    As you can see, I don't use bindParam; instead I pass the right values in the right order ($array) directly into the execute() method. This works like a charm, BUT is it safe not to use bindParam here? If not, then how can I do it? Thanks from Sweden, Tobias


  • Algorithm for scoring user activity

    - by ManBugra
    I have an application where users can:

    Write reviews about products
    Add comments to products
    Up/down vote reviews
    Up/down vote comments

    Every up/down vote is recorded in a db table. What I want to do now is create a ranking of the most active users in the last 4 weeks. Of course good reviews should be weighted more heavily than good comments, but also, e.g., 10 good comments should be weighted more than just one good review. Example:

        // reviews created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var reviews = [ [120,23], [32,12], [12,0], [23,45] ];

        // comments created in recent 4 weeks
        // format: [ upVoteCount, downVoteCount ]
        var comments = [ [1,2], [322,1], [0,0], [0,45] ];

        // create weight vector
        // format: [ reviewWeight, commentsWeight ]
        var weight = [0.60, 0.40];

        // signature: activities..., activityWeight
        var userActivityScore = score(reviews, comments, weight);

        ... update user table ...

        List<Users> users = "from users u order by u.userActivityScore desc";

    How would a fair scoring function look? How could an implementation of the score() function look? How do I add a weight so that reviews are weighted more heavily? And how would such a function look if, for example, votes for pictures were added?
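
    One simple family of answers is a weighted sum of per-activity-type scores, where each type's score is the sum of its net votes. Summing (rather than averaging) is what makes many good comments able to outweigh a single good review. A sketch in Python, using the 0.60/0.40 weights from the example above:

        def activity_score(items, weight):
            """items: list of (up_votes, down_votes) pairs for one activity type."""
            return sum(up - down for up, down in items) * weight

        def score(reviews, comments, weights=(0.60, 0.40)):
            return (activity_score(reviews, weights[0]) +
                    activity_score(comments, weights[1]))

        reviews  = [(120, 23), (32, 12), (12, 0), (23, 45)]
        comments = [(1, 2), (322, 1), (0, 0), (0, 45)]
        print(score(reviews, comments))   # -> 174.2

    Adding picture votes would just be a third (items, weight) term. For a fairer ranking that resists vote-count inflation, each (up, down) pair could be replaced by a confidence-adjusted value such as the Wilson score lower bound before summing.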


  • SQL to CodeIgniter Array Missing Data Issue

    - by SamD
        $query = $this->db->query("SELECT t1.numberofbets, t1.profit, t2.seven_profit, t3.28profit,
                user.user_id, username, password, email, balance, user.date_added, activation_code, activated
            FROM user
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS profit, count(tip_id) AS numberofbets
                       FROM tip GROUP BY user_id) as t1
                   ON user.user_id = t1.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS seven_profit
                       FROM tip WHERE date_settled > '$seven_daystime' GROUP BY user_id) as t2
                   ON user.user_id = t2.user_id
            LEFT JOIN (SELECT user_id, SUM(amount_won) AS 28profit
                       FROM tip WHERE date_settled > '$twoeight_daystime' GROUP BY user_id) as t3
                   ON user.user_id = t3.user_id
            WHERE activated = 1
            GROUP BY user.user_id
            ORDER BY user.date_added DESC");

        return $query->result_array();

    The query works fine when run in phpMyAdmin and returns complete results (see the attached image). However, when printing the array in CodeIgniter, it has no value for one field, seven_profit, which is there when the SQL query is run in phpMyAdmin - just the discrepancy in this one field, from SQL to PHP array... I just can't see why, when printing the array, that one field, which should have a value of 26, contains nothing. Any ideas? I changed the field name so it no longer starts with a number in an attempt to fix it, but it made no difference. I know this is complex and looks horrible; any help, or just hearing from people who have come across something similar, would be great. Thanks, Sam

