Search Results

Search found 41365 results on 1655 pages for 'oracle database 11g advan'.

Page 659/1655 | < Previous Page | 655 656 657 658 659 660 661 662 663 664 665 666  | Next Page >

  • Trying to create fields based on a case statement

    - by dido
    I'm having some trouble with the query below. I am trying to determine whether the "category" field is A, B, or C and then create a field for each category that sums up the payment field, but I'm running into the error "Incorrect syntax near the keyword 'As'". I am creating this in a SQL view, using SQL Server 2008:

        SELECT r.id, r.category
            CASE WHEN r.category = 'A' then SUM(r.payment) As A_payments
                 WHEN r.category = 'B' then SUM(r.payment) As B_payments
                 WHEN r.category = 'C' then SUM(r.payment) As C_payments
            END
        FROM r_invoiceTable As r
        GROUP BY r.id, r.category

    All of the cases above should be executed, because my data contains A, B, and C rows. Sample data (r_invoiceTable):

        Id   Category   Payment
        222  A          50
        444  A          30
        111  B          90
        777  C          20
        555  C          40

    Desired output: A_payments = 80, B_payments = 90, C_payments = 60.
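
    For reference, the usual way to get one column per category is to move the CASE inside the aggregate (conditional aggregation); a column alias cannot appear inside a CASE branch, which is what triggers the syntax error. A minimal sketch against the sample data above (it drops the per-id grouping so the totals match the desired output):

        SELECT SUM(CASE WHEN r.category = 'A' THEN r.payment ELSE 0 END) AS A_payments,
               SUM(CASE WHEN r.category = 'B' THEN r.payment ELSE 0 END) AS B_payments,
               SUM(CASE WHEN r.category = 'C' THEN r.payment ELSE 0 END) AS C_payments
        FROM r_invoiceTable AS r;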

    Read the article

  • Why should I abstract my data layer?

    - by Gazillion
    OOP principles were difficult for me to grasp because, for some reason, I could never apply them to web development. As I developed more and more projects I started to understand how some parts of my code could use certain design patterns to make them easier to read, reuse, and maintain, so I started using them more and more. The one thing I still can't quite comprehend is why I should abstract my data layer. Basically, if I need to print a list of items stored in my DB to the browser, I do something along the lines of:

        $sql = 'SELECT * FROM table WHERE type = "type1"';
        $result = mysql_query($sql);
        while ($row = mysql_fetch_assoc($result)) {
            echo '<li>' . $row['name'] . '</li>';
        }

    I'm reading all these how-tos and articles preaching the greatness of PDO, but I don't understand why. I don't seem to be saving any lines of code, and I don't see how it would be more reusable, because all the functions I call above just seem to be encapsulated in a class that does the exact same thing. The only advantage I see in PDO is prepared statements. I'm not saying data abstraction is a bad thing; I'm asking because I'm trying to design my current classes correctly, and they need to connect to a DB, so I figured I'd do this the right way. Maybe I'm just reading bad articles on the subject :) I would really appreciate any advice, links, or concrete real-life examples on the subject!

    Read the article

  • How SqlDataAdapter works internally?

    - by tigrou
    I wonder how SqlDataAdapter works internally, especially when using UpdateCommand to update a huge DataTable (since that is usually a lot faster than just sending SQL statements from a loop). Here are some ideas I have in mind:

    It creates a prepared SQL statement (using SqlCommand.Prepare()) with CommandText filled in and SQL parameters initialized with the correct SQL types. Then it loops over the DataRows that need updating and, for each record, updates the parameter values and calls SqlCommand.ExecuteNonQuery().

    It creates a bunch of SqlCommand objects with everything filled in (CommandText and SQL parameters). Several SqlCommands at once are then batched to the server (depending on UpdateBatchSize).

    It uses some special, low-level or undocumented SQL driver instructions that allow it to update several rows in an efficient way (the rows to update would be provided in a special data format, and the same SQL query, the UpdateCommand here, would be executed against each of them).

    Read the article

  • Many-to-one relationship in SQLAlchemy

    - by Arrieta
    This is a beginner-level question. I have a catalog of mtypes:

        mtype_id  name
        1         'mtype1'
        2         'mtype2'
        [etc]

    and a catalog of objects, each of which must have an associated mtype:

        obj_id  mtype_id  name
        1       1         'obj1'
        2       1         'obj2'
        3       2         'obj3'
        [etc]

    I am trying to do this in SQLAlchemy by creating the following schemas:

        mtypes_table = Table('mtypes', metadata,
            Column('mtype_id', Integer, primary_key=True),
            Column('name', String(50), nullable=False, unique=True),
        )

        objs_table = Table('objects', metadata,
            Column('obj_id', Integer, primary_key=True),
            Column('mtype_id', None, ForeignKey('mtypes.mtype_id')),
            Column('name', String(50), nullable=False, unique=True),
        )

        mapper(MType, mtypes_table)
        mapper(MyObject, objs_table,
               properties={'mtype': Relationship(MType, backref='objs',
                                                 cascade="all, delete-orphan")})

    When I try to add a simple element like:

        mtype1 = MType('mtype1')
        obj1 = MyObject('obj1')
        obj1.mtype = mtype1
        session.add(obj1)

    I get the error:

        AttributeError: 'NoneType' object has no attribute 'cascade_iterator'

    Any ideas?

    Read the article

  • How do I link ASP.NET membership/role users to tables in db?

    - by SnapConfig.com
    I am going to use forms authentication, but I want to be able to link the ASP.NET users to some tables in the DB. For example, if I have classes and students (as roles), I'll have a class-students table. I'm planning to put in a Users table containing a simple int userid plus the ASP.NET username, and to use that userid wherever I want to link the users. Does that sound reasonable? Are there other ways of modeling this? It does feel a bit convoluted. (A sketch of the idea follows below.)
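
    A minimal sketch of that bridging approach, assuming SQL Server and illustrative table and column names (the membership provider keeps the username in its own aspnet_Users table; only the small int key is repeated in the domain tables):

        CREATE TABLE Users (
            UserId         INT IDENTITY(1,1) PRIMARY KEY,
            AspNetUserName NVARCHAR(256) NOT NULL UNIQUE   -- matches the membership username
        );

        CREATE TABLE ClassStudents (
            ClassId INT NOT NULL,
            UserId  INT NOT NULL REFERENCES Users (UserId),
            PRIMARY KEY (ClassId, UserId)
        );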

    Read the article

  • New replicaset resident memory is larger than the existing sets

    - by eded
    Following the MongoDB tutorial on how to resync a replica set member, I wiped all the files in /data/db and restarted the mongod process to resync the data. Everything looks OK: I get the same number of documents as on the existing two members (the primary and one secondary). However, when I check memory in MMS, the new resynced member shows different memory status values than the other two. On the existing two, db.serverStatus().mem shows something like:

        "mem" : {
            "bits" : 64,
            "resident" : 239,
            "virtual" : 66348,
            "supported" : true,
            "mapped" : 32865,
            "mappedWithJournal" : 65730
        }

    However, the new resynced member shows:

        "mem" : {
            "bits" : 64,
            "resident" : 1239,
            "virtual" : 52447,
            "supported" : true,
            "mapped" : 25700,
            "mappedWithJournal" : 51400
        }

    The resynced member's resident memory is several times larger than on the existing ones. I wonder whether this is normal, given that all the data comes in at once during the resync? The virtual and mapped values are different too. Can anyone explain? Thanks.

    Read the article

  • Sybase PowerDesigner Change Many (Find/Replace/Convert) Data Item's Data Types

    - by Andy
    Hello, I have a relatively large Conceptual Data Model in PowerDesigner. After generating a Physical Data Model and seeing the DBMS data types, I need to update the data types (NUMBER/TEXT) of every data item. I'd like to either do a find/replace within the Conceptual Data Model or somehow map to different data types when creating the Physical Data Model. For example, change the automatic conversion of Text -> CLOB to Text -> NVARCHAR(20). Thanks!

    Read the article

  • Relation to multiple tables of different types for rating?

    - by Tronic
    I have a table structure like this: Products, Team, Images. I want to implement a rating/commenting feature where users can rate each entry in all of these tables. What's the best way to make a single rating table? E.g. a user votes on a product and on a team entry, and it should be possible to get all of these ratings from one table. What kind of table structure is best for this purpose? I hope my question is clear enough :/ Thanks in advance! (One possible layout is sketched below.)
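
    One common layout is a single polymorphic ratings table that stores the name of the target table alongside the row id. A minimal MySQL-style sketch with illustrative names; note that the application has to keep item_type/item_id consistent itself, since a foreign key cannot point at three different tables:

        CREATE TABLE ratings (
            rating_id INT AUTO_INCREMENT PRIMARY KEY,
            item_type ENUM('product', 'team', 'image') NOT NULL,  -- which table the rated row lives in
            item_id   INT NOT NULL,                               -- primary key of that row
            user_id   INT NOT NULL,
            rating    TINYINT NOT NULL,
            comment   TEXT,
            UNIQUE KEY one_rating_per_user (item_type, item_id, user_id)
        );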

    Read the article

  • In mySQL, Is it possible to SELECT from two tables and merge the columns?

    - by Travis
    If I have two tables in MySQL that have similar columns...

        TABLEA: id, name, somefield1
        TABLEB: id, name, somefield1, somefield2

    ...how do I structure a SELECT statement so that I can select from both tables simultaneously and have the result sets merged for the columns that are the same? For example, I am hoping to do something like:

        SELECT name, somefield1 FROM TABLEA, TABLEB WHERE name="mooseburgers";

    and have the name and somefield1 columns from both tables merged together in the result set. Thank you for your help!
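
    What is described here (stacking the rows of both tables over their shared columns) is usually written with UNION rather than a comma join; a minimal sketch:

        SELECT name, somefield1 FROM TABLEA WHERE name = 'mooseburgers'
        UNION ALL   -- use UNION instead if duplicate rows should be collapsed
        SELECT name, somefield1 FROM TABLEB WHERE name = 'mooseburgers';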

    Read the article

  • Significance of date 02/31/2157?

    - by dpatchery
    I work in a large-scale IT support environment. Twice now we have seen an invalid date of 02/31/2157 being inserted into an Oracle DATE column. So far I have not been able to reproduce the problem, but it appears to happen occasionally when a user attempts to save '00/00/0000' into the column. I believe the value originates from a PowerBuilder DataWindow update. The application uses myriad libraries for all sorts of technologies, so this question may be a bit vague, but... has anyone seen the date 02/31/2157 in some established library that Oracle could be defaulting to when some other invalid date is entered? Perhaps an end-of-time concept analogous to the beginning-of-time date of 1/1/1970?

    Read the article

  • How do I delete in Django? (mysql transactions)

    - by alex
    If you are familiar with Django, you know that it has an authentication system with a User model. Of course, I have many other tables that have a foreign key to this User model. If I want to delete a user, how do I architect a script (or do it through MySQL itself) to delete every row that is related to this user? My only worry is that I could do this manually... but if I add a table and forget to include it in my DELETE operation, then I have rows that link to a deleted, non-existent user. (A database-level option is sketched below.)
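
    For what it's worth, Django's own user.delete() already cascades to related objects at the ORM level. If the cleanup should also be enforced by MySQL itself, the usual approach is an ON DELETE CASCADE foreign key on every referencing table (InnoDB required); a sketch with a hypothetical profile table, where auth_user is the table Django's User model maps to:

        ALTER TABLE myapp_profile
            ADD CONSTRAINT fk_profile_user
            FOREIGN KEY (user_id) REFERENCES auth_user (id)
            ON DELETE CASCADE;   -- deleting the user row removes its profile rows too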

    Read the article

  • mailing system DB structure, need help

    - by Anna
    I have a system where a user (the sender) can write a note to friends (the receivers); the number of receivers can be zero or more. The text of the message is saved in the DB and is visible to the sender and all receivers when they log in to the system. The sender can add more receivers at any time. Moreover, any of the receivers can edit the message, or even remove it from the DB. For this system I created three tables (abbreviated):

        users(userID, username, password)
        messages(messageID, text)
        list(id, senderID, receiverID, messageID)

    In the "list" table, each row corresponds to one sender-receiver pair:

        sender_x_ID -- receiver_1_ID -- message_1_ID
        sender_x_ID -- receiver_2_ID -- message_1_ID
        sender_x_ID -- receiver_3_ID -- message_1_ID

    Now the problems are:

    1. If a user deletes the message from the "messages" table, how can all rows in "list" that correspond to the deleted message be removed automatically? Do I have to add foreign keys?

    2. More important: say the sender has three receivers for message1 (username1, username2, and username3) and at a certain moment decides to add username4 and username5, and at the same time to exclude username1 from the list of receivers. The PHP code will get the new list of receivers (username2, username3, username4, username5). That means inserting into "list":

        sender_x_ID -- receiver_4_ID -- message_1_ID
        sender_x_ID -- receiver_5_ID -- message_1_ID

    and also deleting from "list" the row corresponding to username1 (who is no longer among the receivers):

        sender_x_ID -- receiver_1_ID -- message_1_ID

    Which SQL queries should the PHP code send to do this in an easy and intelligent way? Please help! Example SQL queries would be perfect! (A sketch of one approach follows below.)
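
    A minimal sketch of one way to handle both points, assuming MySQL with InnoDB tables and the column names above (the literal IDs are only placeholders):

        -- 1. let the database clean up "list" when a message disappears
        ALTER TABLE list
            ADD CONSTRAINT fk_list_message
            FOREIGN KEY (messageID) REFERENCES messages (messageID)
            ON DELETE CASCADE;

        -- 2. add the newly selected receivers (4 and 5) for message 1 from sender 10...
        INSERT INTO list (senderID, receiverID, messageID)
        VALUES (10, 4, 1), (10, 5, 1);

        -- ...and drop everyone who is no longer on the receiver list
        DELETE FROM list
        WHERE senderID = 10
          AND messageID = 1
          AND receiverID NOT IN (2, 3, 4, 5);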

    Read the article

  • manipulating 15+ million records in mysql with php?

    - by Nithish
    Hey, I've got a user table containing 15+ million records, and in the registration function I want to check whether the username already exists. I added an index on the username column, but when I run the query

        SELECT COUNT(uid) FROM users WHERE username = 'webdev'

    it just keeps loading a blank screen and finally hangs. I'm doing this on my localhost with PHP 5 and MySQL 5. Can you suggest a technique for handling this situation? Would MongoDB be a good alternative for handling this on our local machine? Thanks, Nithish.
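
    For an existence check it is usually enough to look for a single row instead of counting them, and to make sure the index really is in place; a minimal sketch, assuming MySQL:

        -- a (unique) index on username lets MySQL avoid scanning all 15M+ rows
        ALTER TABLE users ADD UNIQUE INDEX idx_username (username);

        -- stop at the first match instead of counting every one
        SELECT 1 FROM users WHERE username = 'webdev' LIMIT 1;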

    Read the article

  • Prevent duplicate rows with all non-unique columns (only with MySQL)?

    - by letseatfood
    How do I prevent a duplicate row from being created in a table with two columns, neither of which is unique? And can this be done using MySQL only, or does it require checks in my PHP script? Here is the CREATE query for the table in question (two other tables exist, users and roles):

        CREATE TABLE users_roles (
            user_id INT(100) NOT NULL,
            role_id INT(100) NOT NULL,
            FOREIGN KEY (user_id) REFERENCES users(user_id),
            FOREIGN KEY (role_id) REFERENCES roles(role_id)
        ) ENGINE = INNODB;

    I would like the following query, if executed more than once, to throw an error:

        INSERT INTO users_roles (user_id, role_id) VALUES (1, 2);

    Please do not recommend bitmasks as an answer. Thanks!
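
    A composite key over both columns makes the pair itself unique, so MySQL rejects the second identical INSERT with a duplicate-key error and no PHP-side check is needed; a sketch of the same table with that constraint added:

        CREATE TABLE users_roles (
            user_id INT(100) NOT NULL,
            role_id INT(100) NOT NULL,
            PRIMARY KEY (user_id, role_id),   -- each (user_id, role_id) pair may appear only once
            FOREIGN KEY (user_id) REFERENCES users(user_id),
            FOREIGN KEY (role_id) REFERENCES roles(role_id)
        ) ENGINE = INNODB;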

    Read the article

  • How would I associate a "Note" class to 4+ classes without creating a lookup table for each association?

    - by Gthompson83
    I'm creating a project tasklist application. I have Project, Section, Task, and Issue classes, and I would like to use one class to add simple notes to any instance of those classes. The Task and Issue tables both use a standard identity field as the primary key. The Section table has a two-field primary key. The Project table has a single int primary key defined by the user. Is there a way to associate the Note class with each of these without using a separate lookup table for each class? I'm not so sure my original idea is a decent way to implement this. I considered the following (each variable mapping to a field in the Notes table):

        Private _NoteId As Integer
        Private _ProjectId As Integer
        Private _SectionId As Integer
        Private _SectionId2 As Integer
        Private _TaskId As Integer
        Private _IssueId As Integer
        Private _Note As String
        Private _UserId As Guid

    Then I would be able to write separate methods (getProjectNotes, getTaskNotes, ...) to get the notes attached to each class. I started writing this a few weeks ago but got pulled away before I could finish, and when I revisited the code today my first thought was that the design is clumsy. Thoughts on its drawbacks? (An alternative is sketched below.)
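
    One alternative worth weighing is a single Notes table keyed by the parent's type plus its key rendered as text, which also accommodates the two-part Section key; the trade-off is that the database can no longer enforce a foreign key to the parent row. A hypothetical SQL Server sketch:

        CREATE TABLE Notes (
            NoteId     INT IDENTITY(1,1) PRIMARY KEY,
            ParentType VARCHAR(20)      NOT NULL,   -- 'Project', 'Section', 'Task' or 'Issue'
            ParentKey  VARCHAR(50)      NOT NULL,   -- e.g. '42', or '7|3' for the two-part Section key
            Note       VARCHAR(MAX)     NOT NULL,
            UserId     UNIQUEIDENTIFIER NOT NULL
        );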

    Read the article

  • DB management for Heroku apps

    - by zetarun
    Hi all, I'm fairly new to both Rails and Heroku, but I'm seriously thinking of using it as the platform to deploy my Ruby/Rails applications. I want to use all the power of Heroku, so I prefer the "embedded" PostgreSQL managed by Heroku over the Amazon RDS for MySQL add-on, but I'm not so confident without the ability to reach my data from a SQL client... I know that in a well-made app there is no need to access the DB directly, but there are situations (adding rows to a config table, inspecting data not exposed in any view, updating some columns while debugging, performance monitoring, running queries for reporting, etc.) where it can be useful... How do you solve this problem? What's your experience with a real-life app powered by Heroku? Thanks!

    Read the article

  • Selecting a sequence NEXTVAL for multiple rows

    - by stringpoet
    I am building a SQL Server job to pull data from SQL Server into an Oracle database through a linked server. The table I need to populate uses a sequence for the name ID, which is my primary key. I'm having trouble figuring out a way to do this simply, without some lengthy code. Here's what I have so far for the SELECT portion (some actual names obfuscated):

        SELECT (SELECT NEXTVAL
                FROM OPENQUERY(MYSERVER, 'SELECT ORCL.NAME_SEQNO.NEXTVAL FROM DUAL')),
               psn.BirthDate,
               psn.FirstName,
               psn.MiddleName,
               psn.LastName,
               c.REGION_CODE
        FROM Person psn
        LEFT JOIN MYSERVER..ORCL.COUNTRY c ON c.COUNTRY_CODE = psn.Country

    MYSERVER is the linked Oracle server and ORCL is obviously the schema. Person is a local table in the SQL Server database where the query is being executed. When I run this query, I get exactly the same NEXTVAL for every record; what I need is a new value for each returned record. I found this similar question, with its answers, but I am unsure how to apply it to my case (if that is even possible): Query several NEXTVAL from sequence in one statement
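
    One way to sidestep fetching NEXTVAL per row over the linked server is to let the Oracle side assign the key: insert the rows without an ID and have a BEFORE INSERT trigger populate it from the sequence. A sketch on the Oracle side, with the target table and ID column names assumed (the sequence name is the one from the query above):

        CREATE OR REPLACE TRIGGER name_id_trg
        BEFORE INSERT ON orcl.name_table          -- assumed target table
        FOR EACH ROW
        WHEN (NEW.name_id IS NULL)                -- assumed ID column
        BEGIN
            SELECT orcl.name_seqno.NEXTVAL INTO :NEW.name_id FROM dual;
        END;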

    Read the article

  • Create new table with Wordpress API

    - by Fire G
    I'm trying to create a new plugin to track popular posts based on views, and I have everything done and ready to go, but I can't seem to create a new table using the WordPress API (I can do it with standard PHP or with phpMyAdmin, but I want this plugin to be self-sufficient). I've tried several ways ($wpdb->query, $wpdb->get_results, dbDelta), but none of them will create the new table.

        function create_table() {
            global $wpdb;
            $tablename = $wpdb->prefix . 'popular_by_views';
            $ppbv_table = $wpdb->get_results("SHOW TABLES LIKE '" . $tablename . "'", ARRAY_N);
            if (is_null($ppbv_table)) {
                $create_table_sql = "CREATE TABLE '" . $tablename . "' (
                    'id' BIGINT(50) NOT NULL AUTO_INCREMENT,
                    'url' VARCHAR(255) NOT NULL,
                    'views' BIGINT(50) NOT NULL,
                    PRIMARY KEY ('id'),
                    UNIQUE ('id')
                );";
                $wpdb->show_errors();
                $wpdb->flush();
                if (is_null($wpdb->get_results("SHOW TABLES LIKE '" . $tablename . "'", ARRAY_N)))
                    echo 'crap, the SQL failed.';
            } else {
                echo 'table already exists, nothing left to do.';
            }
        }
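
    For comparison, MySQL quotes identifiers with backticks (or leaves them unquoted), not single quotes, so the table definition would normally look roughly like the statement below (the wp_ prefix is just an example of what $wpdb->prefix expands to). Note also that the function above builds $create_table_sql but never actually executes it, e.g. via $wpdb->query():

        CREATE TABLE wp_popular_by_views (
            id    BIGINT(50) NOT NULL AUTO_INCREMENT,
            url   VARCHAR(255) NOT NULL,
            views BIGINT(50) NOT NULL,
            PRIMARY KEY (id),
            UNIQUE (id)
        );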

    Read the article

  • How to use multiple identity numbers in one table?

    - by vincer
    I have a web application that creates printable forms; these forms have a unique number on them. The problem is that I have two forms for which separate number ranges need to be created, i.e.:

        Form1 - numbered 2000000-2999999
        Form2 - numbered 3000000-3999999

    dbo.test2 is my form information table, Tsel is my auto-increment table for the 3000000-series numbers, and Tadv is my auto-increment table for the 2000000-series numbers. What I have done is create two tables consisting of just an auto-increment column (one for the 2000000-series numbers and one for the 3000000-series numbers), and then a trigger that adds a record to the corresponding table, reads back the auto-increment number, and adds it to the table that stores the form information, including the just-created number for the right series of forms. Although it does work, I'm concerned that the numbers will get mixed up under load; I'm not sure @@IDENTITY will always return the right value when many people are using the system. (I cannot have duplicates, and I need to use the numbering scheme shown above.) See the code below.

        -- TRIGGER
        CREATE TRIGGER MAKEANID2 ON dbo.test2
        AFTER INSERT
        AS
        SET NOCOUNT ON

        declare @someid int
        declare @someid2 int
        declare @startfrom int
        declare @test1 varchar(10)

        select @someid = @@IDENTITY
        select @test1 = (select name1 from test2 where sysid = @someid)

        if @test1 = 'select'
        begin
            insert into Tsel Default values
            select @someid2 = @@IDENTITY
        end

        if @test1 = 'adv'
        begin
            insert into Tadv Default values
            select @someid2 = @@IDENTITY
        end

        update test2 set name2 = (@someid2) where sysid = @someid

        SET NOCOUNT OFF
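
    For what it's worth, both @@IDENTITY and SCOPE_IDENTITY() are per-connection, so inserts made by other users cannot leak into them; the usual reason to prefer SCOPE_IDENTITY() is that @@IDENTITY also picks up identity values generated by nested triggers in the same session. A minimal sketch of that substitution inside the trigger body (everything else unchanged):

        -- SCOPE_IDENTITY() returns only the identity generated in this scope,
        -- so a nested trigger on Tsel could not change the value read back
        insert into Tsel Default values
        select @someid2 = SCOPE_IDENTITY()

    Note also that the trigger as written assumes single-row inserts into dbo.test2; a multi-row INSERT would still number only one of the new rows.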

    Read the article

  • drawbacks of storing all ''things' in a central table

    - by naiquevin
    Hi, I am not sure there is a term to describe this, but I have observed that content management systems store all kinds of content in a single table with only their bare-minimum properties, while the metadata is stored in another table in the form of key-value pairs. For example, everything (blog posts, pages, images, events, etc.) is stored in one table and considered a "post". I understand that this allows for abstraction and easy extensibility, and we are considering designing our new project this way. It is not exactly a CMS, but we plan to keep adding modules to it in stages. Let's say initially there will only be posts and images, on which comments can be posted. Later on we might add videos, which will also have the commenting feature. What are the drawbacks of this approach, and will it work for a requirement like ours? Thanks. (A sketch of the pattern is below.)
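
    For concreteness, a minimal sketch of the pattern being described (one generic items table plus a key-value meta table), with illustrative MySQL names:

        CREATE TABLE items (
            item_id    INT AUTO_INCREMENT PRIMARY KEY,
            item_type  VARCHAR(20) NOT NULL,       -- 'post', 'page', 'image', 'event', ...
            title      VARCHAR(255),
            created_at DATETIME NOT NULL
        );

        CREATE TABLE item_meta (
            item_id    INT NOT NULL,
            meta_key   VARCHAR(64) NOT NULL,
            meta_value TEXT,
            PRIMARY KEY (item_id, meta_key),        -- one value per key per item in this sketch
            FOREIGN KEY (item_id) REFERENCES items (item_id)
        );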

    Read the article

  • Processing large recordsets in Rails

    - by japancheese
    Hello, I'm trying to perform a daily operation on a larger-than-normal dataset (2M+ records). However, Rails seems to take a very long time performing operations on such a dataset. Operations like

        Dataset.all.each do |data|
          ...
        end

    take a very long time to complete (I assume this is because it can't fit all the items into memory at once, right?). Does anyone have any strategies for handling this situation? I know raw SQL would probably speed up the process, but I'm looking to stay in the Rails environment, as I can do many more complicated things to the data there than I can with SQL statements alone.

    Read the article

< Previous Page | 655 656 657 658 659 660 661 662 663 664 665 666  | Next Page >