Search Results

Search found 33029 results on 1322 pages for 'database queries'.


  • Option Value Changed - ODBC Error 2169

    - by fredrick-ughimi
    Hello, I am connecting to ADS through an ODBC DSN. Everything works well until I try my save routine. Data is saved, but I get an error that says "Option value changed". What could this be? I can't find it in the help file. The full error message is:

        ODBC - 2169 [iAnywhere Solutions][Advantage ODBC Driver] Option Value Changed.

    I sent an email about this to [email protected], since this error number falls in the range 2168-2188 ("internal errors that should be reported"). I posted this question on the newsgroup without a positive response. Best regards, Fredrick Ughimi [email protected]

  • MySQL - Join matches and non-matches

    - by jwzk
    This is related to my other question: http://stackoverflow.com/questions/2579249/managing-foreign-keys. I am trying to join matches and non-matches. I have a list of interests, a list of users, and a list of user interests. I want the query to return all interests for a given user = x, whether the user has each interest or not (the user columns should be null when they don't). Every time I get the query working, it only matches interests the user specifically has, instead of all interests whether they have them or not. A sketch of the usual fix follows.
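
    A minimal sketch of the standard LEFT JOIN pattern, assuming hypothetical table names interests(id, name) and user_interests(user_id, interest_id): the user filter has to live in the ON clause, not in WHERE, otherwise the non-matching rows are thrown away before they can show up as nulls.

        SELECT i.id, i.name, ui.user_id
        FROM interests i
        LEFT JOIN user_interests ui
               ON ui.interest_id = i.id
              AND ui.user_id = 42      -- the "user = x" filter belongs here, not in WHERE

    Interests the user does not have come back with ui.user_id as NULL, which is exactly the non-match marker asked for.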

  • What is a good OO C++ wrapper for sqlite

    - by Foo42
    I'd like to find a good object-oriented C++ (as opposed to C) wrapper for SQLite. What do people recommend? If you have several suggestions, please put them in separate replies for voting purposes. Also, please indicate whether you have any experience with the wrapper you are suggesting and how you found it to use.

  • Copying an entire table with Postgres

    - by NudeCanalTroll
    Hello, I'm trying to copy the contents of one table into another in Postgres; however, it appears some rows aren't being copied correctly:

        ActiveRecord::StatementInvalid: PGError: ERROR: column "email_date" is of type
        timestamp without time zone but expression is of type character varying
        HINT: You will need to rewrite or cast the expression.

    Is there any way I can have it automatically skip (or ignore) invalid rows? Here's the query I'm using:

        SET statement_timeout = 0;
        INSERT INTO emails3 SELECT * FROM emails
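
    Postgres has no built-in "skip invalid rows" mode for INSERT ... SELECT, so the filtering has to be explicit. A sketch of one workaround, using a hypothetical column list (the real columns of emails are not shown in the question): name the columns, cast the offending varchar, and filter out values that would fail the cast.

        -- id and email_date stand in for the actual column list
        INSERT INTO emails3 (id, email_date)
        SELECT id, email_date::timestamp
        FROM emails
        WHERE email_date ~ '^\d{4}-\d{2}-\d{2}';  -- keep only values that look like timestamps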

  • Handling missing data

    - by soppotare
    Say I have a simple helpdesk application which logs calls made by users. I would typically have fields in a table relating to the call, e.g. CallID, Description, CustomerID, etc. I would also have a table of customers including CustomerID, Username, Password, FullName, etc. Now, when a user is deleted from the customers table, the inner join between the calls table and the users table (to find out historically which user logged a call) produces no results. How do people usually deal with this?

    - Have separate customer and useraccount tables
    - Just disable the accounts so the data is still available
    - Record the customer's name in the calls table as a separate field

    Or any other methods/suggestions? (A sketch of the second option follows.)
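
    A minimal sketch of the disable-rather-than-delete option, reusing the question's column names on hypothetical tables: an is_active flag preserves history, and a LEFT JOIN keeps old calls visible either way.

        ALTER TABLE customers ADD COLUMN is_active BOOLEAN NOT NULL DEFAULT TRUE;

        -- "delete" a customer without losing call history
        UPDATE customers SET is_active = FALSE WHERE CustomerID = 42;

        -- historical report: the name still resolves for disabled accounts
        SELECT c.CallID, c.Description, cu.FullName
        FROM calls c
        LEFT JOIN customers cu ON cu.CustomerID = c.CustomerID;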

  • Display data from an MS Access table in text boxes in C#

    - by Sophorn
    I have a question. I have a table in MS Access containing FoodID, FoodName, and Price, and in C# I have three text boxes (txtId, txtName, txtPrice) and a button (btnSearch). I want to type a FoodID into txtId, click the search button, and have the FoodName and Price (from the Access table) displayed in txtName and txtPrice automatically. What would the source code for this look like? Please write the source code in detail. (Please send it to my e-mail: [email protected].) I am looking forward to getting an answer from you. Thank you.
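
    The heart of any answer is the lookup query. A sketch, assuming the table is named Food: from C# this would typically run through System.Data.OleDb, with txtId.Text supplied as a positional ? parameter rather than concatenated into the SQL string.

        SELECT FoodName, Price
        FROM Food
        WHERE FoodID = ?   -- filled from txtId.Text via an OleDb parameter

    The two columns of the returned row are then assigned to txtName.Text and txtPrice.Text.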

  • SQL Server 2008 permissions and encryption

    - by paranjai
    I have encrypted columns in some of the tables in SQL Server 2008. As the database owner, I have the access needed to encrypt and decrypt the data using the symmetric key and certificate. But some other users currently have only db_datareader and db_datawriter rights, and when they execute any stored procedure that uses the key and certificate, they get an error to the effect that the user does not have rights on the certificate. What rights / exact permissions should I grant them to solve this problem?
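
    A sketch of the grants that typically unblock this, with placeholder names (MySymKey, MyCert, app_user): the caller needs to see the symmetric key's definition and to open the certificate that protects it.

        GRANT VIEW DEFINITION ON SYMMETRIC KEY::MySymKey TO app_user;
        GRANT CONTROL ON CERTIFICATE::MyCert TO app_user;

    A tighter alternative, where the security model allows it, is signing the stored procedure with the certificate (ADD SIGNATURE) so callers never receive key permissions directly.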

  • What is the output of this code?

    - by user329820
    Hi, I have written a bit of code and I want to know the output. I need your help because there is nobody here to help me. I think the output is A; is this correct? Thanks.

        declare @v1 varchar(20), @v2 varchar(20)
        select @v1 = 'NULL'
        if @v1 is null and @v2 is null
            select 'A'
        else
            select 'B'
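
    One detail worth testing in isolation, as a sketch: quoting the word NULL produces a four-character string, not a SQL NULL, and IS NULL only matches the latter.

        -- 'NULL' is a string literal, not the NULL marker
        select case when 'NULL' is null then 'is null' else 'is not null' end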

  • Rails using plural table names even though I told it to use singular

    - by Jason Swett
    I tried to run rake test:profile and I got this error:

        ... Table 'mcif2.accounts' doesn't exist: DELETE FROM `accounts`

    I know accounts doesn't exist. It's called account. I know Rails uses plural table names by default, but here's what my config/environment.rb looks like:

        # Load the rails application
        require File.expand_path('../application', __FILE__)

        # Initialize the rails application
        McifRails::Application.initialize!
        ActiveRecord::Base.pluralize_table_names = false

    And here's what db/schema.rb looks like:

        ActiveRecord::Schema.define(:version => 0) do
          create_table "account", :force => true do |t|
            t.integer "customer_id",     :limit => 8, :null => false
            t.string  "account_number",               :null => false
            t.integer "account_type_id", :limit => 8
            t.date    "open_date",                    :null => false

    So I don't understand why Rails still wants to call it accounts sometimes. Any ideas? If it helps give any clues at all, here are the results of grep -ir 'accounts' *.

  • nginx_http_push_module and databases

    - by rui7905
    I am a newbie with nginx, and I am using nginx as a comet server via nginx_http_push_module. I have two questions:

    1. How can I save the messages received by nginx_http_push_module into a database?
    2. How can I get the list of listeners for a channel?

    thanks~

  • What sort of schema can I use to accommodate manual date based data entries?

    - by meder
    I have an admin where users from multiple properties can enter monthly statistics for Twitter/Facebook followers. We do not have access to the real data/db, hence the manual entry. The form looks like this:

        Type (radio, select one only): Twitter / Facebook
        Followers/Fans (textfield):
        Property (dropdown): Hotel A, Hotel B
        Date Start: mm/dd/yyyy (textfield)
        Date End: mm/dd/yyyy (textfield)

    Question 1.1: Since I am only keeping track of month-to-month values, the date start/end fields I have already created might be too specific. Would it be a better idea just to have a start month/year and an end month/year, if that's the only thing I care about?

    Question 1.2: What schema could I use for month-to-month statistics if I were to change the date start and end textfields to start month/year and end month/year dropdowns?
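
    A minimal sketch of one schema for 1.2, with hypothetical names and MySQL syntax assumed: store each month as a DATE pinned to the first of the month, which keeps ordinary date arithmetic and sorting available without separate month and year columns.

        CREATE TABLE follower_stats (
            id          INT AUTO_INCREMENT PRIMARY KEY,
            property_id INT NOT NULL,                         -- e.g. Hotel A, Hotel B
            network     ENUM('twitter','facebook') NOT NULL,  -- the Type radio button
            stat_month  DATE NOT NULL,                        -- always the 1st, e.g. 2010-06-01
            followers   INT NOT NULL,
            UNIQUE KEY uniq_entry (property_id, network, stat_month)
        );

    The UNIQUE key rejects a second entry for the same property, network, and month.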

  • MySQL UPDATE WHERE IN for each listed value separately?

    - by Tom
    Hi, I've got the following SQL:

        UPDATE photo AS f
        LEFT JOIN car AS c ON f.car_id = c.car_id
        SET f.photo_status = 1,
            c.photo_count = c.photo_count + 1
        WHERE f.photo_id IN ($ids)

    Basically, two tables (car and photo) are related. The list in $ids contains unique photo ids, such as (34, 87, 98, 12). With the query, I'm setting the status of each photo in that list to "1" in the photo table and simultaneously incrementing the photo count in the car table for the car at hand. It works, but there's one snag: because the list can contain multiple photo ids that relate to the same car, the photo count only ever gets incremented once. If the list had 10 photos associated with the same car, photo_count would become 1, whereas I'd like it incremented to 10. Is there a way to make the increment occur for each photo individually through the join, as opposed to MySQL collapsing it for me? I hope the above makes sense. Thanks.
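
    In a multi-table UPDATE, MySQL updates each matching row of car at most once, no matter how many photos join to it, so the single-statement form cannot add more than 1 per car. A sketch of the usual workaround, reusing the question's example ids in place of $ids: split it into two statements and let a grouped subquery supply the per-car counts.

        UPDATE photo
        SET photo_status = 1
        WHERE photo_id IN (34, 87, 98, 12);

        UPDATE car c
        JOIN (SELECT p.car_id, COUNT(*) AS cnt
              FROM photo p
              WHERE p.photo_id IN (34, 87, 98, 12)
              GROUP BY p.car_id) t ON t.car_id = c.car_id
        SET c.photo_count = c.photo_count + t.cnt;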

  • How to optimize this MySQL table?

    - by Lost_in_code
    This is for an upcoming project. I have two tables: the first keeps track of photos, and the second keeps track of each photo's rank.

    Photos:

        +----+--------+--------------+
        | id | photo  | current_rank |
        +----+--------+--------------+
        |  1 | apple  |            5 |
        |  2 | orange |            9 |
        +----+--------+--------------+

    The photo rank keeps changing on a regular basis, and this is the table that tracks it:

    Ranks:

        +----+----------+-------+-----------+
        | id | photo_id | ranks | timestamp |
        +----+----------+-------+-----------+
        |  1 |        1 |     8 | *         |
        |  2 |        2 |     2 | *         |
        |  3 |        1 |     3 | *         |
        |  4 |        1 |     7 | *         |
        |  5 |        1 |     5 | *         |
        |  6 |        2 |     9 | *         |
        +----+----------+-------+-----------+
        * = current timestamp

    Every rank is tracked for reporting/analysis purposes. I talked to someone with experience in this field, and he told me that storing ranks as above is the way to go. But I'm not so sure yet. The problem here is data redundancy. There are going to be tens of thousands of photos. The photo rank changes on an hourly basis (many times within minutes) for recent photos, but less frequently for older photos. At this rate the table will have millions of records within months, and since I do not have experience working with large databases, this makes me a little nervous. I thought of this:

    Ranks:

        +----+----------+------------------+
        | id | photo_id | ranks            |
        +----+----------+------------------+
        |  1 |        1 | 8:*,3:*,7:*,5:*  |
        |  2 |        2 | 2:*,9:*          |
        +----+----------+------------------+
        * = current timestamp

    That means some extra code in PHP to split (and sort) the rank/time pairs, but that looks OK to me. Is this a correct way to optimize the table for performance? What would you recommend? Any suggestions would be great.
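
    For scale, a sketch of how the normalized design usually stays workable, keeping the question's names where possible: millions of narrow rows are routine for MySQL as long as the common lookup (one photo's history, newest first) is covered by a composite index.

        CREATE TABLE ranks (
            id       BIGINT AUTO_INCREMENT PRIMARY KEY,
            photo_id INT NOT NULL,
            ranks    INT NOT NULL,
            ts       TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP,  -- the * column, renamed to avoid the keyword
            KEY idx_photo_ts (photo_id, ts)
        );

        -- typical history query, served directly by the index
        SELECT ranks, ts FROM ranks
        WHERE photo_id = 1
        ORDER BY ts DESC;

    Packing rank:time pairs into one string saves rows but gives up indexing, range queries, and easy aggregation, which is usually the worse trade.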
