Search Results

Search found 17240 results on 690 pages for 'query'.


  • SQLite basic syntax

    - by Doori Bar
    I seem to misunderstand some basic syntax. Why does this sample work: sqlite3_prepare_v2(db, "insert into test values('boo','boo',0);", strlen(querystring)+1 , &stmt, NULL); if ((rc = sqlite3_step(stmt)) != SQLITE_DONE) fprintf(stderr, "Error: sqlite3_step() %d. Error Message %s;\n",rc,sqlite3_errmsg(db)); But when I try this query: "insert into test(strtest) values('boo');" I get an error: Error: sqlite3_step() 19. Error Message constraint failed; What am I missing? Table test is: "create table test (blobtest BLOB(4) NOT NULL, strtest VARCHAR NOT NULL, inttest INTEGER NOT NULL );" Thanks, Doori Bar
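
    A minimal sketch of one likely cause, assuming the table definition quoted above: every column is NOT NULL, so an INSERT that names only strtest violates the constraints on blobtest and inttest (SQLite error 19 is SQLITE_CONSTRAINT). Supplying all three columns, or giving the omitted ones defaults, avoids it:

        -- Hedged sketch (table and column names from the question; x'00' is just a placeholder blob)
        INSERT INTO test (blobtest, strtest, inttest) VALUES (x'00', 'boo', 0);
        -- Alternatively, declare DEFAULT values (or drop NOT NULL) on the columns you want to omit.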

    Read the article

  • How to create a view of table that contains a timestamp column?

    - by Matt Faus
    This question is an extension of a previous one I have asked. I have a table (2014_05_31_transformed.Video) with a schema that looks like this. I have put up the JSON returned by the BigQuery API describing its schema in this gist. I am trying to create a view against this table with an API call that looks like this: { 'view': { 'query': u 'SELECT deleted_mod_time FROM [2014_05_31_transformed.Video]' }, 'tableReference': { 'datasetId': 'latest_transformed', 'tableId': u 'Video', 'projectId': 'redacted' } } But the BigQuery API is returning this error: HttpError: https://www.googleapis.com/bigquery/v2/projects/124072386181/datasets/latest_transformed/tables?alt=json returned "Invalid field name "deleted_mod_time.usec". Fields must contain only letters, numbers, and underscores, start with a letter or underscore, and be at most 128 characters long." The schema that the BigQuery API returns does not make any distinction between a TIMESTAMP data type and a regular nullable INTEGER data type, so I can't think of a way to programmatically correct this problem. Is there anything I can do, or is this a bug in BigQuery's view implementation?
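
    A hedged workaround sketch, assuming legacy BigQuery SQL and that the column is exposed as a record with a .usec leaf (which is what the error message suggests): give the nested field a flat alias so the view's output column name contains no dot.

        -- Sketch only; the alias name is an arbitrary choice
        SELECT deleted_mod_time.usec AS deleted_mod_time_usec
        FROM [2014_05_31_transformed.Video]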

    Read the article

  • Get a specific entry by group in SQL

    - by Jensen
    Hi, I have a database that contains some data in this form: icon(name, size, tag) (myicon.png, 16, 'twitter') (myicon.png, 32, 'twitter') (myicon.png, 128, 'twitter') (myicon.png, 256, 'twitter') (anothericon.png, 32, 'facebook') (anothericon.png, 128, 'facebook') (anothericon.png, 256, 'facebook') As you can see, the name field is not unique: I can have multiple icons with the same name, and they are distinguished by the size field. Now in PHP I have a query that gets ONE icon set, for example: mysql_query("SELECT * FROM icon WHERE tag='".$tag."' ORDER BY size LIMIT 0, 10"); With this example, if $tag contains 'twitter' it will show ONLY the first SQL data entry with the tag 'twitter', that is: (myicon.png, 16, 'twitter') This is what I want, but I would prefer the 128 size by default. Is it possible to tell SQL to send me only the 128 size when it exists, and another size otherwise? Thanks!
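
    A minimal sketch of one way to prefer the 128 size when it exists and fall back otherwise, assuming MySQL (where a boolean expression evaluates to 1 or 0 and can be sorted on):

        -- Hedged sketch; 'twitter' stands in for the bound $tag value
        SELECT *
        FROM icon
        WHERE tag = 'twitter'
        ORDER BY (size = 128) DESC, size
        LIMIT 1;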

    Read the article

  • Check for Existence of a Result in Linq-to-xml

    - by NateD
    I'm using Linq-to-XML to do a simple "is this user registered" check (no security here, just making a list of registered users for a desktop app). How do I handle the result from a query like this: var people = from person in currentDoc.Descendants("Users") where (string)person.Element("User") == searchBox.Text select person; I understand the most common way to use the result would be something like foreach (var line in people){ //do something here } but what do you do if person comes back empty, which is what would happen if the person isn't registered? I've looked around on this site and on MSDN and haven't found a really clear answer yet. Extra credit: Give a good explanation of what people contains.

    Read the article

  • HSM - cryptoki - opening sessions overhead

    - by Raj
    I have a question regarding sessions with an HSM. I am aware that there is an overhead if you initialise and finalise the cryptoki API for every file you want to encrypt/decrypt. My queries are: Is there an overhead in opening and closing an individual session for every file you want to encrypt/decrypt (C_Initialize/C_Finalize)? What is the maximum number of sessions I can have open on an HSM simultaneously without affecting performance? Is opening and closing a session for each individual file the better approach, or is opening one session, processing multiple files, and then closing the session better? Thanks

    Read the article

  • Encoding issue with form and HTML Purifier / MySQL

    - by Andrew Heath
    Driving me nuts... The page with the form is encoded as Unicode (UTF-8) via: <meta http-equiv="content-type" content="text/html; charset=utf-8"> The entry column in the database is text utf8_unicode_ci. Copying text from a Word document with " in it, like this: “1922.” is an instant fail and ends up in the database as â??1922.â?? (Typing new data into the form, including ", works fine... it's the cut and paste from Word...) The PHP steps behind the scenes are: grab value from POST, run through HTML Purifier default settings, run through mysql_real_escape_string, insert query into database. Help?
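
    A hedged sketch of the usual culprit: the column charset is fine, but the client connection is not talking UTF-8, so Word's curly quotes arrive as mis-decoded bytes. Setting the connection charset before inserting (or calling mysql_set_charset('utf8') from PHP) is the standard fix; this is an assumption about the setup, not a diagnosis:

        SET NAMES utf8;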

    Read the article

  • MySQL LEFT OUTER JOIN

    - by tirso
    Hi to all. I have two tables, employee and timecard. The employee table has fields employee_id, firstname, middlename, lastname, and the timecard table has fields employee_id, time-in, time-out, tc_date_transaction. I want to select all employee records that have a matching employee_id in timecard where the date equals the current date. If there are no records matching the current date, then still return the employee records, even without time-in, time-out and tc_date_transaction. I have a query like this: SELECT * FROM employee LEFT OUTER JOIN timecard ON employee.employee_id = timecard.employee_id WHERE tc_date_transaction = "17/06/2010"; The result should look like this: employee_id, firstname, middlename, lastname, time-in, time-out, tc_date_transaction 1, john, t, cruz, 08:00, 05:00, 17/06/2010 2, mary, j, von, null, null, null Any help would be greatly appreciated. Thanks in advance
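
    A minimal sketch of the usual fix: the date filter has to live in the ON clause, because a WHERE condition on the right-hand table turns the LEFT JOIN back into an inner join and drops the employees with no timecard. The date literal format below is an assumption; comparing real DATE values is safer than 'DD/MM/YYYY' strings.

        SELECT e.employee_id, e.firstname, e.middlename, e.lastname,
               t.`time-in`, t.`time-out`, t.tc_date_transaction
        FROM employee e
        LEFT OUTER JOIN timecard t
               ON  t.employee_id = e.employee_id
               AND t.tc_date_transaction = '2010-06-17';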

    Read the article

  • Problem with SQL UPDATE from Excel

    - by phenevo
    Hi, I have a problem with this query: Update Provinces Set Provinces.DefaultName=T2.Defaultname from Provinces inner join OPENROWSET('Microsoft.Jet.OLEDB.4.0', 'Excel 8.0;Database=C:\provinces.xlsx;HDR=YES', 'SELECT Code, Defaultname FROM [Arkusz1$]') T2 On Provinces.Code = t2.Code where Provinces.Code = T2.Code I get this error: Msg 7399, Level 16, State 1, Line 1 The OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)" reported an error. The provider did not give any information about the error. Msg 7303, Level 16, State 1, Line 1 Cannot initialize the data source object of OLE DB provider "Microsoft.Jet.OLEDB.4.0" for linked server "(null)". What is the reason for this unpleasant situation?
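
    A hedged sketch of the two things that most often produce this pair of errors: the Jet 4.0 provider cannot read .xlsx files (the ACE provider can), and ad hoc distributed queries may be disabled on the server. Both points are assumptions about this particular setup:

        -- Requires the Access Database Engine (ACE) redistributable to be installed on the SQL Server box
        EXEC sp_configure 'show advanced options', 1; RECONFIGURE;
        EXEC sp_configure 'Ad Hoc Distributed Queries', 1; RECONFIGURE;

        UPDATE p
        SET    p.DefaultName = t2.Defaultname
        FROM   Provinces p
        INNER JOIN OPENROWSET('Microsoft.ACE.OLEDB.12.0',
                              'Excel 12.0;Database=C:\provinces.xlsx;HDR=YES',
                              'SELECT Code, Defaultname FROM [Arkusz1$]') t2
               ON p.Code = t2.Code;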

    Read the article

  • How do I fix "error 1004, 0, Unable to find property" in an Entity Framework 4 WinForms application?

    - by Ivan
    I've designed an EF4 model (quite complex inheritance, lots of small tables incl. multiple self-referencing), generated (table-per-type) a database and inserted some basic data manually. It works fine in an ASP.Net Dynamic Data Entities web application with full automatic scaffolding. But when in a WinForms application using the same model (I share it as a part of a class library) I construct a query and bind a combo box to it (the way it's shown here), I get an InnerException {"Internal .NET Framework Data Provider error 1004, 0, Unable to find property... I've found a question about the same problem here (incl. a sample to reproduce the error) but no answer. I use final Visual Studio 2010, no beta.

    Read the article

  • Postgres column casting...

    - by Simon
    I have a query SELECT assetid, type_code, version, name, short_name, status, languages, charset, force_secure, created, created_userid, updated, updated_userid, published, published_userid, status_changed, status_changed_userid FROM sq_ast WHERE assetid = 7 which doesn't work and throws ERROR: operator does not exist: character varying = integer LINE 4: FROM sq_ast WHERE assetid = 7 I can get it to work by doing SELECT assetid, type_code, version, name, short_name, status, languages, charset, force_secure, created, created_userid, updated, updated_userid, published, published_userid, status_changed, status_changed_userid FROM sq_ast WHERE assetid = '7' Please note the quoting of the 7 in the WHERE clause... I am deploying a huge application and I cannot rewrite the core... similarly, I don't want to risk changing the type of the column... I'm no Postgres expert... please help. Is there an option for strict casting of columns?
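
    A hedged sketch, not risk-free: PostgreSQL 8.3 removed most implicit casts to text, which is why "assetid = 7" stopped resolving. If quoting the literal in the application is truly impossible, one commonly cited workaround is to recreate an implicit cast (assumes superuser rights and PostgreSQL 8.4+ for WITH INOUT, and it can make other queries ambiguous, so test before deploying):

        CREATE CAST (integer AS text) WITH INOUT AS IMPLICIT;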

    Read the article

  • Error loading SQL_VARIANT data type using Python

    - by Brett D
    I am using Python and SQLAlchemy to query a database that I did not create. I have run into a problem querying a table that contains the SQL_VARIANT data type. I get the error: sqlalchemy.exc.DBAPIError: (Error) ('ODBC data type -150 is not supported. Cannot read column Value.', 'HY000') I confirmed with the database creator that the "Value" column is of type SQL_VARIANT. Does anyone know a way to load this data type using Python? I am currently using mssql with pyodbc. Thank you for any help you can offer! Versions: Python 2.7, SQLAlchemy 0.7.8
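
    A hedged workaround sketch: pyodbc cannot bind SQL_VARIANT, but the server can convert it before the driver ever sees it. Casting the column in the SELECT (the table name and target length below are placeholders) returns an ordinary type that the driver understands; map the query as a text column or run it via session.execute():

        SELECT CAST([Value] AS nvarchar(200)) AS [Value]
        FROM   dbo.SomeTable;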

    Read the article

  • Subquery works in 9i but not in 11g

    - by Zsuetam
    The statement below works on Oracle 9i but not on Oracle 11g: SELECT * FROM ( SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL UNION ALL SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL ) WHERE zz IS NOT NULL AND TO_CHAR (hh) NOT IN ( SELECT DECODE ( scrnfail_rate, 0, -1, ROUND (LEVEL * 1 / (scrnfail_rate / 100)) - ROUND (1 / (2 * (scrnfail_rate / 100))) ) AS nno FROM DUAL WHERE NVL (scrnfail_rate, 0) > 0 CONNECT BY LEVEL <= ROUND(9 * scrnfail_rate / 100) ) It looks like Oracle 11g is ignoring the DECODE, or even the WHERE clause, in the subquery. This query should return two rows, as it does on Oracle 9i, but instead it raises ORA-01476: divisor is equal to zero on Oracle 11g EE 11.2.0.1.0 - 64bit. Can anyone help? Thanks!
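
    A hedged sketch of one way around it: instead of relying on the subquery's WHERE clause being evaluated before the division (11g may transform the query so that it is not), make the division itself safe. NULLIF turns a zero divisor into NULL, and the rate-0 rows are still discarded by the WHERE clause:

        SELECT *
        FROM ( SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL
               UNION ALL
               SELECT 0 scrnfail_rate, '9' zz, 7 hh FROM DUAL )
        WHERE zz IS NOT NULL
          AND TO_CHAR(hh) NOT IN
              ( SELECT ROUND(LEVEL / NULLIF(scrnfail_rate / 100, 0))
                       - ROUND(1 / NULLIF(2 * (scrnfail_rate / 100), 0)) AS nno
                FROM DUAL
                WHERE NVL(scrnfail_rate, 0) > 0
                CONNECT BY LEVEL <= ROUND(9 * scrnfail_rate / 100) )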

    Read the article

  • Android insert into sqlite database

    - by Josh
    I know there is probably a simple thing I'm missing, but I've been beating my head against the wall for the past hour or two. I have a database for the Android application I'm currently working on (Android v1.6) and I just want to insert a single record into a database table. My code looks like the following: //Save information to my table sql = "INSERT INTO table1 (field1, field2, field3) " + "VALUES (" + field_one + ", " + field_two + ")"; Log.v("Test Saving", sql); myDataBase.rawQuery(sql, null); the myDataBase variable is a SQLiteDatabase object that can select data fine from another table in the schema. The saving appears to work fine (no errors in LogCat) but when I copy the database from the device and open it in sqlite browser the new record isn't there. I also tried manually running the query in sqlite browser and that works fine. The table schema for table1 is _id, field1, field2, field3. Any help would be greatly appreciated. Thanks!
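
    A hedged sketch of two likely issues rather than a confirmed diagnosis: the statement names three columns but supplies only two values, and rawQuery() only compiles a statement (it is meant for queries that return a cursor), so the insert is never actually executed; execSQL() or insert() is the usual choice on Android. The corrected statement would look like:

        -- Placeholder values; the VALUES list must match the column list
        INSERT INTO table1 (field1, field2, field3) VALUES ('one', 'two', 'three');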

    Read the article

  • End User Ad-Hoc Reporting Tool: Microsoft SQL Server Management Studio or Microsoft Access?

    - by schultkl
    Our centralized IT department has suggested two primary ad hoc query tools for our general user base of approximately 200 staff members: Microsoft SQL Server Management Studio 2008 (SSMS) Microsoft Access 2003 Environment The backend database is a read-only Microsoft SQL Server 2005 database. The schema is 400+ tables; allowing access to the raw data for our general staff would be a disaster. We will be building an "abstraction layer" over the raw data for our general staff to run ad hoc queries against. The abstraction layer will most likely contain a number of views. A number of users have basic knowledge in Microsoft Access; none have used SSMS. Which of the above tools (or alternative) would be best for a decidedly non-techie user base of approximately 200 people? What are the pros and cons of each? Also, the IT department has suggested teaching people T-SQL so they may use SSMS. Is this reasonable?

    Read the article

  • SharePoint FullTextQuery doesn't work

    - by user330309
    I have three scopes: scope A with rule include http://mywebapp/lists/myList, scope B with rule include FileExtension aspx, scope C with rule include http://mywebapp/mysitecollection2/, plus some managed properties mapped to columns from myList, ows_contenttype, ows_created (Property1, 2, 3). select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'A' returns 15 results; select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'B' returns 50 results; select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'C' returns 10 results. So why does this query: select title, url, description, Property1, Property2, Property3 from Scope() where scope = 'A' or scope='B' or scope='C' not return 75 results? It returns 15 or so. FullTextSqlQuery sqlQuery = new FullTextSqlQuery(site); sqlQuery.ResultTypes = ResultType.RelevantResults; sqlQuery.QueryText = sql; sqlQuery.TrimDuplicates = true; sqlQuery.EnableStemming = true; ResultTableCollection results = sqlQuery.Execute();

    Read the article

  • SQL Server multithreaded locking with TABLOCKX

    - by WilfriedVS
    I have a table "tbluser" with 2 fields: userid = integer (autoincrement) user = nvarchar(100) I have a multithreaded/multi server application that uses this table. I want to accomplish the following: Guarantee that field user is unique in my table Guarantee that combination userid/user is unique in each server's memory I have the following stored procedure: CREATE PROCEDURE uniqueuser @user nvarchar(100) AS BEGIN BEGIN TRAN DECLARE @userID int SET nocount ON SET @userID = (SELECT @userID FROM tbluser WITH (TABLOCKX) WHERE [user] = @user) IF @userID <> '' BEGIN SELECT userID = @userID END ELSE BEGIN INSERT INTO tbluser([user]) VALUES (@user) SELECT userID = SCOPE_IDENTITY() END COMMIT TRAN END Basically the application calls the stored procedure and provides a username as parameter. The stored procedure either gets the userid or insert the user if it is a new user. Am I correct to assume that the table is locked (only one server can insert/query)?

    Read the article

  • INSERT INTO ... SELECT FROM ... ON DUPLICATE KEY UPDATE

    - by dnagirl
    I'm doing an insert query where many of the columns would need to be updated to the new values if a unique key already exists. It goes something like this: INSERT INTO lee(exp_id, created_by, location, animal, starttime, endtime, entct, inact, inadur, inadist, smlct, smldur, smldist, larct, lardur, lardist, emptyct, emptydur) SELECT id, uid, t.location, t.animal, t.starttime, t.endtime, t.entct, t.inact, t.inadur, t.inadist, t.smlct, t.smldur, t.smldist, t.larct, t.lardur, t.lardist, t.emptyct, t.emptydur FROM tmp t WHERE uid=x ON DUPLICATE KEY UPDATE ...; //update all fields to values from SELECT, except for exp_id, created_by, location, animal, starttime, endtime I'm not sure what the syntax for the UPDATE clause should be. How do I refer to the current row from the SELECT clause?
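
    A hedged sketch of one common pattern: VALUES(col) inside ON DUPLICATE KEY UPDATE refers to the value the INSERT ... SELECT would have inserted for that column, so the measurement columns can be refreshed without repeating the subquery (MySQL; VALUES() is deprecated in favor of a row alias from 8.0.20 onward). The x placeholder is kept from the question:

        INSERT INTO lee (exp_id, created_by, location, animal, starttime, endtime,
                         entct, inact, inadur, inadist, smlct, smldur, smldist,
                         larct, lardur, lardist, emptyct, emptydur)
        SELECT id, uid, t.location, t.animal, t.starttime, t.endtime,
               t.entct, t.inact, t.inadur, t.inadist, t.smlct, t.smldur, t.smldist,
               t.larct, t.lardur, t.lardist, t.emptyct, t.emptydur
        FROM tmp t
        WHERE uid = x
        ON DUPLICATE KEY UPDATE
            entct   = VALUES(entct),   inact    = VALUES(inact),
            inadur  = VALUES(inadur),  inadist  = VALUES(inadist),
            smlct   = VALUES(smlct),   smldur   = VALUES(smldur),
            smldist = VALUES(smldist), larct    = VALUES(larct),
            lardur  = VALUES(lardur),  lardist  = VALUES(lardist),
            emptyct = VALUES(emptyct), emptydur = VALUES(emptydur);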

    Read the article

  • MySQL GROUP_CONCAT

    - by user301766
    I want to list all users with their corresponding user classes. Here are simplified versions of my tables: CREATE TABLE users ( user_id INT NOT NULL AUTO_INCREMENT, user_class VARCHAR(100), PRIMARY KEY (user_id) ); INSERT INTO users VALUES (1, '1'), (2, '2'), (3, '1,2'); CREATE TABLE classes ( class_id INT NOT NULL AUTO_INCREMENT, class_name VARCHAR(100), PRIMARY KEY (class_id) ); INSERT INTO classes VALUES (1, 'Class 1'), (2, 'Class 2'); And this is the query I am trying to use, but it only returns the first matching user class and not a concatenated list as hoped: SELECT user_id, GROUP_CONCAT(DISTINCT class_name SEPARATOR ",") AS class_name FROM users, classes WHERE user_class IN (class_id) GROUP BY user_id;
    Actual Output:
        +---------+------------+
        | user_id | class_name |
        +---------+------------+
        | 1       | Class 1    |
        | 2       | Class 2    |
        | 3       | Class 1    |
        +---------+------------+
    Wanted Output:
        +---------+------------------+
        | user_id | class_name       |
        +---------+------------------+
        | 1       | Class 1          |
        | 2       | Class 2          |
        | 3       | Class 1, Class 2 |
        +---------+------------------+
    Thanks in advance
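
    A hedged sketch of why only the first class comes back and one way around it: IN (class_id) compares the whole '1,2' string against a single id, so only the leading number matches. FIND_IN_SET() matches each element of a comma-separated list (MySQL only); longer term, a user_classes join table avoids the comma-separated column entirely.

        SELECT u.user_id,
               GROUP_CONCAT(DISTINCT c.class_name ORDER BY c.class_id SEPARATOR ', ') AS class_name
        FROM users u
        JOIN classes c ON FIND_IN_SET(c.class_id, u.user_class)
        GROUP BY u.user_id;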

    Read the article

  • Keeping a large volume of data in Session - Suggestions / alternatives?

    - by Fishcake
    I'm developing a web app for which the client wants us to query their data as little as possible. The data will be coming from a Microsoft CRM instance. So we've agreed that data will only be queried as and when it is needed, therefore if a web user wants to see a list of contacts (for example) that list is fetched into a local DataTable. Then if a new contact is created on the website the new contact is sent to CRM and added to the local DataTable at the same time. Likewise for edits. If the user then looks at their contacts again the data will just come from the local DataTable. At the moment local data is being kept in Session but my concern is that too much memory will start being used up. However traffic is expected to be pretty small, perhaps no more than 20 concurrent users so am I worrying about nothing or is there a better way you can suggest to handle this?

    Read the article

  • Assigning a MVC Controller property from Asp.Net page

    - by JasonMHirst
    I don't know if I'm understanding MVC correctly, so apologies if my question makes no sense, but I'm trying to understand the following: I have some code on a controller that returns JSON data. The JSON data is populated based on a choice from a dropdown box on an Asp.Net page. I thought (incorrectly) that Session variables would be shared between the Asp.Net project and the MVC project. What I'd like to do therefore (if this is possible) is to call a Sub on the MVC side that sets a variable before the JSON query is run. I have the following: Sub SetCountryID(ByVal CountryID As Integer) Me.pCountrySelectedID = CountryID End Sub which I can call with the following: Response.Write("http://localhost:7970/Home/SetCountryID/?CountryID=44") But this then results in a blank page - again obviously totally incorrect! Am I going about MVC the wrong way, or do I still have a hell of a lot more learning to do? Is this even possible to do?

    Read the article

  • PL/SQL Package invalidated

    - by FrustratedWithFormsDesigner
    I have a script that makes use of a package (PKG_MY_PACKAGE). I will change some of the fields in a query in that package and then recompile it (I don't change or compile any other packages). I run the script and I get an error that looks like ORA-04068: existing state of packages has been discarded ORA-04061: existing state of package body "USER3.PKG_MY_PACKAGE" has been invalidated ORA-04065: not executed, altered or dropped package body "USER3.PKG_MY_PACKAGE" ORA-06508: PL/SQL: could not find program unit being called: "USER3.PKG_MY_PACKAGE" ORA-06512: at line 34 I run the script again (without changing anything else in the system) and the script executes successfully. I thought that when I compiled before I executed the script that would fix any invalid references. This is 100% reproducible, and the more I test this script the more annoying it gets. What could cause this, and what would fix it? (oracle 10g, using PL/SQL Developer 7)
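
    A hedged sketch of the usual explanation and the common responses, offered as assumptions about this package rather than a diagnosis: ORA-04068 appears when a recompiled package had session state (package-level variables or constants), and the first call after recompilation discards that state.

        -- Option 1: simply retry (or reconnect); the next call re-initializes the package state.
        -- Option 2: clear the state explicitly in the affected session:
        BEGIN
          DBMS_SESSION.RESET_PACKAGE;
        END;
        /
        -- Option 3: remove package-level variables from PKG_MY_PACKAGE (or mark it
        --           PRAGMA SERIALLY_REUSABLE) so recompilation no longer invalidates state.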

    Read the article

  • Create a group indicator (SQL)

    - by user1723699
    I am looking to create a group indicator for a query using SQL (Oracle specifically). Basically, I am looking for duplicate entries for certain columns, and while I can find those, what I also want is some kind of indicator to say what rows the duplicates are from. Below is an example of what I am looking to do (looking for duplicates on Name, Zip, Phone). The rows with Name = aaa are all in the same group, bb are not, and c are. Is there even a way to do this? I was thinking something with OVER (PARTITION BY ... but I can't think of a way to only increment for each group.
        +----------+---------+-----------+------------+-----------+-----------+
        | Name     | Zip     | Phone     | Amount     | Duplicate | Group     |
        +----------+---------+-----------+------------+-----------+-----------+
        | aaa      | 1234    | 5555555   | 500        | X         | 1         |
        | aaa      | 1234    | 5555555   | 285        | X         | 1         |
        | bb       | 545     | 6666666   | 358        |           | 2         |
        | bb       | 686     | 7777777   | 898        |           | 3         |
        | aaa      | 1234    | 5555555   | 550        | X         | 1         |
        | c        | 5555    | 8888888   | 234        | X         | 4         |
        | c        | 5555    | 8888888   | 999        | X         | 4         |
        | c        | 5555    | 8888888   | 230        | X         | 4         |
        +----------+---------+-----------+------------+-----------+-----------+
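
    A hedged sketch along the lines the question suggests: DENSE_RANK() numbers each distinct (Name, Zip, Phone) combination, and a windowed COUNT flags the combinations that occur more than once. The table name t is a placeholder, and "group" is aliased because it is a reserved word in Oracle.

        SELECT name, zip, phone, amount,
               CASE WHEN COUNT(*) OVER (PARTITION BY name, zip, phone) > 1
                    THEN 'X' END                              AS duplicate,
               DENSE_RANK() OVER (ORDER BY name, zip, phone)  AS group_ind
        FROM   t;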

    Read the article

  • Local Data Cache - How do I refresh the local db when I add fields to remote db?

    - by Chu
    I'm using a Local Data Cache in an ASP.NET 3.5 environment. I made a change in my main database by adding a new field. I double click on my .SYNC file in my project to start up the Local Data Cache wizard again. The wizard starts and I click OK with the hopes that it'll re-query my database and add the new field to the local database file. Instead, I get an error saying "Synchronizing the database failed with the message: Unable to enumerate changes at the DbServerSyncProvider..." The only way I know to get things working again is to delete the .SYNC file along with the local database and start it from scratch. There's got to be an easier way... anyone know it?

    Read the article

  • Entity Framework: How to specify parameter type in generated SQL (SQL Server 2005), Nvarchar vs Varchar

    - by Gratzy
    In Entity Framework I have an entity 'Client' that was generated from a database. There is a property called 'Account'. It is defined in the storage model as: <Property Name="Account" Type="char" Nullable="false" MaxLength="6" /> And in the conceptual model as: <Property Name="Account" Type="String" Nullable="false" /> When select statements are generated using a variable for Account, i.e. where m.Account == myAccount..., Entity Framework generates a parameterized query with a parameter of type NVarchar(6). The problem is that the column in the table has a data type of char(6). When this is executed there is a large performance hit because of the data type difference. Account is an index on the table, and instead of using the index I believe an index scan is done. Does anyone know how to force EF not to use Unicode for the parameter and to use Varchar(6) instead?
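
    A hedged illustration of the performance point rather than a fix for the EF mapping itself, assuming a Client table with a char(6) Account column and an index on it: comparing the char column to an nvarchar parameter forces an implicit conversion of the column, which typically produces an index scan, while a varchar/char parameter allows a seek.

        DECLARE @nv nvarchar(6);
        DECLARE @v  varchar(6);
        SET @nv = N'ABC123';
        SET @v  = 'ABC123';

        SELECT * FROM Client WHERE Account = @nv;  -- plan usually shows CONVERT_IMPLICIT + index scan
        SELECT * FROM Client WHERE Account = @v;   -- plan usually shows an index seek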

    Read the article

  • SQL - add a row when it doesn't exist

    - by Nguyen Tuan Linh
    Suppose I have a query that returns results like this:
        Project   Year  Type  Amt
        PJ00001   2012  1     1000
        PJ00001   2012  2     1000
        PJ00001   2011  1     1000
        PJ00002   2012  1     1000
    What I want: every project should have 2 rows of types for each year. If a row is not there, add it to the result with Amt = 0. For example: PJ00001 has 2 rows of types 1, 2 in 2012 -- OK. But in 2011 it only has 1 row of Type 1 -- we add one row: PJ00001 2011 2 0. PJ00002 has only 1 row of type 1 -- add: PJ00002 2012 2 0. Is there a way to do this easily? The only way I know now is to create a view like PJ_VIEW and then: SELECT * FROM PJ_VIEW UNION ALL SELECT t.PROJECT, t.YEAR_NO, 1 AS TYPE_NO, 0 AS AMT FROM PJ_VIEW t WHERE NOT EXISTS (SELECT 1 FROM PJ_VIEW t2 WHERE t2.PROJECT = t.PROJECT AND t2.YEAR_NO = t.YEAR_NO AND t2.TYPE_NO = 1) UNION ALL SELECT t.PROJECT, t.YEAR_NO, 2 AS TYPE_NO, 0 AS AMT FROM PJ_VIEW t WHERE NOT EXISTS (SELECT 1 FROM PJ_VIEW t2 WHERE t2.PROJECT = t.PROJECT AND t2.YEAR_NO = t.YEAR_NO AND t2.TYPE_NO = 2)
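
    A hedged sketch of a shorter approach: build every (Project, Year, Type) combination first, then left-join the real rows onto it so missing types come back with Amt = 0. PJ_VIEW stands in for the original query, as in the question; on Oracle the two constant SELECTs would need FROM DUAL.

        SELECT py.Project, py.Year_No, ty.Type_No,
               COALESCE(v.Amt, 0) AS Amt
        FROM  (SELECT DISTINCT Project, Year_No FROM PJ_VIEW) py
        CROSS JOIN (SELECT 1 AS Type_No UNION ALL SELECT 2) ty
        LEFT JOIN PJ_VIEW v
               ON  v.Project = py.Project
               AND v.Year_No = py.Year_No
               AND v.Type_No = ty.Type_No;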

    Read the article
