Search Results

Search found 28297 results on 1132 pages for 'sql azure'.


  • PHP Error - Login Script

    - by gamerzfuse
    I am creating a new login script/members directory. I am creating it from scratch without any frameworks (advice on this matter would also be appreciated). The situation:

        // Look up the username and password in the database
        $query = "SELECT admin_id, username FROM admin WHERE adminname = '$admin_user' AND password = SHA1('$admin_pass')";
        $data = mysqli_query($dbc, $query);
        if (mysqli_num_rows($data) == 1) {

    This bit of code keeps giving me an error (the last line in particular):

        Warning: mysqli_num_rows() expects parameter 1 to be mysqli_result, boolean given in /home8/craighoo/public_html/employees/security/dir_admin.php on line 20

    When echoing the query I get:

        SELECT admin_id, username FROM admin WHERE adminname = 'admin' AND password = SHA1('tera#byte')

    Thanks in advance!


  • Check For Duplicate Records VS try/catch Unique Key Constraint

    - by Jed
    I have a database table that has a Unique Key constraint defined to avoid duplicate records from occurring. I'm curious if it is bad practice to NOT manually check for duplicate records prior to running an INSERT statement on the table. In other words, should I run a SELECT statement using a WHERE clause that checks for duplicate values of the record that I am about to INSERT? If a record is found, then do not run the INSERT statement; otherwise go ahead and run the INSERT. OR: just run the INSERT statement and try/catch the exception that may be thrown due to a Unique Key violation. I'm weighing the two perspectives and can't decide which is best:
    1. Don't waste a SELECT call to check for duplicates when I can just trap for an exception, VS
    2. Don't be lazy by implementing ugly try/catch logic, VS
    3. ???Your thoughts here??? :)
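
    A minimal sketch of the "let the constraint do the work" approach, assuming MySQL and a hypothetical cart_items table with a unique key on (user_id, item_id); the same idea works with a try/catch around a plain INSERT in application code:

        -- Relies entirely on the UNIQUE constraint: the row is silently skipped
        -- if it already exists, so no prior SELECT is needed.
        INSERT IGNORE INTO cart_items (user_id, item_id, quantity)
        VALUES (42, 1001, 1);

        -- Or, if a duplicate should update the existing row instead:
        INSERT INTO cart_items (user_id, item_id, quantity)
        VALUES (42, 1001, 1)
        ON DUPLICATE KEY UPDATE quantity = quantity + 1;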


  • LINQ: Create persistable Associations in Code, Without Foreign Key

    - by Alex
    Hello, I know that I can create LINQ Associations without a Foreign Key. The problem is, I've been doing this by adding the [Association] attribute in the DBML file (same as through the designer), which gets erased again after I refresh my database (and reload the entire table structure). I know that there is the MyData.cs file (as part of the DBML) in which I can place my partial extensions etc. to domain objects (so they persist even after I refresh the DBML), but I don't know how to create an association there.


  • Nullable Integer ? (working with linq)

    - by nCdy
    I get an exception about converting NULL to Int32. I have a table from the database with a nullable tinyint column:

        [Column(Storage="_StatType", DbType="tinyint NULL")]
        public StatType : int { get { _StatType; } }

    (To get the C# code, just replace the variable's type.) After building the LINQ select:

        def StartLinq = linq <# from lpi in _CfgListParIzm
                                where lpi.ID_ListParIzm == drr1
                                select (lpi.StatType) #>;

    StartLinq.ToArray()[0] can't be read if it is null :-/

        mutable STT : int = 0;
        try
        {
            _ = int.TryParse(StartLinq.ToArray()[0].ToString(), out STT);
        }
        catch
        {
            | _ is Exception => () /* I don't care */
        }

    The code above is a very poor trick :( I won't use it.


  • MySQL index building performance

    - by Christian
    I tried to build an index over two columns of a table with 30,000,000 rows. I canceled the process after ~60 hours as it didn't seem to be making progress. For some reason MySQL uses only 22 MB of RAM instead of using the available RAM fully. Is index building an operation that needs no RAM, or is there some way to tell MySQL to use more RAM so it finishes faster?
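
    For reference, a sketch of the server variables that usually govern index-build memory; which ones matter depends on the storage engine (an assumption, since the question doesn't say), and the exact limits should be checked against the server version:

        -- MyISAM: index builds sort through these buffers
        SET GLOBAL myisam_sort_buffer_size = 1024 * 1024 * 1024;  -- 1 GB
        SET GLOBAL key_buffer_size         = 1024 * 1024 * 1024;  -- 1 GB

        -- InnoDB: the buffer pool does most of the work (on older versions this
        -- cannot be changed at runtime and has to be set in my.cnf instead)
        SET GLOBAL innodb_buffer_pool_size = 4 * 1024 * 1024 * 1024;  -- 4 GB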


  • Trouble with LINQ databind to GridView and RowDataBound

    - by Michael
    Greetings all, I am working on redesigning my personal Web site using VS 2008 and have chosen to use LINQ to create my data-access layer. Part of my site will be a little app to help manage my budget better. My first LINQ query does successfully execute and display in a GridView, but when I try to use a RowDataBound event to work with the results and refine them a bit, I get the error:

        The type or namespace name 'var' could not be found (are you missing a using directive or an assembly reference?)

    The interesting part is, if I just put a var s = "s"; anywhere else in the same file, I get the same error too. If I go to other files in the web project, var s = "s"; compiles fine. Here is the LINQ query call:

        public static IQueryable pubGetRecentTransactions(int param_accountid)
        {
            clsDataContext db;
            db = new clsDataContext();
            var query = from d in db.tblMoneyTransactions
                        join p in db.tblMoneyTransactions on d.iParentTransID equals p.iTransID into dp
                        from p in dp.DefaultIfEmpty()
                        where d.iAccountID == param_accountid
                        orderby d.dtTransDate descending, d.iTransID ascending
                        select new
                        {
                            d.iTransID,
                            d.dtTransDate,
                            sTransDesc = p != null ? p.sTransDesc : d.sTransDesc,
                            d.sTransMemo,
                            d.mTransAmt,
                            d.iCheckNum,
                            d.iParentTransID,
                            d.iReconciled,
                            d.bIsTransfer
                        };
            return query;
        }

        protected void Page_Load(object sender, EventArgs e)
        {
            if (!this.IsPostBack)
            {
                this.prvLoadData();
            }
        }

        internal void prvLoadData()
        {
            prvCtlGridTransactions.DataSource = clsMoneyTransactions.pubGetRecentTransactions(2);
            prvCtlGridTransactions.DataBind();
        }

        protected void prvCtlGridTransactions_RowDataBound(object sender, GridViewRowEventArgs e)
        {
            if (e.Row.RowType == DataControlRowType.DataRow)
            {
                var datarow = e.Row.DataItem;
                var s = "s";
                e.Row.Cells[0].Text = datarow.dtTransDate.ToShortDateString();
                e.Row.Cells[1].Text = datarow.sTransDesc;
                e.Row.Cells[2].Text = datarow.mTransAmt.ToString("c");
                e.Row.Cells[3].Text = datarow.iReconciled.ToString();
            } //end if
        } //end RowDataBound

    My googling to date hasn't found a good answer, so I turn it over to this trusted community. I appreciate your time in assisting me.


  • Why index_merge is not used here?

    - by user198729
    Setup:

        mysql> create table t(a integer unsigned,b integer unsigned);
        mysql> insert into t(a,b) values (1,2),(1,3),(2,4);
        mysql> create index i_t_a on t(a);
        mysql> create index i_t_b on t(b);
        mysql> explain select * from t where a=1 or b=4;
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        | id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra       |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+
        |  1 | SIMPLE      | t     | ALL  | i_t_a,i_t_b   | NULL | NULL    | NULL |    3 | Using where |
        +----+-------------+-------+------+---------------+------+---------+------+------+-------------+

    Is there something I'm missing?
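
    With only three rows the optimizer will usually judge a full table scan cheaper than any index plan; a quick way to check whether the two indexes can be combined at all is to rewrite the OR as a UNION, a sketch against the same table:

        EXPLAIN
        SELECT * FROM t WHERE a = 1
        UNION
        SELECT * FROM t WHERE b = 4;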


  • Search select statement

    - by Nana
    I am creating a page which has several fields for the user to search by, e.g. search by:
    Grade: -dropdownlist1-
    Student name: -dropdownlist2-
    Student ID: -dropdownlist3-
    Lessons: -dropdownlist4-
    Year: -dropdownlist5-
    How do I write the select statement for this? Each dropdownlist would need a select statement which would extract different data from the database. But I want to write ONE select statement which can dynamically handle the dropdownlist options, instead of writing many separate select statements. Let's say:
    Grade: -dropdownlist1-; default value (all)
    Student name: -dropdownlist2-; default value (all)
    Student ID: -dropdownlist3-; 0-100 is chosen
    Lessons: -dropdownlist4-; A-C is chosen
    Year: -dropdownlist5-; 2009 is chosen
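
    A sketch of a single parameterized statement that treats "(all)" as a NULL parameter; the students table, its columns and the @-style parameters are all hypothetical (the same pattern works with ? placeholders):

        SELECT *
        FROM students
        WHERE (@grade       IS NULL OR grade = @grade)              -- "(all)" => pass NULL
          AND (@name        IS NULL OR student_name = @name)
          AND (@id_from     IS NULL OR student_id BETWEEN @id_from AND @id_to)
          AND (@lesson_from IS NULL OR lesson BETWEEN @lesson_from AND @lesson_to)
          AND (@year        IS NULL OR school_year = @year);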


  • Ajax Content Loading(Processing) image or indicator

    - by Arny
    Hi there, in part of my web page I have a couple of asp:Image thumbnails. On click, I use the Ajax modal popup extender to show the image in full size, which is working fine. What I need to add is a processing image or indicator, both on the thumbnail and in the modal popup extender. I also have an Ajax autocomplete that is working fine; I need to add some indicator or processing image to it as soon as the user starts typing a word. Any ideas? Thanks in advance.


  • Help to choose NoSQL database for project

    - by potapuff
    There is a table:

        doc_id (integer) - value (integer)

    Approximately 100k doc_id values and 27?? rows. The most common query on this table is searching for documents similar to the current document: select the 10 documents with the maximum of (count of values in common with the current document) / (count of values in the document). Nowadays we use PostgreSQL. Table size (with index) is ~1.5 GB. Average query time ~0.5 s. Should I move all this to a NoSQL database, and if so, which one?
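
    For reference, a sketch of that similarity query in plain SQL, assuming a hypothetical layout doc_values(doc_id integer, value integer) with one row per value and document 42 as the "current" document:

        SELECT cand.doc_id,
               COUNT(*) * 1.0 / totals.cnt AS similarity   -- shared values / values in candidate
        FROM doc_values cur
        JOIN doc_values cand
          ON cand.value = cur.value
         AND cand.doc_id <> cur.doc_id
        JOIN (SELECT doc_id, COUNT(*) AS cnt
              FROM doc_values
              GROUP BY doc_id) totals
          ON totals.doc_id = cand.doc_id
        WHERE cur.doc_id = 42
        GROUP BY cand.doc_id, totals.cnt
        ORDER BY similarity DESC
        LIMIT 10;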


  • Historical / auditable database

    - by Mark
    Hi all, this question is related to the schema that can be found in one of my other questions here. Basically, in my database I store users, locations and sensors, amongst other things. All of these things are editable in the system by users, and deletable. However, when an item is edited or deleted I need to store the old data; I need to be able to see what the data was before the change.

    There are also non-editable items in the database, such as "readings". They are more of a log, really. Readings are logged against sensors, because each reading is for a particular sensor. If I generate a report of readings, I need to be able to see what the attributes of a location or sensor were at the time of the reading. Basically, I should be able to reconstruct the data for any point in time.

    Now, I've done this before and got it working well by adding the following columns to each editable table:

        valid_from
        valid_to
        edited_by

    If valid_to = 9999-12-31 23:59:59 then that's the current record. If valid_to equals valid_from, then the record is deleted. However, I was never happy with the triggers I needed to use to enforce foreign key consistency.

    I can possibly avoid triggers by using an extension to PostgreSQL. It provides a column type called "period" which allows you to store a period of time between two dates, and then allows you to add CHECK constraints to prevent overlapping periods. That might be an answer.

    I am wondering, though, if there is another way. I've seen people mention using special historical tables, but I don't really like the thought of maintaining 2 tables for almost every 1 table (though it still might be a possibility). Maybe I could cut down my initial implementation to not bother checking the consistency of records that aren't "current", i.e. only bother to check constraints on records where valid_to is 9999-12-31 23:59:59. After all, the people who use historical tables do not seem to have constraint checks on those tables (for the same reason, you'd need triggers).

    Does anyone have any thoughts about this?

    PS - the title also mentions an auditable database. In the previous system I mentioned, there is always the edited_by field. This allowed all changes to be tracked so we could always see who changed a record. Not sure how much difference that might make. Thanks.
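
    A minimal sketch of the valid_from/valid_to idea enforced without triggers, using PostgreSQL's built-in range types rather than the external "period" extension; the table and column names are hypothetical, and the current row uses an open-ended upper bound instead of 9999-12-31:

        CREATE EXTENSION IF NOT EXISTS btree_gist;

        CREATE TABLE sensor_history (
            sensor_id  integer NOT NULL,
            name       text    NOT NULL,
            edited_by  integer NOT NULL,
            validity   tsrange NOT NULL,
            -- no two versions of the same sensor may overlap in time
            EXCLUDE USING gist (sensor_id WITH =, validity WITH &&)
        );

        -- insert the current version of sensor 1
        INSERT INTO sensor_history (sensor_id, name, edited_by, validity)
        VALUES (1, 'boiler temperature', 42, tsrange(now()::timestamp, NULL));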


  • Subquery max sequence number

    - by Andy Levesque
    I'm hesitant to ask because I'm sure the answer is out there, but I just can't seem to come up with the keywords to find it. I'm stepping outside my boundaries by starting with subqueries (normally an Access user). I have a query that has TECH_ID, SEQ_NBR, and PELL_FT_AWD_AMT:

        SELECT ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
               ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
               ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
               ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT
        GROUP BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.PELL_FT_AWD_AMT,
                 ISRS_V_NEED_ANAL_RESULT_PARENT.SEQ_NBR
        HAVING (((ISRS_V_NEED_ANAL_RESULT_PARENT.AWD_YR)="2013"))
        ORDER BY ISRS_V_NEED_ANAL_RESULT_PARENT.TECH_ID;

    What I want is to add a subquery that selects only the max SEQ_NBR for each record, but I can't seem to get the syntax right. In the past I would cheat and have a separate query that first gave me the TECH_ID and max SEQ_NBR, and then have a second query that used the original table and the first query in a join to get the rest. How can I do this in one query? Example:

        TECH_ID  SEQ_NUM  PELL
        1        1        4000
        1        2        4000
        1        3        5000

    Using just the max of the sequence number still returns 1; 2; 4000 and 1; 3; 5000, when I only want the latter.
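
    One way to do it in a single statement is a correlated subquery that keeps only the row holding the highest SEQ_NBR per TECH_ID; a sketch in Access/Jet-style SQL against the same view:

        SELECT p.TECH_ID, p.AWD_YR, p.PELL_FT_AWD_AMT, p.SEQ_NBR
        FROM ISRS_V_NEED_ANAL_RESULT_PARENT AS p
        WHERE p.AWD_YR = "2013"
          AND p.SEQ_NBR = (SELECT MAX(p2.SEQ_NBR)
                           FROM ISRS_V_NEED_ANAL_RESULT_PARENT AS p2
                           WHERE p2.TECH_ID = p.TECH_ID
                             AND p2.AWD_YR = p.AWD_YR)
        ORDER BY p.TECH_ID;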


  • Bus Timetable database design

    - by paddydub
    Hi, I'm trying to design a DB to store the timetable for 300 different bus routes. Each route has a different number of stops and different times for Monday-Friday, Saturday and Sunday. I've represented the bus departure times for each route as follows. I'm not sure if I should have null values in the table; does this look OK?

        route, Num, Day,     t1,    t2,    t3,    t4,    t5,    t6,    t7,    t8,    t9,    t10
        117,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, null,  null,  null
        117,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, null,  null,  null
        117,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, null,  null,  null
        117,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, null,  null,  null
        ...
        117,   20,  Monday,  9:39,  10:09, 11:39, 12:39, 14:39, 18:39, 19:39, null,  null,  null
        119,   1,   Monday,  9:00,  9:30,  10:50, 12:00, 14:00, 18:00, 19:00, 20:00, 21:00, 22:00
        119,   2,   Monday,  9:03,  9:33,  10:53, 12:03, 14:03, 18:03, 19:03, 20:03, 21:03, 22:03
        119,   3,   Monday,  9:06,  9:36,  10:56, 12:06, 14:06, 18:06, 19:06, 20:06, 21:06, 22:06
        119,   4,   Monday,  9:09,  9:39,  10:59, 12:09, 14:09, 18:09, 19:09, 20:09, 21:09, 22:09
        ...
        119,   37,  Monday,  9:49,  9:59,  11:59, 12:59, 14:59, 18:59, 19:59, 20:59, 21:59, 22:59
        139,   1,   Sunday,  9:00,  9:30,  20:00, 21:00, 22:00, null,  null,  null,  null,  null
        139,   2,   Sunday,  9:03,  9:33,  20:03, 21:03, 22:03, null,  null,  null,  null,  null
        139,   3,   Sunday,  9:06,  9:36,  20:06, 21:06, 22:06, null,  null,  null,  null,  null
        139,   4,   Sunday,  9:09,  9:39,  20:09, 21:09, 22:09, null,  null,  null,  null,  null
        ...
        139,   20,  Sunday,  9:49,  9:59,  20:59, 21:59, 22:59, null,  null,  null,  null,  null
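
    A fully normalized alternative to the layout above is one row per departure instead of t1..t10 columns, which removes the nulls and lets routes have any number of times per stop; a sketch with hypothetical names:

        -- One row per (route, stop, day type, departure time), no NULL padding needed.
        CREATE TABLE departure (
            route_no  integer     NOT NULL,   -- e.g. 117
            stop_num  integer     NOT NULL,   -- the "Num" column: stop order on the route
            day_type  varchar(10) NOT NULL,   -- 'Mon-Fri', 'Saturday' or 'Sunday'
            dep_time  time        NOT NULL,
            PRIMARY KEY (route_no, stop_num, day_type, dep_time)
        );

        -- The first two Monday departures from stop 1 of route 117:
        INSERT INTO departure (route_no, stop_num, day_type, dep_time)
        VALUES (117, 1, 'Mon-Fri', '09:00'),
               (117, 1, 'Mon-Fri', '09:30');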


  • I need some help optimizing my database schema

    - by Steffan
    Here's a layout of my data:

        Heading 1:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 2:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 3:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 4:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading
        Heading 5:
            Sub heading
            Sub heading
            Sub heading
            Sub heading
            Sub heading

    These headings need to have a 'Completion Status' boolean value which gets linked to a user Id. Currently, this is how my table looks:

        id | userID | field_1 | field_2 | field_3 | field_4 | etc...
        -----------------------------------------------------------
        1  | 1      | 0       | 0       | 1       | 0       |
        2  | 2      | 1       | 0       | 1       | 1       |

    Each field represents one Sub Heading. Having this many columns in my table looks awfully inefficient... How can I go about optimizing this? I can't think of any way to neaten it up :/
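
    A sketch of a normalized layout: one row per (user, sub heading) instead of one column per sub heading; all table and column names here are illustrative:

        CREATE TABLE sub_heading (
            id         integer PRIMARY KEY,
            heading_id integer      NOT NULL,
            title      varchar(100) NOT NULL
        );

        CREATE TABLE user_completion (
            user_id        integer NOT NULL,
            sub_heading_id integer NOT NULL REFERENCES sub_heading(id),
            completed      boolean NOT NULL DEFAULT FALSE,
            PRIMARY KEY (user_id, sub_heading_id)
        );

        -- Completion status of every sub heading for user 1
        -- (missing rows simply read as "not completed"):
        SELECT s.heading_id, s.title, COALESCE(c.completed, FALSE) AS completed
        FROM sub_heading s
        LEFT JOIN user_completion c
               ON c.sub_heading_id = s.id
              AND c.user_id = 1;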


  • Querying tables based on other column values

    - by blcArmadillo
    Is there a way to query different tables based on the value of a column in the query? Say, for example, you have the following columns:

        id
        part_id
        attr_id
        attr_value_ext
        attr_value_int

    You then run a query, and if attr_id is '1' it returns the attr_value_int column, but if attr_id is greater than '1' it joins data from another table based on attr_value_ext.
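
    A sketch of one way to do it in a single query: always LEFT JOIN the other table, then choose which value to return with CASE. The table names and the external table's columns are hypothetical, since the question only lists the columns:

        SELECT a.id,
               a.part_id,
               a.attr_id,
               CASE WHEN a.attr_id = 1 THEN a.attr_value_int
                    ELSE ext.attr_value
               END AS attr_value
        FROM part_attributes a
        LEFT JOIN external_attributes ext
               ON a.attr_id > 1
              AND ext.id = a.attr_value_ext;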


  • Applying Domain Model on top of Linq2Sql entities

    - by Thomas
    I am trying to practice the model-first approach and I am putting together a domain model. My requirement is pretty simple: a UserSession can have multiple ShoppingCartItems. I should start off by saying that I am going to apply the domain model interfaces to Linq2Sql-generated entities (using partial classes). My requirement translates into three database tables (UserSession, Product, ShoppingCartItem, where ProductId and UserSessionId are foreign keys in the ShoppingCartItem table). Linq2Sql generates these entities for me. I know I shouldn't even be dealing with the database at this point, but I think it is important to mention. The aggregate root is UserSession, as a ShoppingCartItem cannot exist without a UserSession, but I am unclear on the rest. What about Product? It is definitely an entity, but should it be associated to ShoppingCartItem? Here are a few suggestions (they might all be incorrect implementations):

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public int ProductId { get; set; }
        }

    Another one would be:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItem> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItem
        {
            public Guid UserSessionId { get; set; }
            public IProduct Product { get; set; }
        }

    A third one is:

        public interface IUserSession
        {
            public Guid Id { get; set; }
            public IList<IShoppingCartItemColletion> ShoppingCartItems { get; set; }
        }

        public interface IShoppingCartItemColletion
        {
            public IUserSession UserSession { get; set; }
            public IProduct Product { get; set; }
        }

        public interface IProduct
        {
            public int ProductId { get; set; }
        }

    I have a feeling my mind is too tightly coupled with database models and tables, which is making this hard to grasp. Anyone care to decouple?


  • Count times ID appears in a table and return in row.

    - by Tyler
        SELECT Boats.id, Boats.date, Boats.section, Boats.raft,
               river_company.company,
               river_section.section AS river
        FROM Boats
        INNER JOIN river_company ON Boats.raft = river_company.id
        INNER JOIN river_section ON Boats.section = river_section.id
        ORDER BY Boats.date DESC, river, river_company.company

    Returns everything I need. But how would I add a [Photos] table and count how many times Boats.id occurs in it, and add that to the returned rows? So if there are 5 photos for boat #17, I want the record for boat #17 to say PhotoCount = 5.
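
    A sketch of one way to add the count, assuming the Photos table has a boat_id column referencing Boats.id (the question doesn't give its schema):

        SELECT Boats.id, Boats.date, Boats.section, Boats.raft,
               river_company.company,
               river_section.section AS river,
               COUNT(Photos.id) AS PhotoCount
        FROM Boats
        INNER JOIN river_company ON Boats.raft = river_company.id
        INNER JOIN river_section ON Boats.section = river_section.id
        LEFT JOIN Photos ON Photos.boat_id = Boats.id
        GROUP BY Boats.id, Boats.date, Boats.section, Boats.raft,
                 river_company.company, river_section.section
        ORDER BY Boats.date DESC, river, river_company.company;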


  • how to 'scale' these three tables?

    - by iddqd
    I have the following tables:

        Players
            id
            playerName

        Weapons
            id
            type
            otherData

        Weapons2Player
            id
            playersID_reference
            weaponsID_reference

    That was nice and simple. Now I need to SELECT items from the Weapons table according to some of their characteristics, which I previously just packed into the otherData column (since they were only needed on the client side). The problem is that the types have varying characteristics - but also a lot of similar data. So I'm trying to decide between the following possibilities, all of which have their pros and cons.

    Solution A: kill the Weapons table and create a new table for each weapon type:

        Weapons_Swords
            id
            bladeType
            damage
            otherData

        Weapons_Guns
            id
            accuracy
            damage
            ammoType
            otherData

    But how will I link these to the Players? Create Weapons_Swords2Players, Weapons_Guns2Players for each weapon type? (This will result in a lot more JOINs when loading the player with all his weapons... and it's also more complicated to insert a new player.) Or add another column to Weapons2Players called WeaponsTypeTable, then do sub-selects to the correct Weapons sub-table (seems easier, but not really right; slightly easier insert, I guess).

    Solution B: keep the Weapons table and add all the fields I need to it. The problem is that then there will be NULL fields, since not all weapon types use all fields (can't be right):

        Weapons
            id
            type
            accuracy
            damage
            ammoType
            bladeType
            otherData

    This seems to be pretty basic stuff, but I just can't decide what's best. Or is there a correct Solution C? Many thanks.
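
    A possible "Solution C" sketch (class-table inheritance): keep the single Weapons table so Weapons2Player stays unchanged, and move type-specific columns into per-type detail tables keyed by the weapon id. Names beyond those in the question are illustrative:

        CREATE TABLE Weapons (
            id   integer PRIMARY KEY,
            type varchar(20) NOT NULL        -- 'sword', 'gun', ...
        );

        CREATE TABLE WeaponSwordDetails (
            weapon_id integer PRIMARY KEY REFERENCES Weapons(id),
            bladeType varchar(20) NOT NULL,
            damage    integer NOT NULL
        );

        CREATE TABLE WeaponGunDetails (
            weapon_id integer PRIMARY KEY REFERENCES Weapons(id),
            accuracy  integer NOT NULL,
            damage    integer NOT NULL,
            ammoType  varchar(20) NOT NULL
        );

        -- Loading one player's weapons: one join per detail table,
        -- and no NULL columns in Weapons itself.
        SELECT w.id, w.type, s.bladeType, g.accuracy, g.ammoType
        FROM Weapons2Player wp
        JOIN Weapons w                 ON w.id = wp.weaponsID_reference
        LEFT JOIN WeaponSwordDetails s ON s.weapon_id = w.id
        LEFT JOIN WeaponGunDetails g   ON g.weapon_id = w.id
        WHERE wp.playersID_reference = 1;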


  • Sqlite View : Add a column based on some other column

    - by NightCoder
    Hi, I have two tables.

    Employee:

        ID  | Name  | Department
        ------------------------
        121 | Name1 | dep1
        223 | Name2 | dep2

    Assignment:

        ID | EID
        --------
        1  | 121
        2  | 223
        3  | 121

    [other columns omitted for brevity]

    The Assignment table indicates which work is assigned to whom. EID is a foreign key to the Employee table. It is also possible to have two pieces of work assigned to the same employee. Now I want to create a view like this:

        EID | Assigned
        --------------
        121 | true
        333 | false

    The Assigned column should be calculated based on the entries in the Assignment table. So far I have only been successful in creating a view like this:

        EID | Assigned
        --------------
        121 | 2
        333 | 0

    using the command:

        CREATE VIEW "AssignmentView" AS
        SELECT DISTINCT ID,
               (SELECT COUNT(*) FROM Assignment WHERE Assignment.EID = Employee.ID) AS Assigned
        FROM Employee;

    Thanks
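
    A sketch of the view with a true/false-style column, using EXISTS in a CASE expression; SQLite has no real boolean type, so the 'true'/'false' literals here are just text (1/0 would work the same way):

        CREATE VIEW AssignmentView AS
        SELECT Employee.ID AS EID,
               CASE WHEN EXISTS (SELECT 1
                                 FROM Assignment
                                 WHERE Assignment.EID = Employee.ID)
                    THEN 'true' ELSE 'false'
               END AS Assigned
        FROM Employee;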


  • filtering for multiple values on one column. All values must exist, else - return zero

    - by Andrew
    Hello All, I would like to filter one column in a table for a couple of values and show results only if all those values are there. If one or more are missing, then return zero results.

    Example table:

        +----+--------+--------+
        | id | Fruit  | Color  |
        +----+--------+--------+
        | 1  | apple  | red    |
        | 2  | mango  | yellow |
        | 3  | banana | yellow |
        +----+--------+--------+

    Example "wrong" code (this must return 3 rows):

        select Fruit FROM table WHERE Color = red AND Color = yellow

    but

        select Fruit FROM table WHERE Color = red AND Color = green

    must return 0 rows. (If I use

        select Fruit FROM table WHERE Color = red OR Color = green

    I get 1 row, which is not what I need.) I am using PHP with a form where the user checks different checkboxes that represent different values of the same column. So when he selects multiple checkboxes, all those values should be in the result set, otherwise no result should be given. Thank you, Andrew
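
    A sketch of one way to express "all requested values must be present", assuming MySQL and a hypothetical table named fruits; the subquery counts how many of the requested colors actually exist, and the whole query returns nothing unless that count matches the number of values requested:

        SELECT f.Fruit
        FROM fruits f
        WHERE f.Color IN ('red', 'yellow')
          AND (SELECT COUNT(DISTINCT Color)
               FROM fruits
               WHERE Color IN ('red', 'yellow')) = 2;   -- 2 = number of requested colors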


  • Representing Sparse Data in PostgreSQL

    - by Chris S
    What's the best way to represent a sparse data matrix in PostgreSQL? The two obvious methods I see are:

    1. Store data in a single table with a separate column for every conceivable feature (potentially millions), but with a default value of NULL for unused features. This is conceptually very simple, but I know that with most RDBMS implementations this is typically very inefficient, since the NULL values usually take up some space. However, I read an article (can't find its link, unfortunately) that claimed PG doesn't take up space for NULL values, making it better suited for storing sparse data.

    2. Create separate "row" and "column" tables, as well as an intermediate table to link them and store the value for the column at that row. I believe this is the more traditional RDBMS solution, but there's more complexity and overhead associated with it.

    I also found PostgreDynamic, which claims to better support sparse data, but I don't want to switch my entire database server to a PG fork just for this feature. Are there any other solutions? Which one should I use?
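
    A sketch of the second option (the row/column/value layout) with illustrative names; only cells that actually hold a value are stored:

        CREATE TABLE matrix_row (
            row_id serial PRIMARY KEY
        );

        CREATE TABLE feature (
            feature_id serial PRIMARY KEY,
            name       text NOT NULL UNIQUE
        );

        CREATE TABLE cell (
            row_id     integer NOT NULL REFERENCES matrix_row(row_id),
            feature_id integer NOT NULL REFERENCES feature(feature_id),
            value      double precision NOT NULL,
            PRIMARY KEY (row_id, feature_id)
        );

        -- Reading one row's sparse feature vector:
        SELECT f.name, c.value
        FROM cell c
        JOIN feature f ON f.feature_id = c.feature_id
        WHERE c.row_id = 123;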

