Search Results

Search found 25640 results on 1026 pages for 'alter table'.


  • How to Store and Retrieve Images Using MsSQL (Server Management Studio)

    - by Joe Majewski
    I am having difficulties inserting files into an MSSQL database. I'll try to break this down as best I can:

    1. What data type should I use to store image files (jpeg/png/gif/etc.)? Right now my table uses the image data type, but I am curious whether varbinary would be a better option.
    2. How would I go about inserting the image into the database? Does Microsoft SQL Server Management Studio have any built-in functions that allow inserting files into tables? If so, how is that done? Also, how could this be done through an HTML form, with PHP handling the input data and placing it into the table?
    3. How would I fetch the image from the table and display it on a page? I understand how to SELECT the cell's contents, but how would I go about translating that into a picture? Would I have to send a header('Content-Type: image/jpeg')?

    I have no problem doing any of these things with MySQL, but the MSSQL environment is still new to me, and I am working on a project for my job that requires the use of stored procedures to grab various data. Any and all help is appreciated. Thank you very much for your responses!
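
    A minimal T-SQL sketch of the varbinary(max) approach the question is weighing; the table and file names here are hypothetical, and OPENROWSET(BULK ...) reads the file on the server, so it needs a path and permissions the SQL Server service can use:

        -- Store images in a varbinary(max) column rather than the deprecated image type.
        CREATE TABLE dbo.Pictures (
            PictureId INT IDENTITY(1,1) PRIMARY KEY,
            FileName  NVARCHAR(260) NOT NULL,
            MimeType  NVARCHAR(100) NOT NULL,
            Content   VARBINARY(MAX) NOT NULL
        );

        -- Insert a file directly from Management Studio using OPENROWSET in BULK mode.
        INSERT INTO dbo.Pictures (FileName, MimeType, Content)
        SELECT 'logo.jpg', 'image/jpeg', BulkColumn
        FROM OPENROWSET(BULK N'C:\images\logo.jpg', SINGLE_BLOB) AS img;

        -- Read it back; a web page would stream Content with the matching Content-Type header.
        SELECT Content, MimeType FROM dbo.Pictures WHERE PictureId = 1;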


  • mysql subquery strangely slow

    - by aviv
    I have a query that selects from a sub-query. While the two queries look almost the same, the second one runs much slower.

        SELECT user.id, user.first_name -- user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    This query takes 1.2s. EXPLAIN on this query gives:

        id  select_type         table      type            possible_keys  key         key_len  ref   rows    Extra
        1   PRIMARY             user       index                          first_name  152            141192  Using where; Using index
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4  func  1  Using where

    The second query:

        SELECT -- user.id, user.first_name
               user.*
        FROM user
        WHERE user.id IN (SELECT ref_id
                          FROM education
                          WHERE ref_type = 'user'
                            AND education.institute_id = '58'
                            AND education.institute_type = '1');

    takes 45s to run, with EXPLAIN:

        id  select_type         table      type            possible_keys  key  key_len  ref  rows    Extra
        1   PRIMARY             user       ALL                                               141192  Using where
        2   DEPENDENT SUBQUERY  education  index_subquery  ref_type,ref_id,institute_id,institute_type,ref_type_2  ref_id  4  func  1  Using where

    Why is it slower if I query only indexed fields? Why do both queries scan the full length of the user table? Any ideas how to improve this? Thanks.
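
    A sketch of the join rewrite that usually helps here: the EXPLAIN above shows the IN (...) being treated as a DEPENDENT SUBQUERY, i.e. re-run per outer row on older MySQL versions, so materializing it as a derived table avoids that. Column names are taken from the question:

        SELECT u.*
        FROM user u
        JOIN (SELECT DISTINCT ref_id
              FROM education
              WHERE ref_type = 'user'
                AND institute_id = '58'
                AND institute_type = '1') e ON e.ref_id = u.id;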


  • Visual Studio + Database Edition + CDC = Deploy Fail

    - by Ben
    Hi all, I've got a database using change data capture (CDC) that is created from a Visual Studio database project (GDR2). My problem is that I have a stored procedure that analyzes the CDC information and then returns data. How is that a problem, you ask? Well, the order of operations is as follows:

    1. Pre-deployment script
    2. Tables
    3. Indexes, keys, etc.
    4. Procedures
    5. Post-deployment script

    The post-deployment script is where I enable CDC, and herein lies the problem: the procedure that acts on the CDC tables is bombing because they don't exist yet! I've tried putting the call to sys.sp_cdc_enable_table in the script that creates the table, but it doesn't like that:

        Error 102 TSD03070: This statement is not recognized in this context.  C:...\Schema Objects\Schemas\dbo\Tables\Foo.table.sql 20 1 Foo

    Is there a better/built-in way to enable CDC so that its objects are available when the stored procedures are created? Is there a way to run a script after tables are created but before other objects are created? How about a way to create the procedure, dependencies be damned? Or maybe I'm just doing things that shouldn't be done?!

    I do have a workaround:

    1. Comment out the sproc body
    2. Deploy (CDC is created)
    3. Uncomment the sproc
    4. Deploy again

    Everything is great until the next time I update a CDC-tracked table; then I need to comment out the 'offending' procedure again. Thanks for reading my question and thanks for your help!
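
    A minimal post-deployment sketch, assuming a dbo.Foo table as in the error above; it enables CDC only when the table is not already tracked, so repeated deploys do not error out:

        -- Enable CDC at the database level once.
        IF NOT EXISTS (SELECT 1 FROM sys.databases
                       WHERE database_id = DB_ID() AND is_cdc_enabled = 1)
            EXEC sys.sp_cdc_enable_db;

        -- Enable CDC for dbo.Foo only if it is not already tracked.
        IF NOT EXISTS (SELECT 1 FROM sys.tables
                       WHERE object_id = OBJECT_ID(N'dbo.Foo') AND is_tracked_by_cdc = 1)
            EXEC sys.sp_cdc_enable_table
                 @source_schema = N'dbo',
                 @source_name   = N'Foo',
                 @role_name     = NULL;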


  • Frequent error in Oracle ORA-04068: existing state of packages has been discarded

    - by martilyo
    Hi, we're getting this error once a day from a script that runs every two hours, but at different times of the day:

        ERROR at line 1:
        ORA-04068: existing state of packages has been discarded
        ORA-04061: existing state of package body "PACKAGE.NAME" has been invalidated
        ORA-06508: PL/SQL: could not find program unit being called: "PACKAGE.NAME"
        ORA-06512: at line 1

    Could someone list the conditions that can cause this error so that we can investigate? Thanks.

    UPDATE: Would executing 'ALTER SESSION CLOSE DATABASE LINK DBLINK' invalidate the state of the package?
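
    Not from the question itself, just a sketch of the kind of check that helps narrow this down: ORA-04068 is typically seen right after the package body has been recompiled or invalidated (for example by DDL on an object it depends on, or a redeployment). The package name below is a placeholder:

        -- Look for recently invalidated or recompiled packages around the failure time.
        SELECT owner, object_name, object_type, status, last_ddl_time
        FROM   all_objects
        WHERE  object_type IN ('PACKAGE', 'PACKAGE BODY')
        AND    object_name = 'PACKAGE_NAME'
        ORDER BY last_ddl_time DESC;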


  • [PHP] Kohana-v3 ORM parent relationship

    - by VDVLeon
    Hi all, I just started with version 3 of the Kohana framework and have worked a little with $_has_many etc. Now I have a table called pages. The primary key is pageID, and the table has a column called parentPageID. I want to build an ORM model so that $page->parent->find() returns the page identified by parentPageID. I have the following already:

        // Settings
        protected $_table_name  = 'pages';
        protected $_primary_key = 'pageID';

        protected $_has_one = array(
            'parent' => array(
                'model'       => 'page',
                'foreign_key' => 'parentPageID',
            ),
        );

    But that does not work; it simply returns the first page from the table. The last query says:

        SELECT `pages`.* FROM `pages` ORDER BY `pages`.`pageID` ASC LIMIT 1

    Does somebody know how to solve this? I know this works:

        $parent = $page->parent->find($page->parentPageID);

    but it must be (and can be) cleaner, I think.


  • CakePHP hasMany relationship with multiple columns

    - by Muhammad Yasir
    Hi, I am using the CakePHP framework to build a web application. The simplest form of my problem is this: I have a users table and a messages table with corresponding models. Messages are sent from one user to another, so the messages table has columns from_id and to_id, both referencing the id of users. I am able to link the Message model to the User model using $belongsTo, but I am unable to link the User model to the Message model (in the reverse direction) using $hasMany in the same manner:

        var $hasMany = array(
            'From' => array(
                'className'    => 'Message',
                'foreignKey'   => 'from_id',
                'dependent'    => false,
                'conditions'   => '',
                'fields'       => '',
                'order'        => '',
                'limit'        => '',
                'offset'       => '',
                'exclusive'    => '',
                'finderQuery'  => '',
                'counterQuery' => ''
            ),
            'To' => array(
                'className'    => 'Message',
                'foreignKey'   => 'to_id',
                'dependent'    => false,
                'conditions'   => '',
                'fields'       => '',
                'order'        => '',
                'limit'        => '',
                'offset'       => '',
                'exclusive'    => '',
                'finderQuery'  => '',
                'counterQuery' => ''
            )
        );

    What can I do here? Any ideas? Thanks for any help.


  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible.

    In the C++ world, I would normally use something like a std::pair<string,string> to act as a key-value pair structure, and have a std::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, developers can use a const string key like INFO, WARNING, or ERROR to get consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank.

    So one option is to use a List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the key-value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data); then a background thread dumps the information to the database.

    My specific questions are:

    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NET-ish way to implement the key-value pairs, other than List<List<KeyValuePair<String,String>>>?
    3. Even if there is a better way to do #2, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but ideally we would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.
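
    A minimal SQL sketch of the third option mentioned above (a separate key-value table joined back to the log entry); the table and column names are hypothetical:

        CREATE TABLE LogEntry (
            LogEntryId INT IDENTITY(1,1) PRIMARY KEY,
            LoggedAt   DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
            Severity   VARCHAR(10) NOT NULL          -- INFO, WARNING, ERROR
        );

        CREATE TABLE LogEntryField (
            LogEntryId INT NOT NULL REFERENCES LogEntry(LogEntryId),
            FieldName  VARCHAR(100) NOT NULL,        -- e.g. USER, FILENAME
            FieldValue NVARCHAR(MAX) NULL,
            PRIMARY KEY (LogEntryId, FieldName)
        );

        -- A report left-joins the fields it cares about; entries without a
        -- FILENAME field simply come back NULL (shown as blank in the viewer).
        SELECT e.LogEntryId, e.Severity, f.FieldValue AS FileName
        FROM   LogEntry e
        LEFT JOIN LogEntryField f
               ON f.LogEntryId = e.LogEntryId AND f.FieldName = 'FILENAME'
        WHERE  e.Severity = 'INFO';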


  • SQL Query Math Gymnastics

    - by keruilin
    I have two tables of concern here: users and race_weeks. User has many race_weeks, and race_week belongs to User; therefore user_id is a FK in the race_weeks table. I need to perform some challenging math on fields in the race_weeks table in order to return the users with the most all-time points. Here are the fields we need to manipulate in the race_weeks table:

    - races_won (int)
    - races_lost (int)
    - races_tied (int)
    - points_won (int, positive or negative)
    - recordable_type (varchar; robots can race, but we're only concerned with type 'User')

    Just so that you fully understand the business logic at work here: over the course of a week a user can participate in many races, and the race_week record represents the summary results of the user's races for that week. A user is considered active for the week if races_won, races_lost, or races_tied is greater than 0; otherwise the user is inactive.

    So here's what we need to do in our query in order to return the users with the most points won (actually net_points_won): calculate each user's net_points_won (not a field in the DB). To calculate net_points, you take (1000 * count_of_active_weeks) - sum(points_won). (Why 1000? Just imagine that every week the user is spotted 1000 points to compete and enter races. We want to factor out what we spot the user, because the user could enter only one race for the week for 100 points and be sitting on 900, which would skew who actually EARNED the most points.)

    This one is a little convoluted, so let me know if I can clarify further.
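
    A sketch of the aggregate this description maps to, using the race_weeks columns listed above; users.id is assumed, and the net-points formula is written exactly as stated in the question:

        SELECT u.id,
               (1000 * SUM(CASE WHEN rw.races_won > 0
                                  OR rw.races_lost > 0
                                  OR rw.races_tied > 0 THEN 1 ELSE 0 END))
               - SUM(rw.points_won) AS net_points_won
        FROM   users u
        JOIN   race_weeks rw ON rw.user_id = u.id
        WHERE  rw.recordable_type = 'User'
        GROUP BY u.id
        ORDER BY net_points_won DESC;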


  • Improve Log Exceptions

    - by Jaider
    I am planning to use log4net in a new web project. In my experience I've seen how big the log table can get, and I've also noticed that errors and exceptions are repeated. For instance, I just queried a log table that has more than 132,000 records, and using DISTINCT I found that only about 2,500 records are unique (~2%); the others (~98%) are just duplicates.

    So I came up with this idea to improve logging: add a couple of new columns, counter and updated_dt, that are updated every time an attempt is made to insert the same record. If we want to track the users who caused the exception, we need to create a user_log or log_user table to map the N-N relationship.

    Implementing this model could make the system slow and inefficient if it has to compare all those long text values. Here's the trick: also add a hash column (binary, 16 or 32 bytes) that hashes the message and the exception, and configure an index on it. We can use HASHBYTES to help us. I am not an expert in databases, but I think that would be the fastest way to locate a similar record. And because hashing doesn't guarantee uniqueness, it only helps us find candidate records much faster; we can then compare by message or exception directly to make sure they really are duplicates.

    This is a theoretical/practical solution, but will it work, or will it just bring more complexity? What aspects am I leaving out, or what other considerations should I have? A trigger could do the insert-or-update work, but is a trigger the best way to do it?
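
    A minimal T-SQL sketch of the hash-and-count idea described above; the table layout is hypothetical:

        CREATE TABLE ErrorLog (
            ErrorLogId  INT IDENTITY(1,1) PRIMARY KEY,
            Message     NVARCHAR(MAX) NOT NULL,
            Exception   NVARCHAR(MAX) NULL,
            MessageHash BINARY(32)    NOT NULL,  -- SHA2_256 of message + exception
            Counter     INT           NOT NULL DEFAULT 1,
            UpdatedDt   DATETIME2     NOT NULL DEFAULT SYSUTCDATETIME()
        );
        CREATE INDEX IX_ErrorLog_MessageHash ON ErrorLog (MessageHash);

        -- Insert-or-increment for an incoming message/exception pair.
        -- Note: SHA2_256 needs SQL Server 2012+; before 2016 HASHBYTES input is capped at 8000 bytes.
        DECLARE @msg NVARCHAR(MAX) = N'Some error message',
                @exc NVARCHAR(MAX) = N'Some stack trace';
        DECLARE @hash BINARY(32) = HASHBYTES('SHA2_256', CONCAT(@msg, N'|', @exc));

        UPDATE ErrorLog
        SET    Counter = Counter + 1, UpdatedDt = SYSUTCDATETIME()
        WHERE  MessageHash = @hash AND Message = @msg;

        IF @@ROWCOUNT = 0
            INSERT INTO ErrorLog (Message, Exception, MessageHash)
            VALUES (@msg, @exc, @hash);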


  • jquery change event not working with IE6

    - by manivineet
    It is indeed quite unfortunate that my client still uses IE6. I'm using jQuery 1.4.2.

    The problem is that I open a window using a click event and do some edit operations in the new window. I have a 'change' event attached to the rows of a table that contains input fields. When the window loads for the first time and I make a change in the input for the FIRST time, the change event does not fire; however, on a second try it starts working. I have noticed that if I create a new page (I work with Visual Studio) and run that dummy page individually, the 'change' event works just fine. What is going on, and what can I do, besides going back to 1.3.2 (which, by the way, doesn't work either, but I haven't fully tested it yet)?

        <!-- HTML -->
        <table id="tbReadData">
          <tr class="nenDataRow" id="nenDr2">
            <td>
              <input type="text" class="nenMeterRegister" value="1234" />
            </td>
          </tr>
        </table>

        <script type="text/javascript">
          $(document).ready(function () {
            $('#tbReadData').find('tr').change(function () {
              alert('this works');
            });
          });
        </script>


  • MySQL Ratings From Two Tables

    - by DirtyBirdNJ
    I am using MySQL and PHP to build a data layer for a Flash game. Retrieving lists of levels is pretty easy, but I've hit a roadblock trying to fetch each level's average rating along with its pointer information. Here is an example data set:

    levels table:

        level_id | level_name
        1        | Some Level
        2        | Second Level
        3        | Third Level

    ratings table:

        rating_id | level_id | rating_value
        1         | 1        | 3
        2         | 1        | 4
        3         | 1        | 1
        4         | 2        | 3
        5         | 2        | 4
        6         | 2        | 1
        7         | 3        | 3
        8         | 3        | 4
        9         | 3        | 1

    I know this requires a join, but I cannot figure out how to get the average rating value based on the level_id when I request a list of levels. This is what I'm trying to do:

        SELECT levels.level_id,
               AVG(ratings.level_rating WHERE levels.level_id = ratings.level_id)
        FROM levels

    I know my SQL is flawed there, but I can't figure out how to get this concept across. The only thing I can get to work is returning a single average from the entire ratings table, which is not very useful. Ideal output from the above conceptually valid but syntactically awry query would be:

        level_id | level_rating
        1        | 3.34
        2        | 1.00
        3        | 4.54

    My main issue is that I can't figure out how to use the level_id of each result row before the query has returned. It's like I want to use a placeholder... or an alias... I really don't know, and it's very frustrating. The solution I have in place now is an EPIC band-aid and will only cause me problems long term... please help!
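
    A sketch of the join-plus-GROUP BY shape this is reaching for, using the column names from the example data above (ratings.rating_value rather than level_rating):

        SELECT l.level_id,
               l.level_name,
               AVG(r.rating_value) AS level_rating
        FROM   levels l
        LEFT JOIN ratings r ON r.level_id = l.level_id
        GROUP BY l.level_id, l.level_name;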


  • Using complex where clause in NHibernate mapping layer

    - by JLevett
    I've used where clauses in the mapping layer before to prevent certain records from ever getting into my application at the lowest level possible (mainly to avoid rewriting lots of lines of code to filter out the unwanted records). These have been simple, one-column conditions, like so:

        this.Where("Invisible = 0");

    However, a scenario has appeared which requires the use of an EXISTS SQL query:

        exists (select ep_.Id from [Warehouse].[dbo].EventPart ep_
                where Id = ep_.EventId and ep_.DataType = 4

    In the above case I would usually reference the parent table Event with a short name, i.e. event_.Id, but since NHibernate generates these short names dynamically, it's impossible to know what it's going to be. So instead I tried using just Id, as in "where Id = ep_.EventId" above.

    When the code runs, because of the dynamic short names, the EventPart table short name ep_ has another short name prefixed to it, event0_.ep_, where event0_ refers to the parent table. This causes a SQL error because of the . between event0_ and ep_.

    So in my EventMap I have the following:

        this.Where("(exists (select ep_.Id from [isnapshot.Warehouse].[dbo].EventPart ep_ where Id = ep_.EventId and ep_.DataType = 4)");

    but when it's generated it creates this:

        select cast(count(*) as INT) as col_0_0_
        from [isnapshot.Warehouse].[dbo].Event event0_
        where (exists (select ep_.Id
                       from [isnapshot.Warehouse].[dbo].EventPart event0_.ep_
                       where event0_.Id = ep_.EventId and ep_.DataType = 4)

    It has correctly added event0_ to the Id. Was the mapping layer where clause built to handle this, and if so, where am I going wrong?


  • How to insert an element between the two elements dynamically?

    - by Harish
    I am using a table with buttons in it; on a button click I want a new TR element to be inserted between two TRs, or at the end of the TRs. My code goes here:

        <table>
          <tbody>
            <tr>
              <td>
                <input type="submit" value="Add" onclick="addFunction()" />
              </td>
            </tr>
            <tr>
              <td>
                <input type="submit" value="Add" onclick="addFunction()" />
              </td>
            </tr>
            <tr>
              <td>
                <input type="submit" value="Add" onclick="addFunction()" />
              </td>
            </tr>
          </tbody>
        </table>

    I want to insert the new TR element next to the element which triggered the event.

    NOTE: I am not using any JavaScript library, just plain JavaScript.


  • SQL Query to duplicate records based on If statement

    - by user328371
    Hi, I'm trying to write an SQL query that will duplicate records depending on a field in another table. I am running MySQL 5. (I know that duplicating records shows the database structure is bad, but I did not design the database and am not in a position to redo it all; it's a Shopp e-commerce database running on WordPress.)

    Each product with a particular attribute needs a link to the same few images, so the product will need a row per image in a table; the database doesn't actually contain the image, just its filename. (The images are clipart for a customer to select from.)

    Based on these records...

        SELECT * FROM `wp_shopp_spec` WHERE name = 'Can Be Personalised' AND content = 'Yes'

    ...I want to do something like this: for each record that matches that query, copy records 5134 - 5139 from wp_shopp_asset but change the id so it's unique, and set the cell in the 'parent' column to the value of 'product' from the table wp_shopp_spec. This will mean 6 new records are created for each record matching the above query, all with the same value in 'parent' but with unique ids and every other column copied from the original (i.e. records 5134 - 5139).

    Hope that's clear enough; any help greatly appreciated.
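
    A sketch of the INSERT ... SELECT cross join this describes; the copied columns of wp_shopp_asset (`name`, `value`) are hypothetical placeholders, and it assumes id is auto-increment so omitting it generates the new unique ids:

        INSERT INTO wp_shopp_asset (parent, `name`, `value` /* , ...other copied columns */)
        SELECT s.product, a.`name`, a.`value` /* , ...other copied columns */
        FROM   wp_shopp_asset a
        CROSS JOIN wp_shopp_spec s
        WHERE  a.id BETWEEN 5134 AND 5139
        AND    s.name = 'Can Be Personalised'
        AND    s.content = 'Yes';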


  • Performance of VIEW vs. SQL statement

    - by Matt W.
    I have a query that goes something like the following:

        select <field list>
        from <table list>
        where <join conditions>
          and <condition list>
          and PrimaryKey in (select PrimaryKey from <table list>
                             where <join list> and <condition list>)
          and PrimaryKey not in (select PrimaryKey from <table list>
                                 where <join list> and <condition list>)

    The sub-select queries both have multiple sub-selects of their own that I'm not showing, so as not to clutter the statement. One of the developers on my team thinks a view would be better. I disagree, because the SQL statement uses variables passed in by the program (based on the user's login Id).

    Are there any hard and fast rules on when a view should be used vs. a SQL statement? What kind of performance issues are there in running SQL statements on their own against regular tables vs. against views? (Note that all the join / where conditions are against indexed columns, so that shouldn't be an issue.)

    EDIT for clarification... here's the query I'm working with:

        select obj_id from object
        where obj_id in(
            (select distinct(sec_id) from security
             where sec_type_id = 494
               and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                  or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                      and sec_usergroup_type_id = 231) )
               and sec_obj_id in (
                   select obj_id from object
                   where obj_ot_id in
                       (select of_ot_id from obj_form
                        left outer join obj_type on ot_id = of_ot_id
                        where ot_app_id = 87
                          and of_id in (select sec_obj_id from security
                                        where sec_type_id = 493
                                          and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                             or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                                 and sec_usergroup_type_id = 231) ) )
                          and of_usage_type_id = 131 ) ) ) )
           or (obj_ot_id in
                  (select of_ot_id from obj_form
                   left outer join obj_type on ot_id = of_ot_id
                   where ot_app_id = 87
                     and of_id in (select sec_obj_id from security
                                   where sec_type_id = 493
                                     and ( (sec_usergroup_id = 3278 and sec_usergroup_type_id = 230)
                                        or (sec_usergroup_id in (select ug_gi_id from user_group where ug_ui_id = 3278)
                                            and sec_usergroup_type_id = 231) ) )
                     and of_usage_type_id = 131 )
               and obj_id not in (select sec_obj_id from security where sec_type_id = 494) )


  • Getting weird issue with TO_NUMBER function in Oracle

    - by Fazal
    I have been getting an intermittent issue when executing the TO_NUMBER function in the WHERE clause on a VARCHAR2 column if the number of records exceeds a certain number n. I say n because there is no exact record count at which it happens; on one DB it happened when n was 1 million, on another when it was 0.1 million.

    For example, I have a table, say Country, with 10 million records, which has a column field1 VARCHAR2 containing numeric data, and an id. If I run this query:

        select * from country where to_number(field1) = 23 and id > 1 and id < 100000

    it works. But if I run:

        select * from country where to_number(field1) = 23 and id > 1 and id < 100001

    it fails, saying "invalid number". Next I try:

        select * from country where to_number(field1) = 23 and id > 2 and id < 100001

    and it works again. Since all I got was "invalid number" it was confusing, but the log file said:

        Memory Notification: Library Cache Object loaded into SGA
        Heap size 3823K exceeds notification threshold (2048K)
        KGL object name :with sqlplan as ( select c006 object_owner, c007 object_type,c008 object_name
            from htmldb_collections where COLLECTION_NAME='HTMLDB_QUERY_PLAN'
            and c007 in ('TABLE','INDEX','MATERIALIZED VIEW','INDEX (UNIQUE)')),
        ws_schemas as( select schema from wwv_flow_company_schemas where security_group_id = :flow_security_group_id),
        t as( select s.object_owner table_owner,s.object_name table_name, d.OBJECT_ID
            from sqlplan s,sys.dba_objects d

    It seems related to SGA size, but Google did not give me much help on this. Does anyone have any idea about this issue with TO_NUMBER, or with Oracle functions on large data sets?
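
    A sketch of the usual defensive rewrite for cases where a VARCHAR2 column may hold the odd non-numeric row and the optimizer is free to evaluate TO_NUMBER before the id filter; whether that is actually the cause here is an assumption:

        select *
        from   country
        where  id > 1 and id < 100001
        and    to_number(
                 case when regexp_like(trim(field1), '^-?[0-9]+(\.[0-9]+)?$')
                      then field1
                 end) = 23;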


  • dynamic Grid columns

    - by stck777
    Hi, I need help with dynamic columns in a DataGrid. I use a GenericFrame front-end with a PHP backend. If I use static columns like this, it works fine:

        <? ... ?>
        <DataGrid id="DataGrid1" width="100%">
          <columns>
            <DataGridColumn headerText="name" dataField="@username" width="150"/>
            <DataGridColumn headerText="Nahcname" dataField="@secondname" width="150"/>
            <DataGridColumn headerText="alter" dataField="@age" width="40"/>
          </columns>
        </DataGrid>
        <? ... ?>

    But when I try to create the columns dynamically with PHP:

        <generic>
          <template target="gridbox">
            <VBox id="dynamic" height="100%">
              <!-- DataGrid -->
              <DataGrid id="DataGrid1" width="100%">
                <columns>
                  <?php
                  $columns = array(
                      // column => (width, data field)
                      "name"     => array(150, "@username"),
                      "Nahcname" => array(150, "@secondname"),
                      "alter"    => array(40,  "@age")
                  );
                  foreach ($columns as $key => $value) {
                  ?>
                    <DataGridColumn headerText="<? echo $key; ?>" dataField="<? echo $value[0]; ?>" width="<? echo $value[0];?>"/>
                  <?php } ?>
                </columns>
              </DataGrid>
              <Binding source="templatedata.data1.item" destination="DataGrid1.dataProvider" />
            </VBox>
          </template>
          <templatedata>
            <data1>
              <!-- data -->
              <item username="User1" secondname="Nachname1" age="22"/>
              <item username="User2" secondname="Nachname2" age="25"/>
              <item username="User3" secondname="Nachname3" age="27"/>
              <item username="User4" secondname="Nachname4" age="32"/>
            </data1>
          </templatedata>

    the DataGrid is displayed correctly, but without data. Any idea why?


  • How to compare 2 complex spreadsheets running in parallel for consistency with each other?

    - by tbone
    I am working on converting a large number of spreadsheets to use a new third-party data access library (converting from third-party library #1 to third-party library #2). FYI: a call to a UDF (user-defined function) is placed in a cell, and when that is refreshed, it pulls the data into a pivot table below the formula. Both libraries behave the same and produce the same output, except that small irregularities can arise, such as an additional field being shown in the output pivot table with library #2, which can affect formulas on the sheet if data is being read from the pivot table without using GetPivotData.

    So I have ~100 of these very complicated (20+ worksheets per workbook) spreadsheets that I have to convert and run in parallel for a period of time, to see whether the output using the new data access library matches the old library. Is there some clever approach, so I don't have to spend a large amount of time analyzing each sheet to determine the specific elements to compare? Two rough ideas come to mind:

    1. Create a Validator workbook that has the same number of worksheets, and simply do Workbook1!Worksheet1!A1 - Workbook2!Worksheet3!A1 for every possible cell on each sheet.
    2. Roughly the equivalent of #1, but traverse the cells in the two books using VBA and log any cells that do not match.

    I don't particularly like either idea; can anyone think of something better, maybe some third-party utility I could buy?


  • PHP/MySQL allowing current user to edit their account information

    - by user1837896
    I have created two pages: update.php and edit.php. We start on edit.php, so here is edit.php's script:

        <?php
        session_start();
        $id       = $_SESSION["id"];
        $username = $_POST["username"];
        $fname    = $_POST["fname"];
        $password = $_POST["password"];
        $email    = $_POST["email"];

        mysql_connect('mysql13.000webhost.com', 'a2670376_Users', 'Password') or die(mysql_error());
        echo "MySQL Connection Established! <br>";
        mysql_select_db("a2670376_Pass") or die(mysql_error());
        echo "Database Found! <br>";

        $query = "UPDATE members SET username = '$username', fname = '$fname', password = '$password' WHERE id = '$id'";
        $res = mysql_query($query);
        if ($res)
            echo "<p>Record Updated<p>";
        else
            echo "Problem updating record. MySQL Error: " . mysql_error();
        ?>
        <form action="update.php" method="post">
          <input type="hidden" name="id" value="<?=$id;?>">
          ScreenName:<br>
          <input type='text' name='username' id='username' maxlength='25' style='width:247px' name="username" value="<?=$username;?>"/><br>
          FullName:<br>
          <input type='text' name='fname' id='fname' maxlength='20' style='width:248px' name="ud_img" value="<?=$fname;?>"/><br>
          Email:<br>
          <input type='text' name='email' id='email' maxlength='50' style='width:250px' name="ud_img" value="<?=$email;?>"/><br>
          Password:<br>
          <input type='text' name='password' id='password' maxlength='25' style='width:251px' value="<?=$password;?>"/><br>
          <input type="Submit">
        </form>

    Now here is the update.php page, where I am having the MAJOR problem:

        <?php
        session_start();
        mysql_connect('mysql13.000webhost.com', 'a2670376_Users', 'Password') or die(mysql_error());
        mysql_select_db("a2670376_Pass") or die(mysql_error());

        $id       = (int)$_SESSION["id"];
        $username = mysql_real_escape_string($_POST["username"]);
        $fname    = mysql_real_escape_string($_POST["fname"]);
        $email    = mysql_real_escape_string($_POST["email"]);
        $password = mysql_real_escape_string($_POST["password"]);

        $query = "UPDATE members SET username = '$username', fname = '$fname', email = '$email', password = '$password' WHERE id='$id'";
        mysql_query($query) or die(mysql_error());
        if (mysql_affected_rows() >= 1) {
            echo "<p>($id) Record Updated<p>";
        } else {
            echo "<p>($id) Not Updated<p>";
        }
        ?>

    On edit.php I fill out the form to edit the account "test" while I am logged into it. Once the form is filled out, I click the Submit button, it takes me to update.php, and it returns this:

        (0) Not Updated

    Here (0) is the id of the user logged in, and "Not Updated" is the message from the mysql_query($query) or die(mysql_error()); / if (mysql_affected_rows() >= 1) check in update.php.

    I want it to update the user who is logged in, and if I am not mistaken, $id = (int)$_SESSION["id"]; should update the user with the id of the person who is logged in, but it isn't updating; it says no rows were affected. If it helps, here is a picture of my MySQL database: http://i50.tinypic.com/21juqfq.png

    In case it helps find the solution, I have two more files, delete.php and delete_ac.php. They can remove users from my SQL database, they show the user id, and they work; there are no bugs in those scripts at all. PLEASE DO NOT MAKE SUGGESTIONS FOR THE SCRIPTS BELOW.

    delete.php first:

        <?php
        $host     = "mysql13.000webhost.com"; // Host name
        $username = "a2670376_Users";         // Mysql username
        $password = "PASSWORD";               // Mysql password
        $db_name  = "a2670376_Pass";          // Database name
        $tbl_name = "members";                // Table name

        // Connect to server and select database.
        mysql_connect("$host", "$username", "$password") or die("cannot connect");
        mysql_select_db("$db_name") or die("cannot select DB");

        // select record from mysql
        $sql = "SELECT * FROM $tbl_name";
        $result = mysql_query($sql);
        ?>
        <table border="0" cellpadding="3" cellspacing="1" bgcolor="#CCCCCC">
          <tr>
            <td colspan="8" style="bgcolor: #FFFFFF"><strong><img src="http://i47.tinypic.com/u6ihk.png" height="30" widht="30">Delete data in mysql</strong></td>
          </tr>
          <tr>
            <td align="center" bgcolor="#FFFFFF"><strong>Id</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>UserName</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>FullName</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>Password</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>Email</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>Date</strong></td>
            <td align="center" bgcolor="#FFFFFF"><strong>Ip</strong></td>
            <td align="center" bgcolor="#FFFFFF">&nbsp;</td>
          </tr>
        <?php while($rows = mysql_fetch_array($result)){ ?>
          <tr>
            <td bgcolor="#FFFFFF"><? echo $rows['id']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['username']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['fname']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['password']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['email']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['date']; ?></td>
            <td bgcolor="#FFFFFF"><? echo $rows['ip']; ?></td>
            <td bgcolor="#FFFFFF"><a href="delete_ac.php?id=<? echo $rows['id']; ?>">delete</a></td>
          </tr>
        <?php
        // close while loop
        }
        ?>
        </table>
        <?php
        // close connection;
        sql_close();
        ?>

    And now delete_ac.php:

        <table width="500" border="0" cellpadding="3" cellspacing="1" bgcolor="#CCCCCC">
          <tr>
            <td colspan="8" bgcolor="#FFFFFF"><strong><img src="http://t2.gstatic.com/images?q=tbn:ANd9GcS_kwpNSSt3UuBHxq5zhkJQAlPnaXyePaw07R652f4StmvIQAAf6g" height="30" widht="30">Removal Of Account</strong></td>
          </tr>
          <tr>
            <td align="center" bgcolor="#FFFFFF">
            <?php
            $host     = "mysql13.000webhost.com"; // Host name
            $username = "a2670376_Users";         // Mysql username
            $password = "javascript00";           // Mysql password
            $db_name  = "a2670376_Pass";          // Database name
            $tbl_name = "members";                // Table name

            // Connect to server and select databse.
            mysql_connect("$host", "$username", "$password") or die("cannot connect");
            mysql_select_db("$db_name") or die("cannot select DB");

            // get value of id that sent from address bar
            $id = $_GET['id'];

            // Delete data in mysql from row that has this id
            $sql = "DELETE FROM $tbl_name WHERE id='$id'";
            $result = mysql_query($sql);

            // if successfully deleted
            if($result){
                echo "Deleted Successfully";
                echo "<BR>";
                echo "<a href='delete.php'>Back to main page</a>";
            } else {
                echo "ERROR";
            }
            ?>
            <?php
            // close connection
            mysql_close();
            ?>
            </td>
          </tr>
        </table>


  • How do I update an NSTableView when its data source has changed?

    - by Jergason
    I am working along with Cocoa Programming for Mac OS X (a great book). One of the exercises in the book is to build a simple to-do program. The UI has a table view, a text field to type in a new item, and an "Add" button to add the new item to the table. On the back end I have a controller that is the data source and delegate for my NSTableView. The controller also implements an IBAction method called by the "Add" button, and it contains an NSMutableArray to hold the to-do list items. When the button is clicked, the action method fires correctly and the new string gets added to the mutable array. However, my data source methods are not being called correctly. Here they are:

        - (NSInteger)numberOfRowsInTableView:(NSTableView *)aTableView
        {
            NSLog(@"Calling numberOfRowsInTableView: %d", [todoList count]);
            return [todoList count];
        }

        - (id)tableView:(NSTableView *)aTableView
            objectValueForTableColumn:(NSTableColumn *)aTableColumn
                                  row:(NSInteger)rowIndex
        {
            NSLog(@"Returning %@ to be displayed", [todoList objectAtIndex:rowIndex]);
            return [todoList objectAtIndex:rowIndex];
        }

    Here is the rub: -numberOfRowsInTableView only gets called when the app first starts, not every time I add something new to the array. -objectValueForTableColumn never gets called at all; I assume this is because Cocoa is smart enough not to call this method when there is nothing to draw. Is there some method I need to call to let the table view know that its data source has changed and it should redraw itself?


  • Getting a "Specified cast is not valid" error while importing data from Excel using LINQ to SQL

    - by niceoneishere
    This is my second post. After learning from my first post how fantastic it is to use LINQ to SQL, I wanted to try to import data from an Excel sheet into my SQL database.

    First, my Excel sheet: it contains 4 columns, namely ItemNo, ItemSize, ItemPrice, and UnitsSold. I have created a database table, ProductsSold, with the following fields:

        Id        int not null identity  -- with auto increment set to true
        ItemNo    varchar(10) not null
        ItemSize  varchar(4) not null
        ItemPrice decimal(18,2) not null
        UnitsSold int not null

    Now I created a dal.dbml file based on my database, and I am trying to import the data from the Excel sheet into the DB table using the code below. Everything happens on the click of a button.

        private const string forecast_query =
            "SELECT ItemNo, ItemSize, ItemPrice, UnitsSold FROM [Sheet1$]";

        protected void btnUpload_Click(object sender, EventArgs e)
        {
            var importer = new LinqSqlModelImporter();
            if (fileUpload.HasFile)
            {
                var uploadFile = new UploadFile(fileUpload.FileName);
                try
                {
                    fileUpload.SaveAs(uploadFile.SavePath);
                    if (File.Exists(uploadFile.SavePath))
                    {
                        importer.SourceConnectionString = uploadFile.GetOleDbConnectionString();
                        importer.Import(forecast_query);
                        gvDisplay.DataBind();
                        pnDisplay.Visible = true;
                    }
                }
                catch (Exception ex)
                {
                    Response.Write(ex.Source.ToString());
                    lblInfo.Text = ex.Message;
                }
                finally
                {
                    uploadFile.DeleteFileNoException();
                }
            }
        }

    And here is the code for LinqSqlModelImporter:

        public class LinqSqlModelImporter : SqlImporter
        {
            public override void Import(string query)
            {
                // importing data using an OleDb command and inserting into the DB using LINQ to SQL
                using (var context = new WSDALDataContext())
                {
                    using (var myConnection = new OleDbConnection(base.SourceConnectionString))
                    using (var myCommand = new OleDbCommand(query, myConnection))
                    {
                        myConnection.Open();
                        var myReader = myCommand.ExecuteReader();
                        while (myReader.Read())
                        {
                            context.ProductsSolds.InsertOnSubmit(new ProductsSold()
                            {
                                ItemNo    = myReader.GetString(0),
                                ItemSize  = myReader.GetString(1),
                                ItemPrice = myReader.GetDecimal(2),
                                UnitsSold = myReader.GetInt32(3)
                            });
                        }
                    }
                    context.SubmitChanges();
                }
            }
        }

    Can someone please tell me where I am making the error, or whether I am missing something? This is driving me nuts. When I debugged, I got the error "when casting from a number, the value must be a number less than infinity". I really appreciate it.


  • Serializing and deserializing a map with key as string

    - by Grace K
    Hi! I intend to serialize and deserialize a HashMap whose key is a String. From Josh Bloch's Effective Java (p. 222), I understand the following:

    "For example, consider the case of a hash table. The physical representation is a sequence of hash buckets containing key-value entries. Which bucket an entry is placed in is a function of the hash code of the key, which is not, in general, guaranteed to be the same from JVM implementation to JVM implementation. In fact, it isn't even guaranteed to be the same from run to run on the same JVM implementation. Therefore, accepting the default serialized form for a hash table would constitute a serious bug. Serializing and deserializing the hash table could yield an object whose invariants were seriously corrupt."

    My questions are:

    1. In general, would overriding equals and hashCode in the key class of the map resolve this issue, so that the map can be correctly restored?
    2. If my key is a String, and the String class already overrides the hashCode() method, would I still have the problem described above? (I am seeing a bug which makes me think this is probably still a problem even though the key is a String with an overriding hashCode.)
    3. Previously, I got around this issue by serializing an array of (key, value) entries, and when deserializing I would reconstruct the map. I am wondering if there is a better approach.
    4. If the answers to questions 1 and 2 are that I still can't be guaranteed, could someone explain why? If the hashCodes are the same, would the entries go to the same buckets across JVMs?

    Thanks, Grace


  • comma separated values in oracle function body

    - by dmitry
    I've got the following Oracle function, but it does not work and errors out. I used Ask Tom's approach to convert a comma-separated value so it can be used in select * from table1 where col1 in ( ... ). Declared in the package header:

        TYPE myTableType IS table of varchar2 (255);

    Part of the package body:

        l_string  long default iv_value_with_comma_separated || ',';
        l_data    myTableType := myTableType();
        n         NUMBER;
        begin
            begin
                LOOP
                    EXIT when l_string is null;
                    n := instr( l_string, ',' );
                    l_data.extend;
                    l_data(l_data.count) := ltrim( rtrim( substr( l_string, 1, n-1 ) ) );
                    l_string := substr( l_string, n+1 );
                END LOOP;
            end;

            OPEN my_cursor FOR
                select * from table_a
                where column_a in (select * from table (l_data));
            CLOSE my_cursor;
        END;

    The above fails, but it works fine when I remove select * from table (l_data). Can someone please tell me what I might be doing wrong here?
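
    For reference, a sketch of the variant that generally works on older Oracle versions, where the collection type is created at schema level (a type declared only in a package spec usually cannot be used by the SQL engine's TABLE() operator before 12c); the names mirror the snippet above:

        CREATE OR REPLACE TYPE myTableType AS TABLE OF VARCHAR2(255);
        /

        -- Inside the package body, the cursor can then be opened as:
        -- OPEN my_cursor FOR
        --     SELECT * FROM table_a
        --     WHERE column_a IN (SELECT column_value FROM TABLE(l_data));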


  • Hibernate MapKeyManyToMany gives composite key where none exists

    - by larsrc
    I have a Hibernate (3.3.1) mapping of a map using a three-way join table:

        @Entity
        public class SiteConfiguration extends ConfigurationSet {
            @ManyToMany
            @MapKeyManyToMany(joinColumns=@JoinColumn(name="SiteTypeInstallationId"))
            @JoinTable(
                name="SiteConfig_InstConfig",
                joinColumns = @JoinColumn(name="SiteConfigId"),
                inverseJoinColumns = @JoinColumn(name="InstallationConfigId")
            )
            Map<SiteTypeInstallation, InstallationConfiguration> installationConfigurations =
                new HashMap<SiteTypeInstallation, InstallationConfiguration>();
            ...
        }

    The underlying table (in Oracle 11g) is:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITECONFIGID                   NOT NULL NUMBER(19)
        SITETYPEINSTALLATIONID         NOT NULL NUMBER(19)
        INSTALLATIONCONFIGID           NOT NULL NUMBER(19)

    The key entity used to have a three-column primary key in the database, but is now redefined as:

        @Entity
        public class SiteTypeInstallation implements IdResolvable {
            @Id
            @GeneratedValue(generator="SiteTypeInstallationSeq", strategy= GenerationType.SEQUENCE)
            @SequenceGenerator(name = "SiteTypeInstallationSeq", sequenceName = "SEQ_SiteTypeInstallation", allocationSize = 1)
            long id;

            @ManyToOne
            @JoinColumn(name="SiteTypeId")
            SiteType siteType;

            @ManyToOne
            @JoinColumn(name="InstalationRoleId")
            InstallationRole role;

            @ManyToOne
            @JoinColumn(name="InstallationTypeId")
            InstType type;
            ...
        }

    The table for this has a primary key 'Id' and foreign key constraints plus indexes for each of the other columns:

        Name                           Null     Type
        ------------------------------ -------- ----------
        SITETYPEID                     NOT NULL NUMBER(19)
        INSTALLATIONROLEID             NOT NULL NUMBER(19)
        INSTALLATIONTYPEID             NOT NULL NUMBER(19)
        ID                             NOT NULL NUMBER(19)

    For some reason, Hibernate thinks the key of the map is composite, even though it isn't, and gives me this error:

        org.hibernate.MappingException: Foreign key (FK1A241BE195C69C8:SiteConfig_InstConfig [SiteTypeInstallationId])) must have same number of columns as the referenced primary key (SiteTypeInstallation [SiteTypeId,InstallationRoleId])

    If I remove the annotations on installationConfigurations and make it transient, the error disappears. I am very confused why it thinks SiteTypeInstallation has a composite key at all when @Id clearly defines a simple key, and doubly confused why it picks exactly those two columns. Any idea why this happens? Is it possible that JBoss (5.0 EAP) + Hibernate somehow remembers a mistaken idea of the primary key across server restarts and code redeployments?

    Thanks in advance, -Lars


  • Problem with Active Record

    - by kshchepelin
    Hello everyone. Let's assume we have a User model, and a user can plan some activities. The number of activity types is about 50. All activities have common properties, such as start_time, end_time, user_id, etc., but each of them also has some unique properties. Right now each activity lives in its own table in the DB, and that's why we end up with terrible SQL like this, about 50 queries in all:

        SELECT * FROM `first_activities_table` WHERE (`first_activity`.`id` IN (17,18))
        SELECT * FROM `second_activities_table` WHERE (`second_activity`.`id` = 17)
        .....
        SELECT * FROM `n_activities_table` WHERE (`n_activity`.`id` = 44)

    That's terrible. There are different ways to solve this:

    - Take the activity type with the biggest number of properties, create an 'activities' table, and use an STI model. But that way we must name our columns in uncomfortable ways, and records in that table would often have NULL fields.
    - Also an STI model, but with only the columns common to all activity types plus a blob column with serialized properties. But we have to search on activities, so that can be a problem, and serialization is quite slow.

    Please help me deal with this. Maybe my problem has a quite different solution that fits my needs. Thanks for the help.
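
    A sketch of the single-table (STI) layout the first option describes, with hypothetical type-specific columns; Rails' STI convention only additionally requires a `type` discriminator column:

        CREATE TABLE activities (
            id          BIGINT AUTO_INCREMENT PRIMARY KEY,
            type        VARCHAR(100) NOT NULL,   -- STI discriminator, e.g. 'MeetingActivity'
            user_id     BIGINT NOT NULL,
            start_time  DATETIME,
            end_time    DATETIME,
            -- type-specific columns, NULL for activities that do not use them:
            location    VARCHAR(255) NULL,
            distance_km DECIMAL(8,2) NULL
        );

        -- One query instead of ~50:
        -- SELECT * FROM activities WHERE user_id = 17;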

