Search Results

Search found 79588 results on 3184 pages for 'sql data storage'.

Page 577/3184

  • Image insert problem php sqlserver

    - by Termedi
    Hi, I cannot save an image inside a SQL Server 2000 database. The datatype is image. Here is the code:

        // Image Upload
        <?php
        include('config.php');
        if(is_uploaded_file($_FILES['userfile']['tmp_name'])) {
            $fileName = $_FILES['userfile']['name'];
            $tmpName  = $_FILES['userfile']['tmp_name'];
            $fileSize = $_FILES['userfile']['size'];
            $fileType = $_FILES['userfile']['type'];
            $size = filesize($tmpName);
            set_magic_quotes_runtime(0); // disable PHP's default escaping of special characters in external files
            $img_binaire = base64_encode(fread(fopen(str_replace("'","''",$tmpName), "r"), $size));
            $query = "INSERT INTO test_image (image_name, image_content, image_size) ".
                     "VALUES ('{$fileName}','{$img_binaire}', '{$size}')";
            odbc_exec($conn, $query) or die('Error, query failed');
            echo "<br>File $fileName uploaded<br>";
            echo "<br>File Size: $fileSize <br>";
        }
        ?>

        // Image Show
        <?php
        include('config.php');
        $sql = "select * from test_image where id =2";
        $rsl = odbc_exec($conn, $sql);
        $image_info = odbc_fetch_array($rsl);
        //$count = sizeof($image_info['image_content']);
        //header('Accept-Ranges: bytes');
        //header('Content-Length: '.$image_info['image_size']);
        //header("Content-length: 17397");
        header('Content-Type: image/jpeg');
        echo base64_decode($image_info['image_content']);
        //echo bindec($image_info['image_content']);
        ?>

    Error:

        Warning: odbc_exec() [function.odbc-exec]: SQL error: [Microsoft][ODBC SQL Server Driver][SQL Server]Operand type clash: text is incompatible with image, SQL state 22005 in SQLExecDirect in C:\xampp\htdocs\test\upload.php on line 25
        Error, query failed
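
    The "Operand type clash: text is incompatible with image" error is raised because SQL Server treats the long quoted literal in the INSERT as text, and there is no implicit conversion from text to an image column. A minimal SQL-side sketch of two usual ways around it, keeping the table from the question (the values shown are placeholders):

        -- Option 1: pass the data as binary (a 0x... hex literal here; in practice a
        -- bound parameter of a binary type from the driver) so it matches the image column.
        INSERT INTO test_image (image_name, image_content, image_size)
        VALUES ('photo.jpg', 0xFFD8FFE000104A46, 17397)

        -- Option 2: since the PHP code base64-encodes the file, the stored value is
        -- really character data; storing it in a text/varchar column instead of image
        -- lets the quoted-string INSERT above work unchanged.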

    Read the article

  • Implementing a logging library in .NET with a database as the storage medium

    - by Dave
    I'm just starting to work on a logging library that everyone can use to keep track of any sort of system information while the user is running our application. The simplest example so far is to track Info, Warnings, and Errors. I want all plugins to be able to use this feature, but since each developer might have a different idea of what's important to report, I want to keep this as generic as possible.

    In the C++ world, I would normally use something like a stl::pair<string,string> to act as a key-value pair structure, and have a stl::list of these to act as a "row" in the log. The log cache would then be a list<list<pair<string,string>>> (ugh!). This way, the developers can use a const string key like INFO, WARNING, ERROR to have consistent naming for a column in the database (for SELECTing specific types of information). I'd like the database to be able to deal with any number of distinct column names. For example, John might have an INFO row with a column called USER, and Bill might have an INFO row with a column called FILENAME. I want the log viewer to be able to display all information, and if one report doesn't have a value for INFO / FILENAME, those fields should just appear blank.

    So one option is to use List<List<KeyValuePair<String,String>>>, and another is to have the log library consumer somehow "register" its schema, and then have the database do an ALTER TABLE to handle this situation. Yet another idea is to have a table that's just for key-value pairs, with a foreign key that maps the key-value pairs back to the original log entry. I obviously don't want logging to bog down the system, so I only lock the log cache to make a copy of the data (and remove the already-copied data), then a background thread will dump the information to the database.

    My specific questions regarding this are:

    1. Do you see any performance issues? In other words, have you ever tried something like this and found that certain things just don't work well in practice?
    2. Is there a more .NETish way to implement the key-value pairs, other than List<List<KeyValuePair<String,String>>>?
    3. Even if there is a way to do #2 better, is the ALTER TABLE idea I proposed above a Bad Thing?
    4. Would you recommend multiple databases over a single one? I don't yet have an idea of how frequently the log would get written to, but we ideally would like to have lots of low-level information. Perhaps there should be a DB with a fixed schema only for the low-level stuff, and then another DB that's more flexible for reporting information back to users.
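
    For reference, a rough sketch of the "separate key-value table" option mentioned above, assuming SQL Server; the table and column names here are illustrative, not from the post:

        -- One narrow log table plus a child table holding arbitrary named fields,
        -- so each plugin can attach whatever "columns" it wants without ALTER TABLE.
        CREATE TABLE LogEntry (
            LogEntryId bigint IDENTITY(1,1) PRIMARY KEY,
            LoggedAt   datetime NOT NULL DEFAULT GETDATE(),
            Severity   varchar(16) NOT NULL          -- INFO / WARNING / ERROR
        );

        CREATE TABLE LogEntryField (
            LogEntryId bigint NOT NULL REFERENCES LogEntry (LogEntryId),
            FieldName  varchar(64) NOT NULL,         -- e.g. USER, FILENAME
            FieldValue nvarchar(1024) NULL,
            PRIMARY KEY (LogEntryId, FieldName)
        );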

    Read the article

  • World's Most Challenging MySQL SQL Query (at least I think so...)

    - by keruilin
    Whoever answers this question can claim credit for solving the world's most challenging SQL query, according to yours truly. I'm working with 3 tables: users, badges, awards. Relationships: user has many awards; award belongs to user; badge has many awards; award belongs to badge. So badge_id and user_id are foreign keys in the awards table. The business logic at work here is that every time a user wins a badge, he/she receives it as an award. A user can be awarded the same badge multiple times. Each badge is assigned a designated point value (point_value is a field in the badges table). For example, BadgeA can be worth 500 Points, BadgeB 1000 Points, and so on. As a further example, let's say UserX won BadgeA 10 times and BadgeB 5 times. BadgeA being worth 500 Points, and BadgeB being worth 1000 Points, UserX has accumulated a total of 10,000 Points ((10 x 500) + (5 x 1000)). The end game here is to return a list of the top 50 users who have accumulated the most badge points. Can you do it?
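
    A sketch of one way to get that list, assuming the foreign keys described above (awards.user_id, awards.badge_id) and an id primary key on both users and badges:

        -- Total badge points per user: join each award to its badge's point value,
        -- sum per user, and keep the 50 highest totals.
        SELECT u.id, SUM(b.point_value) AS total_points
        FROM users u
        JOIN awards a ON a.user_id = u.id
        JOIN badges b ON b.id = a.badge_id
        GROUP BY u.id
        ORDER BY total_points DESC
        LIMIT 50;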

    Read the article

  • MS Access SQL - where clause to get year from date based on the year - data located in MS Access form

    - by primus285
    OK, back again. I have a problem getting a drop-down list to populate based on information in two fields. I have the SQL correct to select just one year if I put DateValue('01/01/2001') in both places, but I am now trying to get it to grab the year from the MS Access form - another drop-down named "cboYear". I'd hate to have to do something in VB unless necessary. So far I have gotten this to pull up something (it's always incorrect):

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date>=DateValue('01/01/' & [cboYear])
          And Database_New.Date<=DateValue('12/31/' & [cboYear]);

    and these two both give errors saying that it is too complex to compute:

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date>=DateValue('01/01/' + [cboYear])
          And Database_New.Date<=DateValue('12/31/' + [cboYear]);

        SELECT DISTINCT Database_New.ASEC FROM Database_New
        WHERE Database_New.Date>=DateValue('01/01/' AND [cboYear])
          And Database_New.Date<=DateValue('12/31/' AND [cboYear]);

    It's probably something simple, but where do I go from here?
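
    A hedged sketch of another way to write the filter, comparing on the year of the date column directly. It assumes the query runs while the form is open; [Forms]![YourFormName] is a placeholder for the actual form holding cboYear, and if cboYear returns text, wrapping it in Val(...) keeps the comparison numeric:

        SELECT DISTINCT Database_New.ASEC
        FROM Database_New
        WHERE Year(Database_New.[Date]) = [Forms]![YourFormName]![cboYear];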

    Read the article

  • On-demand refresh mode for indexed view (=Materialized views) on SQL Server?

    - by MOLAP
    I know Oracle offers several refresh mode options for its materialized views (on demand, on commit, periodically). Does Microsoft SQL Server offer the same functionality for its indexed views? If not, how else can I use indexed views on SQL Server if my purpose is to export data on a daily and on-demand basis, and I want to avoid performance overhead problems? Does a workaround exist?
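
    For background, a minimal sketch of what an indexed view looks like on SQL Server; it is maintained synchronously as the base tables change, so there is no refresh-mode option to pick (the names below are illustrative, and the summed column is assumed to be declared NOT NULL):

        -- Indexed views need SCHEMABINDING, two-part table names and, with GROUP BY,
        -- a COUNT_BIG(*); the unique clustered index is what materializes the rows.
        CREATE VIEW dbo.DailyTotals
        WITH SCHEMABINDING
        AS
        SELECT OrderDate,
               COUNT_BIG(*) AS OrderCount,
               SUM(Amount)  AS TotalAmount
        FROM dbo.Orders
        GROUP BY OrderDate;
        GO
        CREATE UNIQUE CLUSTERED INDEX IX_DailyTotals ON dbo.DailyTotals (OrderDate);

    For a daily or on-demand export, a plain summary table rebuilt by a scheduled job is the closest stand-in for Oracle's ON DEMAND refresh.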

    Read the article

  • Can't get the JSONP workin' with WCF Data Services...

    - by SevilNatas
    What am I missing here? It seems, from all that I read and watched, that exposing JSON from a WCF Data Service should be as easy as adding the '[JSONPSupportBehavior]' attribute in front of the service class. The problem is that VS2010 doesn't recognize 'JSONPSupportBehavior'. Is there a reference I am missing? It seemed, from all the articles, that it was supported out of the box. TIA S.

    Read the article

  • Best way to use PL/SQL Package Cursors from Pro*C

    - by Greg Reynolds
    I have a cursor defined in PL/SQL, and I am wondering what the best way to use it from Pro*C is. Normally, for a cursor defined in Pro*C you would do:

        EXEC SQL DECLARE curs CURSOR FOR SELECT 1 FROM DUAL;
        EXEC SQL OPEN curs;
        EXEC SQL FETCH curs INTO :foo;
        EXEC SQL CLOSE curs;

    I was hoping that the same (or similar) syntax would work for a packaged cursor. For example, I have a package MyPack, with a declaration

        type MyType is record (X integer);
        cursor MyCurs(x in integer) return MyType;

    Now I have in my Pro*C code a rather unsatisfying piece of embedded PL/SQL that opens the cursor, does the fetching, etc., as I couldn't get the first style of syntax to work. Using the example:

        EXEC SQL EXECUTE
        DECLARE
            XTable is table of MyPack.MyType;
        BEGIN
            OPEN MyPack.MyCurs(:param);
            FETCH MyPack.MyCurs INTO XTable;
            CLOSE MyPack.MyCurs;
        END;
        END-EXEC;

    Does anyone know if there is a more "Pure" Pro*C approach?
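
    One commonly used alternative is to expose the packaged cursor as a REF CURSOR and fetch from it through a Pro*C host cursor variable (EXEC SQL ALLOCATE / FETCH). A sketch of the package side only, reusing the names from the question; treat it as an assumption about how the package could be reshaped rather than a drop-in change:

        CREATE OR REPLACE PACKAGE MyPack AS
            TYPE MyType    IS RECORD (X INTEGER);
            TYPE MyRefCurs IS REF CURSOR RETURN MyType;
            -- The procedure opens the cursor; Pro*C then allocates a host cursor
            -- variable and fetches from it with EXEC SQL FETCH :curs INTO :x;
            PROCEDURE OpenMyCurs(p_curs OUT MyRefCurs, x IN INTEGER);
        END MyPack;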

    Read the article

  • SQL 2005 - any way to restore/copy a diagram?

    - by NealWalters
    I used the Redgate packager (ran MSI) to reset all the data in my database (i.e. I deleted everything, and let it build the new database). Unfortunately, I discovered that it didn't retain my diagrams, which had a nice arrangement and several annotations. Is there any way to copy/migrate/script the diagram from one database to another (the databases have identical structures)? Thanks, Neal Walters
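
    If the source database (or a restored copy of it) is still available, one approach is to copy the diagram rows directly; in SQL Server 2005 database diagrams live in the dbo.sysdiagrams table. A sketch, assuming both databases are on the same server and diagram support has already been enabled in the target (the database names are placeholders):

        INSERT INTO TargetDb.dbo.sysdiagrams (name, principal_id, version, definition)
        SELECT name, principal_id, version, definition
        FROM SourceDb.dbo.sysdiagrams;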

    Read the article

  • What nonclustered index would be better to create on SQL Server?

    - by Junior Mayhé
    Here I am studying nonclustered indexes in SQL Server Management Studio. I've created a table with more than 1 million records. This table has a primary key.

        SELECT CustomerName FROM Customers

    This leads the execution plan to show me: I/O cost = 3.45646, Operator cost = 4.57715. For the first attempt to improve performance, I've created a nonclustered index for this table:

        CREATE NONCLUSTERED INDEX [IX_CustomerID_CustomerName] ON [dbo].[Customers]
        (
            [CustomerId] ASC,
            [CustomerName] ASC
        ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF,
                IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF,
                ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this first try, I've executed the select statement and the execution plan shows me: I/O cost = 2.79942, Operator cost = 3.92001. Now for the second try, I've deleted this nonclustered index in order to create a new one:

        CREATE NONCLUSTERED INDEX [IX_CategoryName] ON [dbo].[Categories]
        (
            [CategoryId] ASC
        )
        INCLUDE ( [CategoryName]) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF,
                SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF,
                ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        GO

    With this second try, I've executed the select statement and the execution plan shows me the same result: I/O cost = 2.79942, Operator cost = 3.92001. Am I doing something wrong, or is this expected? Shall I use the first nonclustered index with two fields, or the second nonclustered index with one field (CategoryID), including the second field (CategoryName)?
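
    With no WHERE clause, the optimizer simply scans the narrowest structure that contains the selected column, so any index covering that one column gives roughly the same cost, which would explain the identical numbers reported above. A sketch of the smallest covering index for this particular query, assuming the Customers table from the first statement:

        -- Single-column covering index for: SELECT CustomerName FROM Customers
        CREATE NONCLUSTERED INDEX IX_Customers_CustomerName
            ON dbo.Customers (CustomerName);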

    Read the article

  • ASP.NET Dynamic Data field value disappears in the browser.

    - by ProfK
    I have an ASP.NET Dynamic Data web application, with an entity called ActivationResource. One of the properties of this is a CellPhone field. Now, whenever I open a List or Details view of one of these entities, the cell phone number displays for a moment then disappears. Anyone have any ideas as to the cause of this mysterious behavior?

    Read the article

  • How can I use SQL to select duplicate records, along with counts of related items?

    - by mipadi
    I know the title of this question is a bit confusing, so bear with me. :) I have a (MySQL) database with a Person record. A Person also has a slug field. Unfortunately, slug fields are not unique. There are a number of duplicate records, i.e., the records have different IDs but the same first name, last name, and slug. A Person may also have 0 or more associated articles, blog entries, and podcast episodes. (The original post included a diagram of the structure here.) I would like to produce a list of records that match this criteria: duplicate records (i.e., same slug field) for people who also have at least 1 article, blog entry, or podcast episode. I have a SQL query that will list all records with the same slug fields:

        SELECT id, first_name, last_name, slug, COUNT(slug) AS person_records
        FROM people_person
        GROUP BY slug
        HAVING (COUNT(slug) > 1)
        ORDER BY last_name, first_name, id;

    But this includes records for people that may not have at least 1 article, blog entry, or podcast. Can I tweak this to fit the second criteria?
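
    A sketch of one way to add that restriction, assuming each related table carries a person_id foreign key; the related table and column names here are guesses, since the schema diagram is not shown:

        SELECT p.id, p.first_name, p.last_name, p.slug, COUNT(p.slug) AS person_records
        FROM people_person p
        WHERE EXISTS (SELECT 1 FROM articles         a WHERE a.person_id = p.id)
           OR EXISTS (SELECT 1 FROM blog_entries     b WHERE b.person_id = p.id)
           OR EXISTS (SELECT 1 FROM podcast_episodes e WHERE e.person_id = p.id)
        GROUP BY p.slug
        HAVING COUNT(p.slug) > 1
        ORDER BY p.last_name, p.first_name, p.id;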

    Read the article

  • If I create a transient property in the model, isn't this managed by core data then?

    - by mystify
    Just to grok this: If I had a transient property, let's say averagePrice, and I mark that as "transient" in the data modeler: this will not be persisted, and no column will be created in SQLite for it? And: If I make my own NSManagedObject subclass with an averagePrice property, does it make any sense to model that property in the xcdatamodel file? Would it make a difference if I simply created a property in my subclass and did not model it in the entity? (I think: yes, it doesn't matter at all ... but not sure)

    Read the article

  • How do I write an Oracle SQL query for this tricky question?

    - by atrueguy
    Here is the table data, with the column name Ships:

        +-----------------+
        | Ships           |
        +-----------------+
        | Duke of north   |
        | Prince of Wales |
        | Baltic          |
        +-----------------+

    In the Outcomes table, transform the names of the ships containing more than one space as follows: replace all characters between the first and the last spaces (excluding these spaces) with asterisks (*). The number of asterisks must be equal to the number of replaced characters.
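
    A sketch of one way to do this in Oracle SQL, keeping the Ships column and Outcomes table names used above: take everything up to and including the first space, pad out asterisks for the characters between the first and last spaces, then append the last space and the final word; names with fewer than two spaces pass through unchanged.

        SELECT CASE
                 WHEN INSTR(Ships, ' ', 1, 2) > 0 THEN
                        SUBSTR(Ships, 1, INSTR(Ships, ' '))
                     || RPAD('*', INSTR(Ships, ' ', -1) - INSTR(Ships, ' ') - 1, '*')
                     || SUBSTR(Ships, INSTR(Ships, ' ', -1))
                 ELSE Ships
               END AS masked_name
          FROM Outcomes;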

    Read the article

  • What is a 'better' approach to query/save from the server: DTO or WCF Data Services?

    - by bonefisher
    From my perspective, the Data Services query approach is useful when querying simple object graphs from your server-side domain model. But when I wanted to query complex dependencies, I couldn't create anything good out of it. The classic DTO approach is fine-grained and can handle everything, but the downside is that you have to create DTO classes for every type of server request, which is time-consuming, and you have to keep the DTO types synchronized with your domain entities/business logic.

    Read the article

  • Excel macro to change external data query connections - e.g. point from one database to another

    - by Rory
    I'm looking for a macro/vbs to update all the external data query connections to point at a different server or database. This is a pain to do manually and in versions of Excel before 2007 it sometimes seems impossible to do manually. Anyone have a sample? I see there are different types of connections 'OLEDB' and 'ODBC', so I guess I need to deal with different formats of connection strings?

    Read the article

  • Not able to insert data into the database from a form in PHP

    - by Prashant Baid
    I am not able to insert data into my database, and I don't know what the problem is. Here is the code:

        mysql_select_db("mitestore", $con); */
        if ((isset($_POST['product_name'])) && (strlen(trim($_POST['product_name'])) > 0)) {
            $product_name = stripslashes(strip_tags($_POST['product_name']));
            $sql = "INSERT INTO sell (product_name) VALUE ('$_POST[product_name]')";
        } else { $product_name = 'Please enter the product name.'; }

        if ((isset($_POST[''])) && (strlen(trim($_POST['how_old'])) > 0)) {
            $how_old = stripslashes(strip_tags($_POST['how_old']));
            $sql = "INSERT INTO sell (how_old) VALUE ('$_POST[how_old]')";
        } else { $how_old = 'Please enter how old your product is'; }

        if ((isset($_POST['which_block'])) && (strlen(trim($_POST['which_block'])) > 0)) {
            $which_block = stripslashes(strip_tags($_POST['which_block']));
            $sql = "INSERT INTO sell (which_block) VALUE ('$_POST[which_block]')";
        } else { $which_block = 'Please enter which block are you from'; }

        if ((isset($_POST['room_no'])) && (strlen(trim($_POST['room_no'])) > 0)) {
            $room_no = stripslashes(strip_tags($_POST['room_no']));
            $sql = "INSERT INTO sell (room_no) VALUE ('$_POST[room_no]')";
        } else { $room_no = 'Please enter the room no:'; }

        if (!mysql_query($sql, $con)) {
            die('Error: ' . mysql_error());
        }
        echo "Success!";
        mysql_close($con) ?>

    Initially I had this code and it worked for me:

        mysql_select_db("database", $con);
        $sql = "INSERT INTO sell (product_name, how_old, selling_price, negotiable, which_block, room_no)
                VALUES ('$_POST[product_name]','$_POST[how_old]','$_POST[selling_price]','$_POST[negotiable]','$_POST[which_block]','$_POST[room_no]')";
        if (!mysql_query($sql, $con)) {
            die('Error: ' . mysql_error());
        }
        echo "Your product is added.";
        mysql_close($con) ?>

    But I don't know how to validate each field individually.

    Read the article

  • Restoring dev db from production: Running a set of SQL scripts based on a list stored in a table?

    - by mattley
    I need to restore a backup from a production database and then automatically reapply SQL scripts (e.g. ALTER TABLE, INSERT, etc.) to bring that db schema back to what was under development. There will be lots of scripts, from a handful of different developers. They won't all be in the same directory. My current plan is to list the scripts, with the full filesystem path, in a table in a pseudo-system database. Then create a stored procedure in this database which will first run RESTORE DATABASE and then run a cursor over the list of scripts, creating a command string for SQLCMD for each script, and then executing that SQLCMD string for each script using xp_cmdshell. The sequence of cursor-sqlstring-xp_cmdshell-sqlcmd feels clumsy to me. Also, it requires turning on xp_cmdshell. I can't be the only one who has done something like this. Is there a cleaner way to run a set of scripts that are scattered around the filesystem on the server? Especially, a way that doesn't require xp_cmdshell?
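
    For reference, a compact sketch of the cursor-plus-sqlcmd approach described above; all names, the server shorthand, and the database name are placeholders, and xp_cmdshell must be enabled for this to run:

        CREATE TABLE dbo.MigrationScripts (
            ScriptId   int IDENTITY(1,1) PRIMARY KEY,
            ScriptPath nvarchar(260) NOT NULL,   -- full filesystem path on the server
            RunOrder   int NOT NULL
        );

        DECLARE @path nvarchar(260), @cmd nvarchar(512);
        DECLARE script_cursor CURSOR FOR
            SELECT ScriptPath FROM dbo.MigrationScripts ORDER BY RunOrder;
        OPEN script_cursor;
        FETCH NEXT FROM script_cursor INTO @path;
        WHILE @@FETCH_STATUS = 0
        BEGIN
            -- -b makes sqlcmd return a non-zero exit code when a script fails
            SET @cmd = N'sqlcmd -S . -d DevDb -E -b -i "' + @path + N'"';
            EXEC master..xp_cmdshell @cmd;
            FETCH NEXT FROM script_cursor INTO @path;
        END
        CLOSE script_cursor;
        DEALLOCATE script_cursor;

    If keeping xp_cmdshell off matters, shelling out to sqlcmd from outside the engine instead (for example a SQL Server Agent CmdExec job step or an external batch/PowerShell runner driven by the same list table) avoids it.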

    Read the article

  • Should I create a unique clustered index, or non-unique clustered index on this SQL 2005 table?

    - by Bremer
    I have a table storing millions of rows. It looks something like this:

        Table_Docs
            ID, bigint (identity col)
            OutputFileID, int
            Sequence, int
            ... (many other fields)

    We find ourselves in a situation where the developer who designed it made the OutputFileID the clustered index. It is not unique. There can be thousands of records with this ID. It has no benefit to any processes using this table, so we plan to remove it. The question is what to change it to. I have two candidates: the ID identity column is a natural choice. However, we have a process which does a lot of update commands on this table, and it uses the Sequence to do so. The Sequence is non-unique. Most records only contain one, but about 20% can have two or more records with the same Sequence.

    The INSERT app is a VB6 piece of crud throwing thousands of insert commands at the table. The inserted values are never in any particular order. So the Sequence of one insert may be 12345, and the next could be 12245. I know that this could cause SQL to move a lot of data to keep the clustered index in order. However, the Sequence values of the inserts are generally close to being in order. All inserts would take place at the end of the clustered table. E.g.: I have 5 million records with Sequence spanning 1 to 5 million. The INSERT app will be inserting Sequences at the end of that range at any given time. Reordering of the data should be minimal (tens of thousands of records at most).

    Now, the UPDATE app is our .NET star. It does all UPDATEs on the Sequence column: "UPDATE Table_Docs SET Field1=This, Field2=That ... WHERE Sequence = 12345" - hundreds of thousands of these a day. The UPDATEs are completely and totally random, touching all points of the table. All other processes are simply doing SELECTs on this (web pages). Regular indexes cover those.

    So my question is, what's better: a unique clustered index on the ID column, benefiting the INSERT app, or a non-unique clustered index on the Sequence, benefiting the UPDATE app?
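
    For concreteness, the two candidates being weighed above look like this (table and column names are taken from the question, the dbo schema is assumed, and only one clustered index can exist at a time; note that on a non-unique clustered index SQL Server silently adds a 4-byte "uniqueifier" to rows with duplicate key values):

        -- Candidate 1: unique clustered index on the identity column (append-only inserts)
        CREATE UNIQUE CLUSTERED INDEX IX_TableDocs_ID
            ON dbo.Table_Docs (ID);

        -- Candidate 2: non-unique clustered index on Sequence, so
        -- UPDATE ... WHERE Sequence = @n becomes a clustered seek,
        -- at the cost of the uniqueifier and near-ordered inserts
        CREATE CLUSTERED INDEX IX_TableDocs_Sequence
            ON dbo.Table_Docs (Sequence);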

    Read the article

  • How do I write an Oracle SQL query for this tricky question?

    - by atrueguy
    Here is the table data, with the column name Ships:

        +-----------------+
        | Ships           |
        +-----------------+
        | Duke of north   |
        | Prince of Wales |
        | Baltic          |
        +-----------------+

    Replace all characters between the first and the last spaces (excluding these spaces) with asterisks (*). The number of asterisks must be equal to the number of replaced characters.

    Read the article
