Search Results

Search found 28900 results on 1156 pages for 'sql 2005'.

  • How can I split a LINESTRING into two LINESTRINGs at a given point?

    - by sabbour
    Hello, I'm trying to write a function that will split a LINESTRING into two LINESTRINGs at a given split point. What I'm trying to achieve is a function that, given a LINESTRING and a distance, returns N LINESTRINGs for the original linestring split at multiples of that distance. This is what I have so far (I'm using SQL Server Spatial Tools from CodePlex):

        DECLARE @testLine geography;
        DECLARE @slicePoint geography;
        SET @testLine = geography::STGeomFromText('LINESTRING (34.5157942 28.5039665,
            34.5157079 28.504725, 34.5156881 28.5049565, 34.5156773 28.505082,
            34.5155642 28.5054437, 34.5155498 28.5054899, 34.5154937 28.5058826,
            34.5154643 28.5060218, 34.5153968 28.5063415, 34.5153322 28.5065338,
            34.5152031 28.5069178, 34.5150603 28.5072288, 34.5148716 28.5075501,
            34.5146106 28.5079974, 34.5143617 28.5083813, 34.5141373 28.5086414,
            34.5139954 28.5088441, 34.5138874 28.5089983, 34.5138311 28.5091054,
            34.5136783 28.5093961, 34.5134336 28.5097531, 34.51325 28.5100794,
            34.5130256 28.5105078, 34.5128754 28.5107957, 34.5126258 28.5113222,
            34.5123984 28.5117673)', 4326)

        DECLARE @pointOne geography;
        DECLARE @result table (segment geography)
        DECLARE @sliceDistance float
        DECLARE @nextSliceAt float

        SET @sliceDistance = 100 -- slice every 100 meters
        SET @nextSliceAt = @sliceDistance
        SELECT @pointOne = @testLine.STStartPoint()

        WHILE (@nextSliceAt < @testLine.STLength())
        BEGIN
            SELECT @slicePoint = dbo.LocateAlongGeog(@testLine, @nextSliceAt)

            DECLARE @subLineString geography;
            SET @subLineString = geography::STGeomFromText('LINESTRING ('
                + dbo.FloatToVarchar(@pointOne.Long) + ' ' + dbo.FloatToVarchar(@pointOne.Lat) + ','
                + dbo.FloatToVarchar(@slicePoint.Long) + ' ' + dbo.FloatToVarchar(@slicePoint.Lat) + ')', 4326)

            INSERT INTO @result SELECT @subLineString

            SET @pointOne = @slicePoint
            SET @nextSliceAt = @nextSliceAt + @sliceDistance
        END

        SET @subLineString = geography::STGeomFromText('LINESTRING ('
            + dbo.FloatToVarchar(@pointOne.Long) + ' ' + dbo.FloatToVarchar(@pointOne.Lat) + ','
            + dbo.FloatToVarchar(@testLine.STEndPoint().Long) + ' ' + dbo.FloatToVarchar(@testLine.STEndPoint().Lat) + ')', 4326)

        INSERT INTO @result SELECT @subLineString

        SELECT * FROM @result

    I know it is not the best-looking code, but there is another problem: the above code only approximates the resulting LINESTRING, because it does not follow the curvature of the original; it only takes the start and end points into consideration when creating each new segment. Is there a way to take a substring out of the original LINESTRING given the start and end points?
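    One possible direction, sketched below and untested: walk the original vertices with STNumPoints/STPointN (available on the geography type in SQL Server 2008) and keep every vertex whose running length falls strictly inside the slice, so the segment follows the curvature. LocateAlongGeog and FloatToVarchar are the same Spatial Tools helpers used above; the slice boundaries @fromM/@toM are illustrative.

        DECLARE @fromM float, @toM float, @cum float, @wkt varchar(max);
        DECLARE @n int, @i int;
        DECLARE @prev geography, @pt geography;

        SET @fromM = 100;   -- slice start, in meters along the line
        SET @toM   = 200;   -- slice end
        SET @cum   = 0;
        SET @n     = @testLine.STNumPoints();
        SET @prev  = @testLine.STPointN(1);

        -- open the slice at the interpolated start point
        SET @pt  = dbo.LocateAlongGeog(@testLine, @fromM);
        SET @wkt = 'LINESTRING (' + dbo.FloatToVarchar(@pt.Long) + ' ' + dbo.FloatToVarchar(@pt.Lat);

        SET @i = 2;
        WHILE @i <= @n
        BEGIN
            SET @pt  = @testLine.STPointN(@i);
            SET @cum = @cum + @prev.STDistance(@pt);   -- running length along the line
            IF @cum > @fromM AND @cum < @toM           -- keep interior vertices
                SET @wkt = @wkt + ',' + dbo.FloatToVarchar(@pt.Long) + ' ' + dbo.FloatToVarchar(@pt.Lat);
            SET @prev = @pt;
            SET @i = @i + 1;
        END

        -- close the slice at the interpolated end point
        SET @pt  = dbo.LocateAlongGeog(@testLine, @toM);
        SET @wkt = @wkt + ',' + dbo.FloatToVarchar(@pt.Long) + ' ' + dbo.FloatToVarchar(@pt.Lat) + ')';

        SELECT geography::STGeomFromText(@wkt, 4326);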

  • Compound Primary Key in Hibernate using Annotations

    - by Rich
    Hi, I have a table which uses two columns to represent its primary key: a transaction id and a sequence number. I tried what is recommended at http://docs.jboss.org/hibernate/stable/annotations/reference/en/html_single/#entity-mapping in section 2.2.3.2.2, but when I use the Hibernate session to commit this Entity object, it leaves out the TXN_ID field in the insert statement and only includes the BA_SEQ field! What's going wrong? Here's the related code excerpt:

        @Id @Column(name="TXN_ID")
        private long txn_id;
        public long getTxnId() { return txn_id; }
        public void setTxnId(long t) { this.txn_id = t; }

        @Id @Column(name="BA_SEQ")
        private int seq;
        public int getSeq() { return seq; }
        public void setSeq(int s) { this.seq = s; }

    And here are some log statements to show what exactly happens to fail:

        In createKeepTxnId of DAO base class: about to commit
        Transaction :: txn_id->90625 seq->0
        ...<Snip>...
        Hibernate: insert into TBL (BA_ACCT_TXN_ID, BA_AUTH_SRVC_TXN_ID, BILL_SRVC_ID,
            BA_BILL_SRVC_TXN_ID, BA_CAUSE_TXN_ID, BA_CHANNEL, CUSTOMER_ID,
            BA_MERCHANT_FREETEXT, MERCHANT_ID, MERCHANT_PARENT_ID, MERCHANT_ROOT_ID,
            BA_MERCHANT_TXN_ID, BA_PRICE, BA_PRICE_CRNCY, BA_PROP_REQS, BA_PROP_VALS,
            BA_REFERENCE, RESERVED_1, RESERVED_2, RESERVED_3, SRVC_PROD_ID, BA_STATUS,
            BA_TAX_NAME, BA_TAX_RATE, BA_TIMESTAMP, BA_SEQ)
            values (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)
        [WARN] util.JDBCExceptionReporter SQL Error: 1400, SQLState: 23000
        [ERROR] util.JDBCExceptionReporter ORA-01400: cannot insert NULL into ("SCHEMA"."TBL"."TXN_ID")

    The important thing to note is that I print out the entity object, which has a txn_id set, and yet the following insert statement does not include TXN_ID in the column list, so the NOT NULL table constraint rejects the query.

  • I want an insert query for a temp table

    - by John Stephen
    Hi, I am using C#.NET and SQL Server (Windows application). I create a temporary table: when one button is clicked, the temporary table (#tmp_emp_answer) is created. I have another button called "Insert Values" and also 5 textboxes. The values entered in the textboxes are used for the insert, but whenever the com.ExecuteNonQuery(); line runs it throws the error message Invalid object name '#tbl_emp_answer'. Below is the code; please give me a solution. Code for the insert (in the Insert Values button):

        private void btninsertvalues_Click(object sender, EventArgs e)
        {
            username = txtusername.Text;
            examloginid = txtexamloginid.Text;
            question = txtquestion.Text;
            answer = txtanswer.Text;
            useranswer = txtanswer.Text;

            SqlConnection con = new SqlConnection("Data Source=.;Initial Catalog=tempdb;Integrated Security=True;");
            SqlCommand com = new SqlCommand("Insert into #tbl_emp_answer values('" + username + "','" + examloginid + "','" + question + "','" + answer + "','" + useranswer + "')", con);
            con.Open();
            com.ExecuteNonQuery();
            con.Close();
        }
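    A local #temp table is only visible to the connection that created it and is dropped when that connection closes, so the table created by the first button's connection no longer exists for the new SqlConnection opened here, which matches the "Invalid object name" error. One sketch of a workaround is a global temp table (##), visible across connections while the creating one stays open; the column types below are guesses from the textboxes, and the parameters avoid the string concatenation used above:

        CREATE TABLE ##tbl_emp_answer (
            username    varchar(100),
            examloginid varchar(100),
            question    varchar(1000),
            answer      varchar(1000),
            useranswer  varchar(1000)
        );

        -- from C#, bind @username etc. via SqlCommand.Parameters.AddWithValue
        INSERT INTO ##tbl_emp_answer (username, examloginid, question, answer, useranswer)
        VALUES (@username, @examloginid, @question, @answer, @useranswer);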

  • Does Microsoft Access use the PK fields for anything?

    - by chrismay
    OK, this is going to sound strange, but I have inherited an app that is an Access front end with a SQL Server backend. I am in the process of writing a new front end for it, but... we need to continue using the Access front end for a while even after we deploy my new front end, for reasons I won't go into. So both the existing Access app and my new app will need to be able to access and work with the data. The problem is that the database design is a nightmare. For example, some simple parent-child table relationships have 4- and 5-part composite primary keys. I would REALLY like to remove these PKs and replace them with unique constraints or whatever, and add a new column called ID to each of these tables that is just an identity. If I change the PKs and FKs on these tables to more manageable fields, will the Access app have problems? What I mean is, does Access use the metadata from the tables (PK and FK info) in such a way that changing these would break the app?

  • How can I mock or test my deferred execution functionality?

    - by cottsak
    I have what could be seen as a bizarre hybrid of IQueryable<T> and IList<T> collections of domain objects passed up my application stack. I'm trying to maintain as much of the 'late querying' or 'lazy loading' as possible. I do this in two ways:

    1. By using a LinqToSql data layer and passing IQueryable<T>s through my repositories and on to my app layer.
    2. Beyond my app layer, by passing IList<T>s in which certain elements of the object/aggregate graph are 'chained' with delegates so as to defer their loading. Sometimes the delegate contents even rely on IQueryable<T> sources, and the DataContext is injected.

    This works for me so far. What is blindingly difficult is proving that this design actually works. I.e., if I defeat the 'lazy' part somewhere and my execution happens early, then the whole thing is a waste of time. I'd like to be able to TDD this somehow. I don't know a lot about delegates, or about thread safety as it applies to delegates acting on the same source. I'd like to be able to mock the DataContext and somehow trace both methods of deferring the loading (IQueryable<T>'s SQL and the delegates), so that I can have tests proving that both work at the different levels/layers of the app/stack. As it's crucial that the deferring works for the design to be of any value, I'd like to see tests fail when I break the design at a given level (separate from the live implementation). Is this possible?

  • How to write a JOIN statement to combine data from disparate tables

    - by Amarundo
    I have the following 2 procedures that I use as the source for a report. Right now I'm presenting two different tables in my SQL Server Reporting Services 2008 R2 report, because it doesn't let me put them together as they belong to two different datasets. I want to present them in a single table, but I have not been successful trying to use JOIN here. How do I do that? NOTE: cName in IAgentQueueStats corresponds to UserId in AgentActivityLog.

        /*** Aggregate values for Call Center Agents for calls, talk and hold time ***/
        /*** The detail/row values are per 30-minute interval ***/
        ALTER PROCEDURE [dbo].[sp_IAgentQueueStats_OnlyCalls_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [cName]
              ,sum([nAnswered]) SumNAnswered
              ,sum([nAnsweredAcd]) SumNAnsweredAcd
              ,sum([tTalkAcd]) SumTTalkAcd
              ,sum([nHoldAcd]) SumNHoldAcd
              ,sum([tHoldAcd]) SumTHoldAcd
              ,sum([tAcw]) SumTAcw
        FROM [I3_IC].[dbo].[IAgentQueueStats]
        WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
          AND CHARINDEX(cName, @p_Agents) > 0
          AND cReportGroup <> '*'
          AND cHKey3 = '*' and cHKey4 = '*'
          AND nEnteredAcd > 0
          AND cReportGroup <> 'CCFax Email'
        GROUP BY cName

    And here is the second one:

        /*** Aggregate values for Call Center Agents for status/activity time ***/
        /*** The detail/row values are per start-time/end-time ***/
        ALTER PROCEDURE [dbo].[sp_AgentActivity_Grouped]
            @p_StartDate datetime,
            @p_EndDate datetime,
            @p_Agents varchar(8000)
        AS
        SELECT [UserId], [StatusCategory], SUM([StateDuration]) [StatusDuration]
        FROM (
            SELECT [UserId]
                  ,[StatusGroup]
                  ,[StatusKey]
                  ,CASE [StatusKey]
                       WHEN 'Available'             THEN 'Productive'
                       WHEN 'Follow Up'             THEN 'Productive'
                       WHEN 'Campaign Call'         THEN 'Productive'
                       WHEN 'Awaiting Callback'     THEN 'Productive'
                       WHEN 'In a Meeting'          THEN 'Not Your Fault'
                       WHEN 'Project Work'          THEN 'Not Your Fault'
                       WHEN 'At a Training Session' THEN 'Not Your Fault'
                       WHEN 'System Issues'         THEN 'Not Your Fault'
                       WHEN 'Test'                  THEN 'Not Your Fault'
                       WHEN 'At Lunch'              THEN 'Non Productive'
                       WHEN 'Available, Forward'    THEN 'Non Productive'
                       WHEN 'Available, Follow-Me'  THEN 'Non Productive'
                       WHEN 'At Play'               THEN 'Non Productive'
                       WHEN 'AcdAgentNotAnswering'  THEN 'Non Productive'
                       WHEN 'Do Not Disturb'        THEN 'Non Productive'
                       WHEN 'Available, No ACD'     THEN 'Non Productive'
                       WHEN 'Away from desk'        THEN 'Non Productive'
                       ELSE [StatusKey]
                   END StatusCategory
                  ,stateduration
            FROM [I3_IC].[dbo].[AgentActivityLog]
            WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
              AND CHARINDEX([UserId], @p_Agents) > 0
              AND [StatusKey] not in ('Gone Home','Out of the Office','On Vacation','Out of Town')
        ) a
        GROUP BY [UserId], [StatusCategory]
        ORDER BY [UserId], [StatusCategory] desc

    BTW, if I take some time to comment/reply on your posts, it's not lack of interest, but of understanding...
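    One way to combine them, sketched below and untested: move each procedure's SELECT into a derived table and join on the agent key (cName = UserId). The parts elided by comments are exactly the filters and CASE mapping in the procedures above. Note the call totals repeat on every status row for an agent, since one agent has several StatusCategory rows; that is usually fine when the report groups on the agent.

        SELECT q.cName,
               q.SumNAnswered, q.SumNAnsweredAcd, q.SumTTalkAcd,
               q.SumNHoldAcd, q.SumTHoldAcd, q.SumTAcw,
               a.StatusCategory, a.StatusDuration
        FROM (
                 SELECT cName, SUM(nAnswered) SumNAnswered, SUM(nAnsweredAcd) SumNAnsweredAcd,
                        SUM(tTalkAcd) SumTTalkAcd, SUM(nHoldAcd) SumNHoldAcd,
                        SUM(tHoldAcd) SumTHoldAcd, SUM(tAcw) SumTAcw
                 FROM [I3_IC].[dbo].[IAgentQueueStats]
                 WHERE dIntervalStart between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
                   AND CHARINDEX(cName, @p_Agents) > 0
                   -- ... remaining filters from sp_IAgentQueueStats_OnlyCalls_Grouped ...
                 GROUP BY cName
             ) q
        LEFT JOIN (
                 SELECT [UserId], StatusCategory, SUM([StateDuration]) StatusDuration
                 FROM ( -- same inner SELECT, with the full CASE mapping from sp_AgentActivity_Grouped
                       SELECT [UserId], stateduration,
                              CASE [StatusKey] WHEN 'Available' THEN 'Productive'
                                   /* ... remaining WHEN branches ... */
                                   ELSE [StatusKey] END StatusCategory
                       FROM [I3_IC].[dbo].[AgentActivityLog]
                       WHERE [StatusDateTime] between @p_StartDate and DATEADD(s, 86400-1, @p_EndDate)
                         AND CHARINDEX([UserId], @p_Agents) > 0
                      ) x
                 GROUP BY [UserId], StatusCategory
             ) a ON a.[UserId] = q.cName
        ORDER BY q.cName, a.StatusCategory DESC;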

  • I need to convert the result of a stored procedure in a dbml file to IQueryable to view a list in an

    - by RJ
    I have an MVC project that has a LINQ to SQL dbml class. It is a table called Clients that houses client information. I can easily get the information to display in a View using the code I followed in Nerd Dinner, but I have added a stored procedure to the dbml, and its result set is IQueryable<SelectClientsBySearchResult>, not IQueryable<Client>. I need to convert IQueryable<SelectClientsBySearchResult> to IQueryable<Client> so I can display it in the same View. The reason for the sproc is so I can pass a search string to the sproc and return the same information as the full list, but filtered on the search. I know I can use LINQ to filter the whole list, but I don't want the whole list, so I am using the sproc. Here is the code in my ClientRepository, with a comment where I need the conversion. What code goes in the commented spot?

        public IQueryable<Client> SelectClientsBySearch(String search)
        {
            IQueryable<SelectClientsBySearchResult> spClientList =
                (from p in db.SelectClientsBySearch(search) select p).AsQueryable();

            // what is the code to convert IQueryable<SelectClientsBySearchResult> to IQueryable<Client>?
            return clientList;
        }

  • Designing model/database for distance between any two locations (that may change)

    - by Yo Ludke
    We are creating a web app which has a number of events, each with a location (created as user-generated content, so the number of events will grow large). The distance between any two events should be available, for example to determine the top 5 closest events and such things. Users may change the locations of events. How should one design the database/model for this (in a scalable way)? I was thinking of doing it with a "distance table" (like this: http://www.deutschland-tourist.info/images/entfernungstabelle.gif). Then every time a location changes, one row and one column have to be recalculated (this could be done with a delayed job, because it is not important to have the changes instantly). Possible problems in scaling: the database gets too large (n² items for n events), and there is too much calculation to be done. For example, we should see if this is okay for 10,000 users: if each has created just one event, that would already be 100 million integers... Do you think this would be a good way to do it efficiently? How could one realize such a distance table with a Rails model? Is it possible with a SQL database? Would you suggest other approaches?
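    A sketch of the usual alternative, computing distances on demand instead of materializing the n² table; the events table and its planar x/y columns are hypothetical (for lat/lon you would substitute a haversine expression, and move to a spatial index/extension once this gets slow):

        -- top 5 events closest to event 42 (hypothetical id)
        SELECT e2.id,
               (e2.x - e1.x) * (e2.x - e1.x) + (e2.y - e1.y) * (e2.y - e1.y) AS dist_sq
        FROM events e1
        JOIN events e2 ON e2.id <> e1.id
        WHERE e1.id = 42
        ORDER BY dist_sq
        LIMIT 5;

    When a user moves an event, nothing has to be recalculated; the query always reflects the current locations.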

  • PHP/JSON: My field names are being truncated to 30 characters. Can I stop this?

    - by Biff MaGriff
    Hi everyone! OK, so I got this piece of vendor software that they said should be run on an Apache PHP server and a MySQL database. I didn't have either of those, so I put it on an IIS PHP server and converted the code to work on SQL Server (e.g. mysql_select_db became mssql_select_db, among other things). So I have the following code in a PHP file:

        $query = "SELECT * FROM TableName WHERE KEY_FIELD = '".$keyField."';";
        $result = mssql_query($query);
        $arr = array();
        while ( $obj = mssql_fetch_object($result) ) {
            $arr[] = $obj;
        }
        echo '{"results":'.json_encode($arr).'}';

    and my results look something like this (captured with Fiddler 2):

        {"results":[{"KEY_FIELD":"57", "My30characterlongfieldthatiscu":"GoodValue"}]}

    "My30characterlongfieldthatiscu" should be "My30characterlongfieldthatiscutoff". Kinda weird, no? The vendor claims the app works perfectly on their end. I'm thinking this is some sort of IIS PHP limit; is there a way around it, or can I expand it? I found this solution http://www.php.net/manual/en/ref.mssql.php#74834 but I don't understand it. Thanks!
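    If the cause is the 30-character identifier limit in the DB-Library interface that PHP's mssql extension uses (an assumption; the linked manual note points the same way), one workaround that keeps the PHP code unchanged apart from the key names is aliasing long columns to names under 30 characters on the SQL side; a sketch:

        SELECT KEY_FIELD,
               My30characterlongfieldthatiscutoff AS long_field_1   -- alias stays under 30 chars
        FROM TableName
        WHERE KEY_FIELD = '57';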

  • NHibernate will not insert a record

    - by Brian Beckham
    I have an application that is now 4+ years old and is exhibiting some odd behavior on our latest deployment. The application uses NHibernate for all inserts, updates, selects, etc. We are currently on .NET 2.0 and NHibernate 1.2 (I know, we need to upgrade). This deployment is on Windows Server 2008 x64, IIS 7.5. What I have seen so far is that the application runs, but is unable to insert or update records in the DB; reads seem fine so far, but writes are a problem. SOME writes actually work (inserts into some small tables), but most never even make it to the DB. Using SQL Profiler, the inserts/updates never make it to the server, and with log4net turned up to DEBUG and show_sql true, the select statements appear but the insert/update statements never make it into the log at all, and never show up at the server. What's even more odd is that the application seems to be oblivious to this: the commit-and-close runs without exception (open session in view with an HttpModule), the domain objects come back with uuids generated, etc., but nothing gets persisted. Certainly an upgrade is due, but I would hate to try it during a deployment, and without time to accurately test the app. Any ideas?

  • reporting tool/viewer for large datasets

    - by FrustratedWithFormsDesigner
    I have a data processing system that generates very large reports on the data it processes. By "large" I mean that a "small" execution of this system produces about 30 MB of reporting data when dumped into a CSV file, and a large dataset is about 130-150 MB (I'm sure someone out there has a bigger idea of "large", but that's not the point... ;) ). Excel has the ideal interface for the report consumers in the form of its data lists: users can filter and segment the data on the fly to see the specific details they are interested in, and they can also add notes and markup to the reports, create charts, graphs, etc. They know how to do all this, and it's much easier to let them do it if we just give them the data. Excel was great for the small test datasets, but it cannot handle these large ones. Does anyone know of a tool that can provide a similar interface to Excel data lists, but that can handle much larger files? The next tool I tried was MS Access, and I found that the Access file bloats hugely (a 30 MB input file leads to about a 70 MB Access file, and when I open the file, run a report and close it, the file's at 120-150 MB!), and the import process is slow and very manual (currently, the CSV files are created by the same PL/SQL script that runs the main process, so there's next to no intervention on my part). I also tried an Access database with tables linked to the database tables that store the report data, and that was many times slower (for some reason, sqlplus could query and generate the report file in a minute or so while Access would take anywhere from 2-5 minutes for the same data). (If it helps, the data processing system is written in PL/SQL and runs on Oracle 10g.)

  • Viewing recently edited etherpads (note new 'etherpad' tag for open sourced etherpad code!)

    - by dreeves
    Prescript: the amazing EtherPad was recently open sourced. Get it here: http://code.google.com/p/etherpad. This is the first question that I know of on Stack Overflow about the etherpad code. If you're part of the etherpad open source community, you might want to subscribe to the RSS feed for questions tagged 'etherpad', just in case this catches on!

    My actual question, which assumes you have etherpad installed on your own server: first, here's a query to view recently edited pads:

        SELECT id, lastWriteTime, creationTime, headRev
        FROM PAD_SQLMETA
        ORDER BY lastWriteTime, headRev;

    Or, if you want to run it from a Unix prompt:

        mysql -u root -pPASSWD etherpad -e "select id,lastWriteTime,creationTime,headRev from PAD_SQLMETA order by lastWriteTime, headRev"

    That's handy; however, lastWriteTime actually gets updated every time someone so much as views a pad in their browser. I'd rather sort the pads by when they were actually last edited. There's probably a fancy SQL query involving a join with another table that would show the actual last edit time. Does anyone know what that is? Alternatively, you could have a script that notices when headRev changes, but that doesn't seem like the cleanest way to do it.
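    The shape of such a query would be a join from PAD_SQLMETA to whatever table stores one row per revision, taking the maximum revision time per pad. A sketch, where the PAD_REVS name and its columns are guesses rather than the actual etherpad schema; check the schema for the real per-revision table:

        SELECT m.id, MAX(r.savedTime) AS lastEditTime, m.headRev
        FROM PAD_SQLMETA m
        JOIN PAD_REVS r ON r.padId = m.id
        GROUP BY m.id, m.headRev
        ORDER BY lastEditTime;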

  • DBD::CSV: Problem with userdefined functions

    - by sid_com
    From the SQL::Statement::Functions documentation:

        Creating User-Defined Functions
        ...
        More complex functions can make use of a number of arguments always
        passed to functions automatically. Functions always receive these
        values in @_:

            sub FOO { my( $self, $sth, $rowhash, @params ); }

    Here is my test script:

        #!/usr/bin/env perl
        use 5.012;
        use warnings;
        use strict;
        use DBI;

        my $dbh = DBI->connect( "DBI:CSV:", undef, undef, { RaiseError => 1, } );
        my $table = 'wages';
        my $array_ref = [
            [ 'id', 'number' ],
            [ 0, 6900 ],
            [ 1, 3200 ],
            [ 2, 1800 ],
        ];
        $dbh->do( "CREATE TEMP TABLE $table AS import( ? )", {}, $array_ref );

        sub routine {
            my $self    = shift;
            my $sth     = shift;
            my $rowhash = shift;
            return $_[0] / 30;
        };

        $dbh->do( "CREATE FUNCTION routine" );
        my $sth = $dbh->prepare( "SELECT id, routine( number ) AS result FROM $table" );
        $sth->execute();
        $sth->dump_results();

    When I try this I get an error message:

        DBD::CSV::st execute failed: Use of uninitialized value $_[0] in division (/) at ./so.pl line 27.
        [for Statement "SELECT id, routine( number ) AS result FROM "wages""] at ./so.pl line 34.

    When I comment out the third argument, it works as expected (because it looks as if the third argument is missing):

        sub routine {
            my $self = shift;
            my $sth  = shift;
            #my $rowhash = shift;
            return $_[0] / 30;
        };

    Output:

        0, 230
        1, 106.667
        2, 60
        3 rows

    Is this a bug?

  • MySQL: SELECT highest column value when WHERE finds similar entries

    - by Ike
    My question is comparable to this one, but not quite the same. I have a database with a huge number of books, with different editions of some of the same book titles. I'm looking for an SQL statement giving me the highest edition number of each of the titles I'm selecting with a WHERE clause (to find specific book series). Here's what the table looks like:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |01|Serie One Title One      |1      |2007|
        |02|Serie One Title One      |2      |2008|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |05|Serie One Title Three    |1      |2008|
        |06|Serie One Title Three    |2      |2009|
        |07|Serie One Title Three    |3      |2010|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The result I'm looking for is this:

        |id|title                    |edition|year|
        |--|-------------------------|-------|----|
        |03|Serie One Title One      |3      |2009|
        |04|Serie One Title Two      |1      |2001|
        |08|Serie One Title Three    |4      |2011|
        |--|-------------------------|-------|----|

    The closest I got was using this statement:

        select id, title, max(edition), max(year)
        from books
        where title like "serie one%"
        group by name;

    but it returns the highest edition and year and includes the first id it finds:

        |--|-----------------------|-------|----|
        |01|Serie One Title One    |3      |2009|
        |04|Serie One Title Two    |1      |2001|
        |05|Serie One Title Three  |4      |2011|
        |--|-----------------------|-------|----|

    This fancy join also comes close, but doesn't give the right result:

        select b.id, b.title, b.edition, b.year
        from books b
        inner join (select name, max(edition) as maxedition
                    from books
                    group by title) g
                on b.edition = g.maxedition
        where b.title like "serie one%"
        group by title;

    Using this I'm getting unique titles, but mostly old editions.
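    The second attempt is close; the join just needs to match on the title as well, otherwise any book whose edition happens to equal another title's maximum slips through. A sketch of the corrected query:

        select b.id, b.title, b.edition, b.year
        from books b
        inner join (select title, max(edition) as maxedition
                    from books
                    group by title) g
                on b.title = g.title and b.edition = g.maxedition
        where b.title like "serie one%";

    With the pair (title, maxedition) as the join key, each title returns exactly one row (assuming edition numbers are unique per title), and no GROUP BY is needed in the outer query.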

  • How do I make software that preserves database integrity and correctness? Please help, confused.

    - by user287745
    I have made an application project in VS 2008 (C#, with a SQL Server database from VS 2008). The database has about 20 tables with many fields in each, and I have made an interface for adding, deleting, editing and retrieving data according to the predefined needs of the users. Now I have to:

    1) Turn the project into software I can deliver to my professor: he should be able to just double-click the icon and the software simply starts; no VS 2008 needed to start debugging.

    2) The database will be on one powerful computer (dual core, latest everything, Win XP) and users will access it from other computers connected over the LAN. I am able to change the connection string to the shared database using VS 2008 / the debugger whenever the server changes, but how am I supposed to do that once it is a standalone piece of software?

    3) There will be many clients. Am I supposed to give the same software to everyone, so they can all connect to the database? How will the integrity and correctness of the database be maintained? I mean, the db.mdf file will be in a folder shared with read and write access, so it is not guaranteed that only one user will write at a time. Is there any coding for this?

    Please help me out here; I am stuck and do not know what to do, and I have no practical experience. I would appreciate all the help. Thank you.

  • Large scale Merge Replication strategy - what can go wrong?

    - by niidto
    Hi, I'm developing a piece of software that uses merge replication and SQL Compact on Windows Mobile 6. At the moment it is running on 5 devices reasonably well. The issues I've come up against are as follows:

    - The schema has had to change a lot, and will continue to change as the application evolves. There have been various errors replicating these schema changes down to the devices, with uploads failing due to schema inconsistencies.
    - Subscriptions expiring (after 14 days) and being unable to reinitialize with upload, i.e. potential data loss of unsynced data up to that point.

    Basically, the worst-case scenario is data loss, and when merge replication fails there seems to be no way to get the data back off the device. My method until now has been to drop and re-create the subscription on the device. I don't hear of many people doing this, though it seems to solve everything. The long-term plan is to roll this out to 500+ devices. Any advice from people who have undertaken similar projects, on how to minimise data loss and put appropriate error handling in place to recover from sync failures, would be much appreciated. James

  • Faster Insertion of Records into a Table with SQLAlchemy

    - by Kyle Brandt
    I am parsing a log and inserting it into either MySQL or SQLite using SQLAlchemy and Python. Right now I open a connection to the DB, and as I loop over each line, I insert it after it is parsed (this is just one big table right now; I'm not very experienced with SQL). I then close the connection when the loop is done. The summarized code is:

        log_table = schema.Table('log_table', metadata,
                                 schema.Column('id', types.Integer, primary_key=True),
                                 schema.Column('time', types.DateTime),
                                 schema.Column('ip', types.String(length=15)))
        ....
        engine = create_engine(...)
        metadata.bind = engine
        connection = engine.connect()
        ....
        for line in file_to_parse:
            m = line_regex.match(line)
            if m:
                fields = m.groupdict()
                pythonified = pythoninfy_log(fields)  # Turn them into ints, datetimes, etc
                if use_sql:
                    ins = log_table.insert(values=pythonified)
                    connection.execute(ins)
                    parsed += 1

    My two questions are:

    - Is there a way to speed up the inserts within this basic framework? Maybe a queue of inserts and some insertion threads, some sort of bulk insert, etc.?
    - When I used MySQL, for about ~1.2 million records the insert time was 15 minutes. With SQLite, the insert time was a little over an hour. Does that time difference between the db engines seem about right, or does it mean I am doing something very wrong?

  • SQL query error while trying to put a file in the database

    - by DaGhostman Dimitrov
    Hey guys, I have a big problem and I have no idea why. I have a few forms that upload files to the database; all of them work fine except one. I use the same query in all of them (a simple insert). I think it has something to do with the files I am trying to upload, but I am not sure. Here is the code:

        if ($_POST['action'] == 'hndlDocs') {
            $ref = $_POST['reference']; // Numeric value of
            $doc = file_get_contents($_FILES['doc']['tmp_name']);
            $link = mysqli_connect('localhost','XXXXX','XXXXX','documents');
            mysqli_query($link, "SET autocommit = 0");
            $query = "INSERT INTO documents ({$ref}, '{$doc}', '{$_FILES['doc']['type']}') ;";
            mysqli_query($link, $query);
            if (mysqli_error($link)) {
                var_dump(mysqli_error($link));
                mysqli_rollback($link);
            } else {
                print("<script> window.history.back(); </script>");
                mysqli_commit($link);
            }
        }

    The database has only these fields:

        DATABASE documents (
            reference INT(5) NOT NULL,  -- it is unsigned zerofill
            doc LONGBLOB NOT NULL,      -- this should contain the binary data
            mime_type TEXT NOT NULL     -- the mime type of the file; php allows only application/pdf and image/jpeg
        );

    And the error I get is:

        You have an error in your SQL syntax; check the manual that corresponds to your MySQL
        server version for the right syntax to use near '00001, '????' at line 1

    I will appreciate every help. Cheers!
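    The error text "near '00001'" is consistent with the missing VALUES keyword: as written, the string renders as INSERT INTO documents (00001, '...', '...'), which is not valid SQL. A sketch of the intended shape; the binary data also needs escaping (e.g. mysqli_real_escape_string, or better a prepared statement with a bound blob parameter), since raw PDF/JPEG bytes will contain quote characters that break the literal:

        INSERT INTO documents (reference, doc, mime_type)
        VALUES (1, '<escaped binary data>', 'application/pdf');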

  • Strange data swapping error occurs when I attempt to update rows in my table from another table in m

    - by Wesley
    So I have a table of data that is 10,000 rows long. Several of the columns in the table simply describe information about one of the columns; meaning, only one column has the content, and the rest of the columns describe the location of the content (it's for a book). Right now, only 6,000 of the 10,000 rows have the content column filled in; rows 6,000-10,000's content column simply says null. I have another table in the db that has the content for rows 6,000-10,000, with the correct corresponding primary key, which would (seemingly) make it easy to update the 10,000-row table. I have been trying an update query such as the following:

        UPDATE table(10,000)
        SET content_column = (SELECT content
                              FROM table(6,000-10,000)
                              WHERE table(10,000).id = table(6,000-10,000).id)

    which kind of works. The only problem is that it pulls in the data from the second table just fine, but it replaces the existing content column with null: rows 1-6,000's content column becomes null, and rows 6,000-10,000's content column has the correct values... Pretty strange, I thought, anyway. Does anybody have any thoughts about where I am going wrong? If you could show me a better SQL query, I would appreciate it! Thanks
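    That behavior is what a correlated subquery with no match produces: for rows 1-6,000 the subquery finds nothing in the second table and returns NULL, which then overwrites the good content. Restricting the update to rows that actually have a match fixes it; a sketch with stand-in table names:

        UPDATE main_table
        SET content_column = (SELECT s.content
                              FROM second_table s
                              WHERE s.id = main_table.id)
        WHERE EXISTS (SELECT 1
                      FROM second_table s
                      WHERE s.id = main_table.id);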

  • WCF with MANY database connections

    - by Jorge Dominguez
    I'm working on the development of an ERP-type .NET WinForms application consuming a WCF service. It's to be used by many small companies (in the range of 100-200). The database is SQL Server 2008 and the service will be hosted as a Windows service. Even though there will be a single DB server, our customer insists on having separate databases for each company. That is because of stability/support concerns (like a DB being damaged or taken offline for some reason, thus affecting all clients), concerns coming from previous experiences (not necessarily with the same platform). With a single database, connections to the DB would be opened at service startup and pooling used; but I'm not sure how connections could be managed in a multiple-DB scenario:

    - Could a connection to the corresponding DB be opened and closed for each service request? Would performance be acceptable?
    - If a connection is opened and maintained for each company accessing the system, what's the practical limit of open connections (to different databases)?

    It would be very interesting to hear your opinions and suggestions for this situation. Thanks

  • Efficient update of SQLite table with many records

    - by blackrim
    I am trying to use SQLite (sqlite3) for a project to store hundreds of thousands of records (I would like SQLite so users of the program don't have to run a [my]sql server). I sometimes have to update hundreds of thousands of records to enter left/right values (they are hierarchical), but have found the standard

        update table set left_value = 4, right_value = 5 where id = 12340;

    to be very slow. I have tried surrounding every thousand or so with

        begin;
        ....
        update table set left_value = 4, right_value = 5 where id = 12340;
        update ...
        ....
        commit;

    but again, very slow. Odd, because when I populate it with a few hundred thousand records (with inserts), it finishes in seconds. I am currently trying to test the speed in Python (the slowness shows both at the command line and in Python) before I move it to the C++ implementation, but right now this is way too slow and I need to find a new solution, unless I am doing something wrong. Thoughts? (I would take an open source alternative to SQLite that is portable as well.)
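    A sketch of the two usual levers, with mytable as a stand-in name: make sure the lookup column is indexed (if id is declared INTEGER PRIMARY KEY it already is; otherwise each UPDATE ... WHERE id = ? scans the whole table), and relax the per-commit fsync while bulk loading:

        -- without an index, every update is a full table scan
        CREATE INDEX IF NOT EXISTS idx_mytable_id ON mytable(id);

        -- trade some durability for speed during the bulk update only
        PRAGMA synchronous = OFF;

        BEGIN;
        UPDATE mytable SET left_value = 4, right_value = 5 WHERE id = 12340;
        -- ... the remaining updates ...
        COMMIT;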

  • T-SQL: how to sort table rows based on 2 columns

    - by Criss Nautilus
    I've been quite stuck with this problem for some time now. How do I sort column A depending on the contents of column B? I have this sample:

        ID  count  ColumnA  ColumnB
        12  1      A        B
        13  2      C        D
        14  3      B        C

    I want to sort it like this:

        ID  count  ColumnA  ColumnB
        12  1      A        B
        14  3      B        C
        13  2      C        D

    So I need to sort the rows such that the previous row's ColumnB = the next row's ColumnA. I'm thinking a loop, but I can't quite imagine how it would work... I was thinking it would go like this (maybe):

        SELECT a.ID, a.ColumnA, a.ColumnB
        FROM TableA a WITH (NOLOCK)
        LEFT JOIN TableA b WITH (NOLOCK)
               ON a.ID = b.ID and a.counts = b.counts
        WHERE a.columnB = b.ColumnA

    The above code isn't working, though, and I was thinking more along the lines of...

        DECLARE @counts int = 1
        DECLARE @done int = 0
        --WHILE @done = 0
        BEGIN
            SELECT a.ID, a.ColumnA, a.ColumnB
            FROM TableA a WITH (NOLOCK)
            LEFT JOIN TableA b WITH (NOLOCK)
                   ON a.ID = b.ID and a.counts = @counts
            WHERE a.columnB = b.ColumnA
            SET @count = @count + 1
        END

    If this were C code it would be easier for me, but T-SQL's syntax is making it a bit harder for a noobie like me.
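    Chains like this are usually walked with a recursive CTE rather than a loop. A sketch, assuming a single unbroken chain where each ColumnB value appears at most once as a ColumnA; the anchor is the row whose ColumnA never appears as any ColumnB:

        ;WITH chain AS (
            SELECT ID, [count], ColumnA, ColumnB, 1 AS seq
            FROM TableA a
            WHERE NOT EXISTS (SELECT 1 FROM TableA b WHERE b.ColumnB = a.ColumnA)
            UNION ALL
            -- follow the link: previous row's ColumnB = next row's ColumnA
            SELECT t.ID, t.[count], t.ColumnA, t.ColumnB, c.seq + 1
            FROM TableA t
            JOIN chain c ON t.ColumnA = c.ColumnB
        )
        SELECT ID, [count], ColumnA, ColumnB
        FROM chain
        ORDER BY seq;

    On the sample data this anchors at (A, B), then follows B to (B, C) and C to (C, D), giving the 12, 14, 13 ordering.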

  • Assigning values to labels from the database depending on listbox values

    - by SurajVitekar
    I want to assign the values of five labels from the database according to the value selected in the listbox. The DB query returns a single column with multiple records. Please help. I'm working with C# 2010 and MS SQL. My current code is:

        private void listBox1_SelectedIndexChanged(object sender, EventArgs e)
        {
            try
            {
                String c1, c2;
                c1 = "NULL";
                MessageBox.Show("LB index :" + listBox1.SelectedIndex.ToString());
                //p = listBox1.SelectedItem.ToString();
                SqlConnection con = new SqlConnection();
                con.ConnectionString = "Data Source=localhost;Initial Catalog=eVoting;Integrated Security=True;Pooling=False";
                con.Open();
                MessageBox.Show("List bOx sect :" + listBox1.SelectedValue.ToString());
                SqlCommand cmd = new SqlCommand("select Firstname from candidates where position ='" + listBox1.SelectedValue.ToString() + "'", con);
                // index is incremented once per row below, but is then used as the
                // column ordinal in reader[index], although the query returns one column
                int index = 0;
                SqlDataReader reader = cmd.ExecuteReader();
                while (reader.Read())
                {
                    if (index == 0)
                    {
                        c1 = reader[index].ToString();
                        radioButton1.Text = c1;
                    }
                    if (index == 1)
                    {
                        c1 = reader[index].ToString();
                        radioButton2.Text = c1;
                    }
                    if (index == 2)
                    {
                        c1 = reader[index].ToString();
                        radioButton3.Text = c1;
                    }
                    if (index == 3)
                    {
                        c1 = reader[index].ToString();
                        radioButton4.Text = c1;
                    }
                    if (index == 4)
                    {
                        c1 = reader[index].ToString();
                        radioButton4.Text = c1;  // note: same control as index == 3; radioButton5 is only reached at index == 5
                    }
                    if (index == 5)
                    {
                        c1 = reader[index].ToString();
                        radioButton5.Text = c1;
                    }
                    MessageBox.Show("c1 :" + c1);
                    index++;
                }
            }
            catch (Exception E)
            {
            }
        }

  • SQL IN operator in R-Shiny

    - by Piyush
    I am taking a multiple selection for the component as per the code below:

        selectInput("cmpnt", "Choose Component:",
                    choices = as.character(levels(Material_Data()$CMPNT_NM)),
                    multiple = TRUE)

    But when I try to write a SQL statement as given below, it's not working, nor is it throwing any error message. When I was selecting one option at a time (without multiple = TRUE) it was working (since I was using the "=" operator), but with multiple = TRUE I need to use the IN operator, which is not working:

        Input_Data2 <- fn$sqldf(
            paste0("select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))

    Thanks in advance for any help on this. Thanks jdharrison! Please find the detailed code:

        # server.R
        library(RODBC)
        library(shiny)
        library(sqldf)

        Input_Data <- readRDS("InputSource.rds")
        Mtrl <- factor(Input_Data$MTRL_NBR)
        Mtrl_List <- levels(Mtrl)

        shinyServer(function(input, output) {

          # First UI input (Service column) filter clientData
          output$Choose_Material <- renderUI({
            if (is.null(clientData()))
              return("No client selected")
            selectInput("mtrl", "Choose Material:",
                        choices = as.character(levels(clientData()$MTRL_NBR)),
                        selected = input$mtrl)
          })

          # Second UI input (Rounds column) filter service-filtered clientData
          output$Choose_Component <- renderUI({
            if (is.null(input$mtrl))
              return()
            if (is.null(Material_Data()))
              return("No service selected")
            selectInput("cmpnt", "Choose Component:",
                        choices = as.character(levels(Material_Data()$CMPNT_NM)),
                        multiple = TRUE)
          })

          # First data load (client data)
          clientData <- reactive({
            # get(input$Input_Data)
            return(Input_Data)
          })

          # Second data load (filter by service column)
          Material_Data <- reactive({
            dat <- clientData()
            if (is.null(dat))
              return(NULL)
            if (!is.null(input$mtrl))  # !
              dat <- dat[dat$MTRL_NBR %in% input$mtrl, ]
            dat <- droplevels(dat)
            return(dat)
          })

          output$Choose_Columns <- renderUI({
            if (is.null(input$mtrl))
              return()
            if (is.null(input$cmpnt))
              return()
            colnames <- names(Input_Data)
            checkboxGroupInput("columns", "Choose Columns To Display The Data:",
                               choices = colnames, selected = colnames)
          })

          output$text <- renderText({
            print(input$cmpnt)
          })

          output$data_table <- renderTable({
            if (is.null(input$mtrl))
              return()
            if (is.null(input$columns) || !(input$columns %in% names(Input_Data)))
              return()
            Input_Data1 <- Input_Data[, input$columns, drop = FALSE]
            cmpnt1 <- input$cmpnt
            mtrl1 <- input$mtrl
            Input_Data2 <- fn$sqldf(
                paste0("select * from Input_Data1 where MTRL_NBR = '$mtrl1' and CMPNT_NM in ('$cmpnt1')"))
            head(Input_Data2, 10)
          })
        })

  • Using a variable derived from a drop-down list as the column name in a select statement ... Access DB

    - by user1459698
    I'm working with the world's worst DB, which was already here, so don't blame me for that. Here's what I have so far:

        module = txtModule.value
        presstype = txtPressType.value
        SQL_query = "SELECT * FROM tbl_spareparts WHERE '" & module & "' <> '""' AND '" & module & "' = '" & presstype & "' AND Manufacturer = '" & txtsrch.value & "' ORDER BY SAP_Part_No"
        Set rsData = conn.Execute(SQL_query)

    This produces the following SQL statement:

        SELECT * FROM tbl_spareparts WHERE 'Banyan_Module' <> '"' AND 'Banyan_Module' = 'PB' AND Manufacturer = 'Tester' ORDER BY SAP_Part_No

    Is there any way I can use the module variable as a column name? Obviously the quotes around the column name are causing an error. This is really bothering me. BTW, I'm writing this in VBScript inside a .HTA application page, as it has to run locally on tech PCs. Thanks. R.
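    The fix is to splice the variable into the SQL text as an identifier rather than a quoted string literal, e.g. building the string with "... WHERE [" & module & "] <> '' ...". For module = "Banyan_Module" the query would then render like this sketch (square brackets protect Access column names that contain spaces; the <> '' test assumes the original comparison against '"' was meant as "column not empty"):

        SELECT *
        FROM tbl_spareparts
        WHERE [Banyan_Module] <> ''
          AND [Banyan_Module] = 'PB'
          AND Manufacturer = 'Tester'
        ORDER BY SAP_Part_No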
