Search Results

Search found 10424 results on 417 pages for 'persisted column'.


  • Using jQuery to parse an RSS feed, having trouble in Firefox and Chrome.

    - by sjmarshy
    I used a jQuery library called jFeed to parse and display my blogs rss feed on my personal website. It worked perfectly well at first, but upon checking later it simply displays nothing, except in Internet Explorer, where it seems to work fine. After checking the javascript console using Firebug in Firefox, it shows an error in the 'XML' tab as follows: XML Parsing Error: no element found Location: moz-nullprincipal:{3f8a0c62-32b4-4f63-b69c- 9ef402b40b64} Line Number 1, Column 1: ^ Though I have no idea what to do with this information. Here is the code I used to get the rss feed and display it (it is almost exactly the same as the example provided by the jFeed website): jQuery.getFeed({ url: 'http://sammarshalldesign.co.uk/blog/wordpress/?feed=rss2', success: function(feed) { var html = ''; for(var i = 0; i < feed.items.length && i < 5; i++) { var item = feed.items[i]; html += '<h3>' + '<a href="' + item.link + '">' + item.title + '</a>' + '</h3>'; html += '<div>' + item.description + '</div>'; }//end for jQuery('#feed').append(html); }//end feed function });//end getfeed Any help would be really appreciated.
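    A hedged guess at the cause, with a sketch: Firefox and Chrome enforce the same-origin policy on the XMLHttpRequest that jFeed makes, so a feed served from a different origin comes back empty (the moz-nullprincipal "no element found" error is consistent with an empty response), while IE may be answering from cache or laxer settings. Fetching the feed through a same-domain proxy script sidesteps that; proxy.php below is hypothetical.

        // Same-origin workaround sketch: proxy.php (hypothetical) fetches the remote
        // feed server-side and echoes it back from the page's own domain.
        jQuery.getFeed({
            url: 'proxy.php?feed=' + encodeURIComponent(
                     'http://sammarshalldesign.co.uk/blog/wordpress/?feed=rss2'),
            success: function (feed) {
                var html = '';
                for (var i = 0; i < feed.items.length && i < 5; i++) {
                    var item = feed.items[i];
                    html += '<h3><a href="' + item.link + '">' + item.title + '</a></h3>';
                    html += '<div>' + item.description + '</div>';
                }
                jQuery('#feed').append(html);
            }
        });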

    Read the article

  • Retrieving many huge sized EPS files and converting them to JPEG in ASP.NET application

    - by Ashish Gupta
    I have many (600) EPS files (300 KB - 1 MB) in a database. In my ASP.NET application (using ASP.NET 4.0) I need to retrieve them one by one and call a web service which would convert the content to a JPEG file and update the database (the JPEGContent column with the JPEG content). However, just retrieving the content for all 600 takes too long, even from SQL Server Management Studio (5 minutes for 10 EPS contents). So I have two issues: 1) How to get the EPS content (unfortunately, selecting only a certain number of contents is not an option :-( ): Approach 1:- foreach(var DataRow in DataTable.Rows) { // get the Id and byte[] of EPS // Call the web method to convert EPS content to JPEG which would also update the database. } or foreach(var DataRow in DataTable.Rows) { // get only the Id of EPS // Hit database to get the content of EPS // Call the web method to convert EPS content to JPEG which would also update the database. } or Any other approach? 2) Converting EPS to JPEG using a web method for 600 contents. Of course, each call would be a long-running operation. Would the Task Parallel Library (TPL) be a better way to achieve this? Also, is doing the entire thing in a SQL CLR function a good idea?
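    A rough sketch of the TPL idea, assuming the IDs are fetched first and then the per-item fetch plus conversion call run in parallel; LoadEpsContentById and conversionClient.ConvertEpsToJpeg are placeholders, not the real data-access or service API:

        using System.Data;
        using System.Linq;
        using System.Threading.Tasks;

        // Fetch only the IDs up front, then fan the slow per-item work out in parallel
        // so the EPS fetch and the conversion call for different rows overlap.
        var ids = epsTable.AsEnumerable()                        // epsTable: DataTable holding only IDs
                          .Select(r => r.Field<int>("Id"))
                          .ToList();

        Parallel.ForEach(ids, new ParallelOptions { MaxDegreeOfParallelism = 4 }, id =>
        {
            byte[] eps = LoadEpsContentById(id);                 // hypothetical single-row fetch
            conversionClient.ConvertEpsToJpeg(id, eps);          // hypothetical web method call
        });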

    Read the article

  • WPF DataGrid binding difficulties

    - by Jasmin Pvvlovic
    This is the class: public class TrainingData { public string Training { get; set; } } And this is the rest of the code in MainWindow: Excel.Workbook xlWorkbook = xlApp.Workbooks.Open("D:/excel.xlsx"); Excel._Worksheet xlWorksheet = xlWorkbook.Sheets[1]; Excel.Range xlRange = xlWorksheet.UsedRange; List <TrainingData> tData= new List <TrainingData>(); int rowCount = xlRange.Rows.Count; int colCount = xlRange.Columns.Count; //int k = 0; for (int i = 1; i <= rowCount; i++) { tData.Add(new TrainingData() { Training = xlRange.Cells[i, 1].Value2.ToString() }); //MessageBox.Show(tData[k].Training); //k++; } Prikaz.ItemsSource = tData; DataGrid: <DataGrid AutoGenerateColumns="False" Height="120" HorizontalAlignment="Left" Margin="12,12,0,0" Name="Prikaz" VerticalAlignment="Top" Width="105" ItemsSource="{Binding}"> <DataGrid.Columns> <DataGridTextColumn Header="Header" /> </DataGrid.Columns> </DataGrid> So, Prikaz is my DataGrid and tData is a List of TrainingData objects. If I uncomment those three lines I can test whether I have imported the information from the Excel file correctly, and yes, that works just fine. So why am I getting an empty DataGrid? It has the right number of rows and only one column, which is fine, but there is no data in it. I used this line: Prikaz.ItemsSource = tData; to bind my list of objects to the DataGrid. Training is declared public, so it should be available to the DataGrid. What could be causing the problem?
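    A likely fix, sketched below: with AutoGenerateColumns="False", the manually declared column never says which property it should display, so every cell stays empty even though the rows are there. Binding the column to the Training property (the header text here is just an example) should populate it:

        <DataGrid AutoGenerateColumns="False" Name="Prikaz" ItemsSource="{Binding}">
            <DataGrid.Columns>
                <!-- The Binding, not the Header, decides what each cell shows -->
                <DataGridTextColumn Header="Training" Binding="{Binding Training}" />
            </DataGrid.Columns>
        </DataGrid>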

    Read the article

  • ActiveRecord table inheritance using set_table_name

    - by Jinyoung Kim
    Hi, I'm using ActiveRecord in Ruby on Rails. I have a table named documents(Document class) and I want to have another table data_documents(DataDocument) class which is effectively the same except for having different table name. In other words, I want two tables with the same behavior except for table name. class DataDocument < Document #set_table_name "data_documents" self.table_name = "data_documents" end My solution was to use class inheritance as above, yet this resulted in inconsistent SQL statement for create operation where there are both 'documents' table and 'data_documents' table. Can you figure out why and how I can make it work? >> DataDocument.create(:did=>"dd") ActiveRecord::StatementInvalid: Mysql::Error: Unknown column 'data_documents.did' in 'where clause': SELECT `documents`.id FROM `documents` WHERE (`data_documents`.`did` = BINARY 'dd') LIMIT 1 from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract_adapter.rb:212:in `log' from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:320:in `execute' from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/mysql_adapter.rb:595:in `select' from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/database_statements.rb:7:in `select_all_without_query_cache' from /Users/lifidea/.gem/ruby/1.8/gems/activerecord-2.3.2/lib/active_record/connection_adapters/abstract/query_cache.rb:62:in `select_all'
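    One possible workaround, sketched below on the assumption that the goal is shared behaviour rather than real single-table inheritance: subclassing an ActiveRecord model makes Rails treat DataDocument as STI over the documents table, which is where the mixed documents/data_documents SQL comes from. Sharing the code through a module keeps each class on its own table:

        # Shared behaviour lives in a module instead of a parent model class,
        # so each ActiveRecord class maps cleanly to its own table.
        module DocumentBehaviour
          # common validations, scopes and methods go here
        end

        class Document < ActiveRecord::Base
          include DocumentBehaviour
        end

        class DataDocument < ActiveRecord::Base
          include DocumentBehaviour
          set_table_name "data_documents"   # Rails 2.3 style, as in the question
        end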

    Read the article

  • Hibernate : Opinions in Composite PK vs Surrogate PK

    - by Albert Kam
    As I understand it, whenever I use @Id and @GeneratedValue on a Long field inside a JPA/Hibernate entity, I'm actually using a surrogate key, and I think this is a very nice way to define a primary key considering my not-so-good experiences with composite primary keys, where: more than one combination of business-value columns can form a unique PK; the composite PK values get duplicated across the detail tables; and the business values inside that composite PK cannot be changed. I know Hibernate supports both types of PK, but I'm left wondering because of previous chats with experienced colleagues, who said that a composite PK is easier to deal with when writing complex SQL queries and stored procedures. They went on to say that surrogate keys complicate joins, and that there are several situations where it's impossible to do certain things with surrogate keys. I'm sorry I can't explain the details here, since I wasn't clear enough when they explained it; maybe I'll add more details next time. I'm currently working on a project and want to try out surrogate keys, since they don't get duplicated across tables and we can change the business-column values. And when some combination of business values needs to be unique, I can use something like: @Table(name="MY_TABLE", uniqueConstraints={ @UniqueConstraint(columnNames={"FIRST_NAME", "LAST_NAME"}) // name + lastName combination must be unique But I'm still in doubt because of the previous discussion about composite keys. Could you share your experiences in this matter? Thank you!

    Read the article

  • Avoiding CheckStyle magic number errors in JDBC queries.

    - by Dan
    Hello, I am working on a group project for class and we are trying out CheckStyle. I am fairly comfortable with Java but have never touched JDBC or done any database work before this. I was wondering if there is an elegant way to avoid magic number errors in preparedStatement calls, consider: preparedStatement = connect.prepareStatement("INSERT INTO shows " + "(showid, showtitle, showinfo, genre, youtube)" + "values (default, ?, ?, ?, ?);"); preparedStatement.setString(1, title); preparedStatement.setString(2, info); preparedStatement.setString(3, genre); preparedStatement.setString(4, youtube); result = preparedStatement.executeUpdate(); The setString methods get flagged as magic numbers, so far I just added the numbers 3-10 or so to the ignore list for magic numbers but I was wondering if there was a better way to go about inserting those values into the statement. I also beg you for any other advice that comes to mind seeing that code, I'd like to avoid developing any nasty habits, e.g. should I be using Statement or is PreparedStatement fine? Will that let me refer to column names instead? Is that ideal? etc... Thanks!
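    A small sketch of one way around the warnings without suppressing the check: derive the 1-based JDBC parameter index from a loop instead of hard-coding it, so no literal above the CheckStyle threshold appears. (PreparedStatement is fine for this; parameters are bound by position, not by column name, either way.)

        // Bind the values from an array so the parameter index is computed
        // rather than written as a literal.
        String[] params = { title, info, genre, youtube };
        for (int i = 0; i < params.length; i++) {
            preparedStatement.setString(i + 1, params[i]);
        }
        result = preparedStatement.executeUpdate();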

    Read the article

  • Why can't I set boolean columns with update?

    - by Benjamin Oakes
    I'm making a user administration page. For the system I'm creating, users need to be approved. Sometimes, there will be many users to approve, so I'd like to make that easy. I'm storing this as a boolean column called approved. I remembered the Edit Multiple Individually Railscast and thought it would be a great fit. However, I'm running into problems which I traced back to ActiveRecord::Base#update. update works fine in this example: >> User.all.map(&:username) => ["ben", "fred"] >> h = {"1"=>{'username'=>'benjamin'}, "2"=>{"username"=>'frederick'}} => {"1"=>{"username"=>"benjamin"}, "2"=>{"username"=>"frederick"}} >> User.update(h.keys, h.values) => ... >> User.all.map(&:username) => ["benjamin", "frederick"] But not this one: >> User.all.map(&:approved) => [true, nil] >> h = {"1"=>{'approved'=>'1'}, "2"=>{'approved'=>'1'}} >> User.update(h.keys, h.values) => ... >> User.all.map(&:approved) => [true, nil] Changing '1' to true didn't make a difference when I tested. What am I doing wrong?
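    A guess worth checking, sketched below: ActiveRecord::Base#update assigns the hashes through mass assignment, so if the User model whitelists attributes with attr_accessible (or blacklists with attr_protected) and approved is not on the list, the value is silently dropped while username still goes through. Both the whitelist fix and a bypass that sets the attribute directly are shown; the model details are assumptions:

        # Option 1: make the attribute mass-assignable (assuming attr_accessible is in use).
        class User < ActiveRecord::Base
          attr_accessible :username, :approved
        end

        # Option 2: skip mass assignment entirely and set the flag directly.
        h.each do |id, attrs|
          user = User.find(id)
          user.approved = (attrs['approved'] == '1')
          user.save
        end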

    Read the article

  • Can a SQL Server 2005 pivot table have nText passed into it?

    - by manemawanna
    Right bit of a simple question can I input nText into a pivot table? (SQL Server 2005) What I have is a table which records the answers to a questionnaire consisting of the following elements for example: UserID QuestionNumber Answer Mic 1 Yes Mic 2 No Mic 3 Yes Ste 1 Yes Ste 2 No Ste 3 Yes Bob 1 Yes Bob 2 No Bob 3 Yes With the answers being held in nText. Anyway what id like a Pivot table to do is: UserID 1 2 3 Mic Yes No Yes Ste Yes No Yes Bob Yes No Yes I have some test code, that creates a pivot table but at the moment it just shows the number of answers in each column (code can be found below). So I just want to know is it possible to add nText to a pivot table? As when I've tried it brings up errors and someone stated on another site that it wasn't possible, so I would like to check if this is the case or not. Just for further reference I don't have the opportunity to change the database as it's linked to other systems that I haven't created or have access too. Heres the SQL code I have at present below: DECLARE @query NVARCHAR(4000) DECLARE @count INT DECLARE @concatcolumns NVARCHAR(4000) SET @count = 1 SET @concatcolumns = '' WHILE (@count <=52) BEGIN IF @COUNT > 1 AND @COUNT <=52 SET @concatcolumns = (@concatcolumns + ' + ') SET @concatcolumns = (@concatcolumns + 'CAST ([' + CAST(@count AS NVARCHAR) + '] AS NVARCHAR)') SET @count = (@count+1) END DECLARE @columns NVARCHAR(4000) SET @count = 1 SET @columns = '' WHILE (@count <=52) BEGIN IF @COUNT > 1 AND @COUNT <=52 SET @columns = (@columns + ',') SET @columns = (@columns + '[' + CAST(@count AS NVARCHAR) + '] ') SET @count = (@count+1) END SET @query = ' SELECT UserID, ' + @concatcolumns + ' FROM( SELECT UserID, QuestionNumber AS qNum from QuestionnaireAnswers where QuestionnaireID = 7 ) AS t PIVOT ( COUNT (qNum) FOR qNum IN (' + @columns + ') ) AS PivotTable' select @query exec(@query)
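    A sketch of the usual workaround (trimmed to three question columns here for brevity): PIVOT cannot aggregate ntext directly, so cast the answer to NVARCHAR(MAX) in the source query and pivot with MAX() instead of COUNT():

        SELECT UserID, [1], [2], [3]
        FROM (
            SELECT UserID,
                   QuestionNumber,
                   CAST(Answer AS NVARCHAR(MAX)) AS AnswerText
            FROM QuestionnaireAnswers
            WHERE QuestionnaireID = 7
        ) AS src
        PIVOT (
            MAX(AnswerText) FOR QuestionNumber IN ([1], [2], [3])
        ) AS pvt;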

    Read the article

  • How to Import XML generated by TFPT into Excel 2007?

    - by keerthivasanp
    Below is the xml content generated by TFPT for the WIQL issued. When I try to import this XML into Excel 2007 XML source pane shows only "Id", "RefName" and "Value" as fields to be mapped. Whereas I would like to display System.Id, System.State, Microsoft.VSTS.Common.ResolvedDate, Microsoft.VSTS.Common.StateChangeDate as column headings and corresponding value as rows. Any idea how to achieve this using XML import in Excel 2007. <?xml version="1.0" encoding="utf-8"?> <WorkItems> <WorkItem Id="40100"> <Field RefName="System.Id" Value="40100" /> <Field RefName="System.State" Value="Closed" /> <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/17/2010 9:39:04 PM" /> <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 9:15:32 PM" /> </WorkItem> <WorkItem Id="44077"> <Field RefName="System.Id" Value="44077" /> <Field RefName="System.State" Value="Closed" /> <Field RefName="Microsoft.VSTS.Common.ResolvedDate" Value="3/1/2010 4:26:47 PM" /> <Field RefName="Microsoft.VSTS.Common.StateChangeDate" Value="4/20/2010 7:32:12 PM" /> </WorkItem> </WorkItems> Thanks
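    One approach, sketched as an XSL transform: flatten each Field attribute pair into an element named after its RefName before importing, so Excel's XML source pane offers one mappable field per work item column instead of the generic Id/RefName/Value trio. The element-per-RefName naming is an assumption about what maps most cleanly.

        <?xml version="1.0" encoding="utf-8"?>
        <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
          <xsl:output method="xml" indent="yes"/>
          <xsl:template match="/WorkItems">
            <WorkItems>
              <xsl:for-each select="WorkItem">
                <WorkItem>
                  <!-- System.Id becomes <System.Id>40100</System.Id>, and so on -->
                  <xsl:for-each select="Field">
                    <xsl:element name="{@RefName}">
                      <xsl:value-of select="@Value"/>
                    </xsl:element>
                  </xsl:for-each>
                </WorkItem>
              </xsl:for-each>
            </WorkItems>
          </xsl:template>
        </xsl:stylesheet>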

    Read the article

  • Javascript not able to read data generated by ajax script

    - by user1371033
    I have a situation in which my Jquery Ajax script generates HTML table. And another script is meant to filter the table column by providing a dropdown comprising of unique values in that particular column. If i have static content in html page the filter script works fine. But is not able to read table content when it is generated via Ajax that is during runtime. Any idea what could be the reason. I also tried to align script in order. My Ajax script is here:- $(document).ready(function() { $("#getResults").click(function(){ bug = $("#bug").val(); priority = $("#priority").val(); component = $("#component").val(); fixVersion = $("#fixVersion").val(); dateType = $("#dateType").val(); fromDate = $("#dp2").val(); toDate = $("#dp3").val(); $("#query").empty(); $("tbody").empty(); $.post("getRefineSearchResultsPath", {bug:bug,priority:priority,component:component, fixVersion:fixVersion,dateType:dateType,fromDate:fromDate,toDate:toDate }, function(data) { // setting value for csv report button //clear the value attribute for button first $("#query_csv").removeAttr("value"); //setting new value to "value" attribute of the csv button $("#query_csv").attr("value", function(){ return $(data).find("query").text(); }); $("#query").append("<p class='text-success'>Query<legend></legend><small>" +$(data).find("query").text() +"</small></p>"); var count = 1; $(data).find("issue").each(function(){ var $issue = $(this); var value = "<tr>"; value += "<td>" +count +"</td>"; value += "<td>" +$issue.find('issueKey').text() +"</td>"; value += "<td>" +$issue.find('type').text() +"</td>"; value += "<td><div id='list' class='summary'>" +$issue.find('summary').text() +"</div></td>"; value += "<td><div id='list' class='mousescroll'>" +$issue.find('description').text() +"</div></td>"; value += "<td>" +$issue.find('priority').text() +"</td>"; value += "<td>" +$issue.find('component').text() +"</td>"; value += "<td>" +$issue.find('status').text() +"</td>"; value += "<td>" +$issue.find('fixVersion').text() +"</td>"; value += "<td>" +$issue.find('resolution').text() +"</td>"; value += "<td>" +$issue.find('created').text() +"</td>"; value += "<td>" +$issue.find('updated').text() +"</td>"; value += "<td>" +$issue.find('os').text() +"</td>"; value += "<td>" +$issue.find('frequency').text() +"</td>"; value += "<td>"; var number_of_attachement = $issue.find('attachment').size(); if(number_of_attachement > 0){ value += "<div id='list' class='attachment'>"; value += "<ul class='unstyled'>"; $issue.find('attachment').each(function(){ var $attachment = $(this); value += "<li>"; value += "<a href='#' onclick='document.f1.attachmentName.value='" +$attachment.find('attachmentName').text(); value += "';document.f1.issueKey.value='"+$attachment.find('attachmentissueKey').text(); value += "';document.f1.digest.value='"+$attachment.find('attachmentdigest').text(); value += "';document.f1.submit();'>"+$attachment.find('attachmentName').text(); value += "</a>"; value += "</li>"; value += "<br>"; }); value +="</ul>"; value +="</div>"; } value += "</td>"; value += "</tr>"; $("tbody").append(value); count++; }); }); }); }); And my script to filter table is here, I got this script from this link http://www.javascripttoolbox.com/lib/table/ My JSP page is here:- <html> <body> <table class="table table-bordered table-condensed table-hover example sort01 table-autosort:0 table-autofilter table-autopage:10 table-page-number:t1page table-page-count:t1pages table-filtered-rowcount:t1filtercount table-rowcount:t1allcount"> <thead > <tr> <th 
class="table-sortable:numeric" Style="width:3%;">No.</th> <th class="table-sortable:default" Style="width:5.5%;">Issue Key <br> </th> <th>Type</th> <th Style="text-align: center;">Summary</th> <th Style="text-align: center;">Description</th> <th class="table-filterable table-sortable:default" id ="priorityColumn" Style="width:5%">Priority</th> <th class="table-filterable table-sortable:default" >Component</th> <th class="table-filterable table-sortable:default" Style="width:5%">Status</th> <th class="table-filterable table-sortable:default">Fix Version</th> <th class="table-filterable table-sortable:default" Style="width:6%">Resolution</th> <th>Created</th> <th>Updated</th> <th>OS</th> <th>Frequency</th> <th>Attachments</th> </tr> </thead> <tbody> </tbody> <tfoot> <tr> <td class="table-page:previous" style="cursor:pointer;"><img src="table/icons/previous.gif" alt="Previous"><small>Prev</small></td> <td colspan="13" style="text-align:center;">Page <span id="t1page"></span>&nbsp;of <span id="t1pages"></span></td> <td class="table-page:next" style="cursor:pointer;">Next <img src="table/icons/next.gif" alt="Previous"></td> </tr> <tr Style="background-color: #dddddd"> <td colspan="15"><span id="t1filtercount"></span>&nbsp;of <span id="t1allcount"></span>&nbsp;rows match filter(s)</td> </tr> <tr class="text-success"> <td colspan="15">Total Results : ${count}</td> </tr> </tfoot> </table> </body> </html>

    Read the article

  • Sharepoint BDC Error: The title property of entity tblStaff is set to an invalid value

    - by Christopher Rathermel
    I am just starting to create our Business Data Catalog(s) for our practice management system and I am running into an issue w/ our staff table. Background: I am using Business Data Catalog Definition Editor to create my ADF. I am using the RevertToSelf Authentication Mode. I have tried a few other tables and they seem to work just fine thus far.. only issue is w/ the staff table. If I removed all the columns for the staff entity except the ID and a few columns for the name it actually works. So it has a problem w/ one of my columns in tblStaff. I receive this error even when I set up an ADF w/ just this one entity. So w/ no associations.. When attempting to view the record: http://servername/ssp/admin/Content/tblstaff.aspx?StaffID={0} w/ {0} replaced w/ an actual staff ID I get the following error: The title property of entity tblStaff is set to an invalid value. Things I have tried: I noticed that I do have a column in my staff table called "Title" and removed it from ADF w/ no luck... Same error.. I tried to use bdc meta man to create my ADF and I got the same error... Any ideas? Chris

    Read the article

  • WPF TextBox TwoWay Binding Problem when ValidationRules are used

    - by ignis
    I seem to have a problem with TwoWay DataBinding - my application has a window with a bunch of textboxes that allow to edit values of the properties they are bound to. Everything works well except for textboxes that also have a validation rule defined, in which case no text is displayed in the textbox when the window opens (binding back-to-source still works fine for those). If I remove Validation rule, everything's back to normal. I searched for an answer to this for a few hours now, but somehow did not even find anyone else complaining of the same issue. I am completely new to WPF, and I am sure it is just a silly mistake I have somewhere in my code... I will greatly appreciate any feedback... <TextBox Margin="40,2,20,0" Grid.Column="0" Grid.Row="1" Background="#99FFFFFF" > <Binding Path="LastName" Mode="TwoWay" ValidatesOnDataErrors="true" UpdateSourceTrigger="LostFocus" > <Binding.ValidationRules> <validation:StringNameValidationRule /> </Binding.ValidationRules> </Binding> </TextBox>

    Read the article

  • Blob in Java/Hibernate/sql-server 2005

    - by Ramy
    Hi, I'm trying to insert an HTML blob into our SQL Server 2005 database. I've been using the [text] data type for the field the blob will eventually live in, and I've also put a @Lob annotation on the field in the domain model. The problem comes in when the HTML blob I'm attempting to store is larger than 65536 characters; that seems to be the character limit for a text data type when using the @Lob annotation. Ideally I'd like to keep the whole blob intact rather than chunk it up into multiple rows in the database. I appreciate any help or insight that might be provided. Thanks! _Ramy Allow me to clarify the annotation: @Lob @Column(length = Integer.MAX_VALUE) //per an answer on stackoverflow private String htmlBlob; Database side (SQL Server 2005): CREATE TABLE dbo.IndustrySectorTearSheetBlob( ... htmlBlob text NULL ... ) Still seeing truncation after 65536 characters... EDIT: I've printed out the contents of all possible strings (only 10 right now) that would be inserted into the database. Each string seems to contain all characters, judging by the fact that the closing html tag is present at the end of the string.
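    A hedged sketch of one thing often suggested for this combination (an assumption, not a confirmed fix for this particular driver setup): map the column to varchar(max)/nvarchar(max) instead of the legacy text type, since the max types do not carry the old text-handling limits that some drivers impose. The existing table column would need to be altered to match.

        // columnDefinition is SQL Server specific; use nvarchar(max) if Unicode is needed.
        @Lob
        @Column(columnDefinition = "varchar(max)")
        private String htmlBlob;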

    Read the article

  • Transaction Isolation Level of Serializable not working for me

    - by Shahriar
    I have a website which is used by all branches of a store, and what it does is record customer purchases into a table called myTransactions. The myTransactions table has a column named SerialNumber. For each purchase I create a record in the transactions table and assign a serial number to it. The stored procedure that does this calls a UDF to get a new serial number before inserting the record, like below: Create Procedure mytransaction_Insert as begin insert into myTransactions(column1,column2,column3,...SerialNumber) values( Value1 ,Value2,Value3,...., getTransactionNSerialNumber()) end Create function getTransactionNSerialNumber as begin RETURN isnull(SELECT TOP (1) SerialNumber FROM myTransactions READUNCOMMITTED ORDER BY SerialNumber DESC),0) + 1 end The website is being used by many users in different stores at the same time, and it is creating many duplicate serial numbers (the same SerialNumber). So I wrapped the insert in a SQL transaction at the READ COMMITTED level and still got duplicate serial numbers. I changed it to SERIALIZABLE in order to lock the resources, and I not only got duplicate serial numbers (!!HOW!!) but also sporadic deadlocks between calls to the same stored procedure. This is what I tried (with omission of the try/catch blocks and rollbacks): Create Procedure mytransaction_Insert as begin SET TRANSACTION ISOLATION LEVEL SERIALIZABLE BEGIN TRASNACTION ins insert into myTransactions(column1,column2,column3,...SerialNumber) values( Value1 ,Value2 , Value3, ...., getTransactionNSerialNumber()) COMMIT TRANSACTION ins SET TRANSACTION ISOLATION READCOMMITTED end I even copied the serial number logic directly into the stored procedure instead of calling the UDF, and still got duplicate serial numbers. So, how can a stored procedure achieve something like the C# lock() {} block? By the way, I have to implement the transaction serial number using this pattern and I can't change SerialNumber to an identity field or anything else. And for various reasons I need to generate the serial number inside the database and can't move SerialNumber generation to the application level. Thank you.
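    A sketch of one common pattern for this (the @Value* column placeholders are kept from the question): read the next number and insert it inside a single transaction, taking an update lock on the scanned rows. A plain SELECT, even under SERIALIZABLE, only takes shared locks, so two sessions can both read the same maximum and then collide when they insert; UPDLOCK serializes the readers instead.

        CREATE PROCEDURE mytransaction_Insert
        AS
        BEGIN
            BEGIN TRANSACTION;

            DECLARE @next bigint;

            -- UPDLOCK + HOLDLOCK keeps this read exclusive against other callers until commit.
            SELECT @next = ISNULL(MAX(SerialNumber), 0) + 1
            FROM myTransactions WITH (UPDLOCK, HOLDLOCK);

            INSERT INTO myTransactions (column1, column2, column3, SerialNumber)
            VALUES (@Value1, @Value2, @Value3, @next);   -- @Value* are placeholders

            COMMIT TRANSACTION;
        END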

    Read the article

  • ERROR! (Using Excel's named ranges from C#)

    - by mcoolbeth
    In the following, I am trying to persist a set of objects in an excel worksheet. Each time the function is called to store a value, it should allocate the next cell of the A column to store that object. However, an exception is thrown by the Interop library on the first call to get_Range(). (right after the catch block) Does anyone know what I am doing wrong? private void AddName(string name, object value) { Excel.Worksheet jresheet; try { jresheet = (Excel.Worksheet)_app.ActiveWorkbook.Sheets["jreTemplates"]; } catch { jresheet = (Excel.Worksheet)_app.ActiveWorkbook.Sheets.Add(Type.Missing, Type.Missing, Type.Missing, Type.Missing); jresheet.Visible = Microsoft.Office.Interop.Excel.XlSheetVisibility.xlSheetVeryHidden; jresheet.Name = "jreTemplates"; jresheet.Names.Add("next", "A1", true, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing); } Excel.Range cell = jresheet.get_Range("next", Type.Missing); cell.Value2 = value; string address = ((Excel.Name)cell.Name).Name; _app.ActiveWorkbook.Names.Add(name, address, false, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing); cell = cell.get_Offset(1, 0); jresheet.Names.Add("next", ((Excel.Name)cell.Name).Name, true, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing, Type.Missing); }

    Read the article

  • question about InnoDB deadlock in MySQL?

    - by WilliamLou
    I found an interesting problem in the MySQL InnoDB engine; could anyone explain why the engine claims there is a deadlock? First, I created a table with a single row and a single column: CREATE TABLE `SeqNum` (`current_seq_num` bigint(30) NOT NULL default '0', PRIMARY KEY (`current_seq_num`) ) ENGINE=InnoDB DEFAULT CHARSET=utf8 | Now I have two MySQL connector threads. In thread1: mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> select `current_seq_num` into @curr_seq FROM SeqNum FOR UPDATE; Query OK, 1 row affected (0.00 sec) Now, in thread2, I did exactly the same: mysql> begin; Query OK, 0 rows affected (0.00 sec) mysql> select `current_seq_num` into @curr_seq FROM SeqNum FOR UPDATE; Before the default innodb_lock_wait_timeout expires, thread2 just waits for thread1 to release its exclusive lock on the table, which is normal. However, in thread1, if I issue the following update query: mysql> update SeqNum set `current_seq_num` = 8; ERROR 1213 (40001): Deadlock found when trying to get lock; try restarting transaction Now thread2 gets its select query finished because thread1 quits. In addition, in thread1, if I issue the update query with a where clause, it executes just fine: mysql> update SeqNum set `current_seq_num` = 8 where `current_seq_num` =5 Query OK, 1 row affected (0.00 sec) Could anyone explain this?

    Read the article

  • displaying database content in wxHaskell

    - by snorlaks
    Hello, Im using tutorials from wxHaskell and want to display content of table movies in the grid. HEre is my code : {-------------------------------------------------------------------------------- Test Grid. --------------------------------------------------------------------------------} module Main where import Graphics.UI.WX import Graphics.UI.WXCore hiding (Event) import Database.HDBC.Sqlite3 (connectSqlite3) import Database.HDBC main = start gui gui :: IO () gui = do f <- frame [text := "Grid test", visible := False] -- grids g <- gridCtrl f [] gridSetGridLineColour g (colorSystem Color3DFace) gridSetCellHighlightColour g black appendColumns g (movies) -- Here is error: -- Couldn't match expected type [String]' -- against inferred typeIO [[String]]' --appendRows g (map show [1..length (tail movies)]) --mapM_ (setRow g) (zip [0..] (tail movies)) gridAutoSize g -- layout set f [layout := column 5 [fill (dynamic (widget g))] ] focusOn g set f [visible := True] -- reduce flicker at startup. return () where movies = do conn <- connectSqlite3 "Spop.db" r <- quickQuery' conn "SELECT id, title, year, description from Movie where id = 1" [] let myResult = map convRow r return myResult setRow g (row,values) = mapM_ ((col,value) - gridSetCellValue g row col value) (zip [0..] values) {-------------------------------------------------------------------------------- Library?f --------------------------------------------------------------------------------} gridCtrl :: Window a - [Prop (Grid ())] - IO (Grid ()) gridCtrl parent props = feed2 props 0 $ initialWindow $ \id rect - \props flags - do g <- gridCreate parent id rect flags gridCreateGrid g 0 0 0 set g props return g appendColumns :: Grid a - [String] - IO () appendColumns g [] = return () appendColumns g labels = do n <- gridGetNumberCols g gridAppendCols g (length labels) True mapM_ ((i,label) - gridSetColLabelValue g i label) (zip [n..] labels) appendRows :: Grid a - [String] - IO () appendRows g [] = return () appendRows g labels = do n <- gridGetNumberRows g gridAppendRows g (length labels) True mapM_ ((i,label) - gridSetRowLabelValue g i label) (zip [n..] labels) convRow :: [SqlValue] - [String] convRow [sqlId, sqlTitle, sqlYear, sqlDescription] = [intid, title, year, description] where intid = (fromSql sqlId)::String title = (fromSql sqlTitle)::String year = (fromSql sqlYear)::String description = (fromSql sqlDescription)::String What should I do to get rif of error in code above (24th line)
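    A sketch of the type fix (the column headers below are an assumption, and some of the grid setup is trimmed): movies has type IO [[String]], so it has to be run with <- inside the gui do-block before its result can be handed to appendColumns, appendRows and setRow, which all expect plain lists:

        gui :: IO ()
        gui = do
            f <- frame [text := "Grid test", visible := False]
            g <- gridCtrl f []
            -- (grid colour / highlight setup omitted for brevity)
            rows <- movies                        -- run the IO action, yielding [[String]]
            appendColumns g ["id", "title", "year", "description"]
            appendRows g (map show [1 .. length rows])
            mapM_ (setRow g) (zip [0..] rows)
            gridAutoSize g
            set f [layout := column 5 [fill (dynamic (widget g))]]
            focusOn g
            set f [visible := True]
            return ()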

    Read the article

  • How to make a comment reply query in MySQL?

    - by Prashant
    I have comment reply functionality (only one level deep). A comment can have any number of replies, but a reply cannot have further replies of its own. So my database table structure is like below: Id ParentId Comment 1 0 this is some sample comment text 2 0 this is some sample comment text 3 0 this is some sample comment text 4 1 this is some sample comment text 5 0 this is some sample comment text 6 3 this is some sample comment text 7 1 this is some sample comment text In the structure above, comment ids 1 (two replies) and 3 (one reply) have replies. So to fetch the comments and their replies, one simple method is to first fetch all the comments having ParentId 0 and then, in a loop, fetch all the replies of each particular comment id. But that ends up running hundreds of queries if there are around 200 comments on a particular record. So I want to make a query which will fetch comments with their replies sequentially, as follows: Id ParentId Comment 1 0 this is some sample comment text 4 1 this is some sample comment text 7 1 this is some sample comment text 2 0 this is some sample comment text 3 0 this is some sample comment text 6 3 this is some sample comment text 5 0 this is some sample comment text I also have a comment date column in my comment table, if anyone wants to use it in the query. So finally, I want to fetch all the comments and their replies with one single MySQL query. Please tell me how I can do that? Thanks
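    A sketch of a single query that produces that ordering (the table name comments is an assumption): sort on the id of the thread root first (a top-level comment's own Id, or the ParentId for a reply), then put the parent before its replies, then order replies by Id. The comment date could replace the final Id if reply order should follow time instead:

        SELECT Id, ParentId, Comment
        FROM comments
        ORDER BY IF(ParentId = 0, Id, ParentId),   -- group each thread together
                 ParentId,                         -- the parent (ParentId = 0) first
                 Id;                               -- replies in insertion order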

    Read the article

  • R: convert data.frame columns from factors to characters

    - by Mike Dewar
    Hi, I have a data frame. Let's call him bob: > head(bob) phenotype exclusion GSM399350 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399351 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399352 3- 4- 8- 25- 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399353 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399354 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- GSM399355 3- 4- 8- 25+ 44+ 11b- 11c- 19- NK1.1- Gr1- TER119- I'd like to concatenate the rows of this data frame (this will be another question). But look: > class(bob$phenotype) [1] "factor" Bob's columns are factors. So, for example: > as.character(head(bob)) [1] "c(3, 3, 3, 6, 6, 6)" "c(3, 3, 3, 3, 3, 3)" [3] "c(29, 29, 29, 30, 30, 30)" I don't begin to understand this, but I guess these are indices into the levels of the factors of the columns (of the court of king caractacus) of bob? Not what I need. Strangely I can go through the columns of bob by hand, and do bob$phenotype <- as.character(bob$phenotype) which works fine. And, after some typing, I can get a data.frame whose columns are characters rather than factors. So my question is: how can I do this automatically? How do I convert a data.frame with factor columns into a data.frame with character columns without having to manually go through each column? Bonus question: why does the manual approach work?
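    A compact sketch of the automatic version: find the factor columns and convert just those in one pass (the as.character(head(bob)) output above shows the factors' underlying integer codes, which is why it looks so odd). Reading the data with stringsAsFactors = FALSE avoids the issue in the first place.

        # Convert every factor column of bob to character in place
        i <- sapply(bob, is.factor)
        bob[i] <- lapply(bob[i], as.character)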

    Read the article

  • What am I doing wrong with HttpResponse content and headers when downloading a file?

    - by Slauma
    I want to download a PDF file from a SQL Server database which is stored in a binary column. There is a LinkButton on an aspx page. The event handler of this button looks like this: protected void LinkButtonDownload(object sender, EventArgs e) { ... byte[] aByteArray; // Read binary data from database into this ByteArray // aByteArray has the size: 55406 byte Response.ClearHeaders(); Response.ClearContent(); Response.BufferOutput = true; Response.AddHeader("Content-Disposition", "attachment; filename=" + "12345.pdf"); Response.ContentType = "application/pdf"; using (BinaryWriter aWriter = new BinaryWriter(Response.OutputStream)) { aWriter.Write(aByteArray, 0, aByteArray.Length); } } A "File Open/Save dialog" is offered in my browser. When I store this file "12345.pdf" to disk, the file has a size of 71523 Byte. The additional 16kB at the end of the PDF file are the HTML code of my page (as I can see when I view the file in an editor). I am confused because I was believing that ClearContent and ClearHeaders would ensure that the page content is not sent together with the file content. What am I doing wrong here? Thanks for help!
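    A sketch of the usual fix: terminate the response after the binary content is written, so the page's own rendered HTML is never appended to the download (ClearHeaders/ClearContent only remove what was buffered before this point, not what the page writes afterwards):

        Response.Clear();
        Response.ClearHeaders();
        Response.AddHeader("Content-Disposition", "attachment; filename=12345.pdf");
        Response.AddHeader("Content-Length", aByteArray.Length.ToString());
        Response.ContentType = "application/pdf";
        Response.BinaryWrite(aByteArray);
        Response.Flush();
        Response.End();   // stop further page rendering from reaching the output stream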

    Read the article

  • Problem with Ajax on the hosting server

    - by nelly
    When I implemented the chat function, I used Ajax to send messages from one file to another. It works well on localhost, but when I upload it to the remote server it doesn't work. Can you tell me why? Does Ajax need special configuration? These are the files I used: Ajax.js, which has the "ajax_send" function used in chatbox.js; chatbox.js, which consists of the functions I use to send data from one PHP file to another and to display the state (a user signing in, a new message being sent, and so on); user.php, which is responsible for writing the user name into the text file usersonline.txt and then displaying the online users in the online users column; send.php, which writes to room1.txt; and recive.php, which reads room1.txt and then writes the content into the chat box. I believe the problem comes from the Ajax code in the Ajax.js file, so please help me find the problem and how to solve it. Does Ajax need special server settings? Because it was working on localhost!

    Read the article

  • Parsing tab delimited file with double quotes in Perl

    - by sfactor
    I have a data set that is tab delimited with the user-agent strings in double quotes. I need to parse each of these columns and based on the answer of my other post I used the Text::CSV module. 94410634 0 GET "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1; Trident/4.0; GTB6.6; .NET CLR 2.0.50727; .NET CLR 3.0.04506.648; .NET CLR 3.5.21022; .NET CLR 3.0.4506.2152; .NET CLR 3.5.30729; AskTB5.5)" 1 The code is a simple one. #!/usr/bin/perl use strict; use warnings; use Text::CSV; my $csv = Text::CSV->new(sep_char => "\t"); while (<>) { if ($csv->parse($_)) { my @columns = $csv->fields(); print "@columns\n"; } else { my $err = $csv->error_input; print "Failed to parse line: $err"; } } But i get the Failed to parse line: error when I try it on this dataset. what am I doing wrong? I need to extract the 4th column containing the user-agent strings for further processing.
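    One thing worth checking, sketched below (an assumption about the installed Text::CSV version): recent Text::CSV constructors expect their attributes as a single hash reference, and binary => 1 is usually wanted for real-world input, so a plain key/value list can leave the separator at the default comma and make every tab-separated line fail to parse:

        #!/usr/bin/perl
        use strict;
        use warnings;
        use Text::CSV;

        # Attributes go in a hashref; binary => 1 copes with stray non-ASCII bytes.
        my $csv = Text::CSV->new({ sep_char => "\t", binary => 1 })
            or die "Cannot construct parser: " . Text::CSV->error_diag();

        while (my $line = <>) {
            chomp $line;
            if ($csv->parse($line)) {
                my @columns = $csv->fields();
                print "$columns[3]\n";    # 4th column: the user-agent string
            } else {
                warn "Failed to parse line: " . $csv->error_input() . "\n";
            }
        }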

    Read the article

  • Using multiple aggregate functions in an (ANSI) SQL statement

    - by morpheous
    I have aggregate functions foo(), foobar(), fredstats(), barneystats() I want to create a domain specific query language (DSQL) above my DB, to facilitate using a domain language to query the DB. The 'language' comprises of algebraic expressions (or more specifically SQL like criteria) which I use to generate (ANSI) SQL statements which are sent to the db engine. The following lines are examples of what the language statements will look like, and hopefully, it will help further clarify the concept: **Example 1** DQL statement: foobar('yellow') between 1 and 3 and fredstats('weight') > 42 Translation: fetch all rows in an underlying table where computed values for aggregate function foobar() is between 1 and 3 AND computed value for AGG FUNC fredstats() is greater than 42 **Example 2** DQL statement: fredstats('weight') < barneystats('weight') AND foo('fighter') in (9,10,11) AND foobar('green') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 3** DQL statement: foobar('green') / foobar('red') <> 42 Translation: Fetch all rows where the specified criteria matches **Example 4** DQL statement: foobar('green') - foobar('red') >= 42 Translation: Fetch all rows where the specified criteria matches Given the following information: The table upon which the queries above are being executed is called 'tbl' table 'tbl' has the following structure (id int, name varchar(32), weight float) The result set returns only the tbl.id, tbl.name and the names of the aggregate functions as columns in the result set - so for example the foobar() AGG FUNC column will be called foobar in the result set. So for example, the first DQL query will return a result set with the following columns: id, name, foobar, fredstats Given the above, my questions then are: What would be the underlying SQL required for Example1 ? What would be the underlying SQL required for Example3 ? Given an algebraic equation comprising of AGGREGATE functions, Is there a way of generalizing the algorithm needed to generate the required ANSI SQL statement(s)? I am using PostgreSQL as the db, but I would prefer to use ANSI SQL wherever possible.
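    A sketch of the general shape such generated SQL could take, under the assumption that each DSQL aggregate draws its argument from a related detail table (the tbl_detail table and its columns below are invented for illustration): conditions on aggregate results belong in a HAVING clause rather than WHERE, and each DSQL function becomes one aggregate expression repeated in both the select list (to expose the named column) and the HAVING clause. Example 1 might translate to:

        SELECT t.id,
               t.name,
               COUNT(CASE WHEN d.colour = 'yellow' THEN 1 END)            AS foobar,
               SUM(CASE WHEN d.metric = 'weight' THEN d.value ELSE 0 END) AS fredstats
        FROM tbl t
        JOIN tbl_detail d ON d.tbl_id = t.id          -- hypothetical detail table
        GROUP BY t.id, t.name
        HAVING COUNT(CASE WHEN d.colour = 'yellow' THEN 1 END) BETWEEN 1 AND 3
           AND SUM(CASE WHEN d.metric = 'weight' THEN d.value ELSE 0 END) > 42;

    Example 3 follows the same pattern, with the arithmetic applied to the aggregate expressions inside HAVING, which also suggests the generalisation: DSQL operands map to aggregate expressions, filters go into HAVING, and the exposed names go into the select list.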

    Read the article

  • SQL View with Data from two tables

    - by Alex
    Hello! I can't seem to crack this - I have two tables (Persons and Companies), and I'm trying to create a view that: 1) shows all persons 2) also returns companies by themselves once, regardless of how many persons are related to it 3) orders by name across both tables To clarify, some sample data: (Table: Companies) Id Name 1 Banana 2 ABC Inc. 3 Microsoft 4 Bigwig (Table: Persons) Id Name RelatedCompanyId 1 Joe Smith 3 2 Justin 3 Paul Rudd 4 4 Anjolie 5 Dustin 4 The output I'm looking for is something like this: Name PersonName CompanyName RelatedCompanyId ABC Inc. NULL ABC Inc. NULL Anjolie Anjolie NULL NULL Banana NULL Banana NULL Bigwig NULL Bigwig NULL Dustin Dustin Bigwig 4 Joe Smith Joe Smith Microsoft 3 Justin Justin NULL NULL Microsoft NULL Microsoft NULL Paul Rudd Paul Rudd Bigwig 4 As you can see, the new "Name" column is ordered across both tables (the company names appear correctly in between the person names), and each company appears exactly once, regardless of how many people are related to it. Can this even be done in SQL?! P.S. I'm trying to create a view so I can use this later for easy data retrieval, fulltext indexing and make the programming side simpler by just querying the view.
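    A sketch of a UNION-based view that matches the sample output (the view name is arbitrary, and the ordering is applied when the view is queried, since a plain view cannot guarantee row order):

        CREATE VIEW PersonsAndCompanies AS
            SELECT p.Name             AS Name,
                   p.Name             AS PersonName,
                   c.Name             AS CompanyName,
                   p.RelatedCompanyId AS RelatedCompanyId
            FROM Persons p
            LEFT JOIN Companies c ON c.Id = p.RelatedCompanyId
            UNION ALL
            SELECT c.Name, NULL, c.Name, NULL
            FROM Companies c;

        -- Usage: SELECT * FROM PersonsAndCompanies ORDER BY Name;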

    Read the article

  • sqlite3 JOIN, GROUP_CONCAT using distinct with custom separator

    - by aiwilliams
    Given a table of "events" where each event may be associated with zero or more "speakers" and zero or more "terms", those records associated with the events through join tables, I need to produce a table of all events with a column in each row which represents the list of "speaker_names" and "term_names" associated with each event. However, when I run my query, I have duplication in the speaker_names and term_names values, since the join tables produce a row per association for each of the speakers and terms of the events: 1|Soccer|Bobby|Ball 2|Baseball|Bobby - Bobby - Bobby|Ball - Bat - Helmets 3|Football|Bobby - Jane - Bobby - Jane|Ball - Ball - Helmets - Helmets The group_concat aggregate function has the ability to use 'distinct', which removes the duplication, though sadly it does not support that alongside the custom separator, which I really need. I am left with these results: 1|Soccer|Bobby|Ball 2|Baseball|Bobby|Ball,Bat,Helmets 3|Football|Bobby,Jane|Ball,Helmets My question is this: Is there a way I can form the query or change the data structures in order to get my desired results? Keep in mind this is a sqlite3 query I need, and I cannot add custom C aggregate functions, as this is for an Android deployment. I have created a gist which makes it easy for you to test a possible solution: https://gist.github.com/4072840

    Read the article
