Search Results

Search found 22078 results on 884 pages for 'composite primary key'.


  • Audio/video streaming on Windows platform

    - by bushtucker
    I'm building an interactive language learning application to be used in a classroom environment. The idea is that a teacher should be able to talk to the students (= audio stream to all students), let students talk to each other (= audio P2P) in groups of two or more, and let students watch a video coming from the DVD player or from a media server. It should be possible to save the audio/video streams. The teacher should also be able to monitor, take over or block the desktop of the students. The platform is Windows and it's a desktop application, not a web application. The audio delay should be as minimal as possible. Optionally a student sitting at home should be supported, but it's not a high priority.

    I am now finished with the classroom control part of the application (login, monitor, block, ...) and want to start the audio and video part. I've been evaluating several options like DirectX, GStreamer and SIP, but now I have to make a decision.

    DirectX seems an obvious choice for the Windows platform, but it only lets me capture and play back audio and video. The encoding/decoding/network part I would have to do myself.

    GStreamer contains all kinds of options to capture/encode/stream/save audio and video streams. I've experimented a bit with it (ossbuild) and it does seem to involve a lot of trial and error to make something work:
      - microphone capture (via directsoundsrc) produces crackling noises on some computers
      - the rtpL16 payloader didn't work well
      - streaming raw audio over the network only worked at a sampling rate of 8000, no higher
      - there are a lot of errors when receiving mpeg4 video (bad I-frame), on some computers worse than others
    It is my impression that GStreamer is primarily targeted at Linux platforms. Development and support for the Windows platform seem to be a little behind. Nevertheless it's a powerful framework that could save me months or years of work.

    SIP seems to be able to do everything I want, but it is targeted towards telephony and IM. I don't know how flexible SIP is. It seems to me that the SIP layer would just be overhead, as I already have a central (teacher) application that can control and set up all the streams. The interesting parts of frameworks like opalvoip and freeswitch are the actual audio/video capture, the encoding and the transmission. Does anyone know how these interesting parts relate to a framework like GStreamer? Are they easy to integrate into a custom application? Are they flexible enough?

    Does anyone have experience with all or one of these technologies? Maybe there are even other options I can look at? Many thanks for your advice.

    Read the article

  • many-to-many performance concerns with fluent nhibernate.

    - by Ciel
    I have a situation where I have several many-to-many associations. In the upwards of 12 to 15. Reading around I've seen that it's generally believed that many-to-many associations are not 'typical', yet they are the only way I have been able to create the associations appropriate for my case, so I'm not sure how to optimize any further. Here is my basic scenario. class Page { IList<Tag> Tags { get; set; } IList<Modification> Modifications { get; set; } IList<Aspect> Aspects { get; set; } } This is one of my 'core' classes, and coincidentally one of my core tables. Virtually half of the objects in my code can have an IList<Page>, and some of them have IList<T> where T has its own IList<Page>. As you can see, from an object oriented standpoint, this is not really a problem. But from a database standpoint this begins to introduce a lot of junction tables. So far it has worked fine for me, but I am wondering if anyone has any ideas on how I could improve on this structure. I've spent a long time thinking and in order to achieve the appropriate level of association required, I cannot think of any way to improve it. The only thing I have come up with is to make intermediate classes for each object that has an IList<Page>, but that doesn't really do anything that the HasManyToMany does not already do except introduce another class. It does not extend the functionality and, from what I can tell, it does not improve performance. Any thoughts? I am also concerned about Primary Key limits in this scenario. Most everything needs to be able to have these properties, but the Pages cannot be unique to each object, because they are going to be frequently shared and joined between multiple objects. All relationships are one-sided. (That is, a Page has no knowledge of what owns it). Because of this, I also have no Inverse() mapped HasManyToMany collections. Also, I have read the similar question : Usage of ORMs like NHibernate when there are many associations - performance concerns But it really did not answer my concerns.
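
    For reference, a HasManyToMany mapping such as the Page/Tag collection above usually resolves to nothing more than a junction table with a composite primary key; a minimal SQL sketch (the table and column names here are illustrative assumptions, not taken from the poster's schema):

      CREATE TABLE PageTag (
          Page_id INT NOT NULL,
          Tag_id  INT NOT NULL,
          PRIMARY KEY (Page_id, Tag_id),               -- composite primary key: one row per pairing
          FOREIGN KEY (Page_id) REFERENCES Page (Id),
          FOREIGN KEY (Tag_id)  REFERENCES Tag (Id)
      );

    Each additional IList<Page> adds one more table of this shape. The rows are narrow (two integers), so the practical cost tends to be the joins a given query needs rather than the junction tables themselves, which is why shared, one-sided Page collections can still perform acceptably.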

    Read the article

  • JSON or YAML encoding in GWT/Java on both client and server

    - by KennethJ
    I'm looking for a super simple JSON or YAML library (not particularly bothered which one) written in Java that can be used both in GWT on the client and in its original Java form on the server.

    What I'm trying to do is this: I have my models, which are shared between the client and the server, and these are the primary source of data interchange. I want to design the web service in between to be as simple as possible, and decided to take the RESTful approach. My problem is that I know our application will grow substantially in the future, and writing all the getters, setters, serialization, factories, etc. by hand fills me with absolute dread. So in order to avoid it, I decided to implement annotations to keep track of attributes on the models. The reason I can't just serialize everything directly, using GWT's own serializer or one which works through reflection, is because we need a certain amount of logic going on in the serialization process, i.e. whether references to other models get serialized during the serialization of the original model, or whether an ID is just passed, and general simple things like that. I've then written an annotation processor to preprocess my shared models and generate an implementing class with all the getters, setters, serialization, lazy-loading, etc.

    To make a long story short, I need some type of simple YAML or JSON library which allows me to encode and decode manually, so I can generate this code through my annotation processor. I have had a look around the interwebs, but every single one I ran into relied on some reflection which, while all fine and dandy, makes it pretty much useless for GWT. And in the case of GWT's own JSON library, it uses JSNI for speed purposes, making it useless server side. One solution I did think about involved writing two sets of serialization methods on the models, one for the client and one for the server, but I'd rather not do that.

    Also, I'm pretty new to GWT, and even though I have done a lot of Java, it was back in the 1.2 days, so it's a bit rusty. So if you think I'm going about this problem completely the wrong way, I'm open to suggestions.

    Read the article

  • How to fine tune FluentNHibernate's auto mapper?

    - by Venemo
    Okay, so yesterday I managed to get the latest trunk builds of NHibernate and FluentNHibernate to work with my latest little project. (I'm working on a bug tracking application.) I created a nice data access layer using the Repository pattern. I decided that my entities are nothing special, and also that with the current maturity of ORMs, I don't want to hand-craft the database. So I chose to use FluentNHibernate's auto mapping feature with NHibernate's "hbm2ddl.auto" property set to "create". It really works like a charm. I put the NHibernate configuration in my app domain's config file, set it up, and started playing with it. (For the time being, I created some unit tests only.) It created all tables in the database and everything I need for it. It even mapped my many-to-many relationships correctly. However, there are a few small glitches:
      - All of the columns created in the DB allow null. I understand that it can't predict which properties should allow null and which shouldn't, but at least I'd like to tell it that it should allow null only for those types for which null makes sense in .NET (e.g. non-nullable value types shouldn't allow null).
      - All of the nvarchar and varbinary columns it created have a default length of 255. I would prefer to have them on max instead of that.

    Is there a way to tell the auto mapper about the two simple rules above? If the answer is no, will it work correctly if I modify the tables it created? (So, if I set some columns not to allow null, and change the allowed length for some others, will it correctly work with them?)

    EDIT: I managed to achieve the above by using Fluent NHibernate's convention API. Thanks to everyone who helped! However, there is one more thing: after checking out the convention API, I really would like my IDs to be called "ID", not "Id", but it seems to me that PrimaryKey.Name.Is(x => "ID") is not working at all. If I add it to the conventions collection and rewrite my entities' properties to "ID" instead of "Id", it throws an exception that there is no primary key mapped. Any thoughts on this?

    Read the article

  • Large number of UPDATE queries slowing down page

    - by Bryan Lewis
    I am reading and validating large fixed-width text files (ranging from 10-50K lines) that are submitted via our ASP.net website (coded in VB.Net). I do an initial scan of the file to check for basic issues (line length, etc). Then I import each row into an MS SQL table. Each DB row basically consists of a record_ID (primary, auto-incrementing) and about 50 varchar fields.

    After the insert is done, I run a validation function on the file that checks each field in each row based on a bunch of criteria (trimmed length, isnumeric, range checks, etc). If it finds an error in any field, it inserts a record into the Errors table, which has an error_ID, the record_ID and an error message. In addition, if the field fails in a particular way, I have to do a "reset" on that field. A reset might consist of blanking the entire field, or simply replacing the value with another value (e.g. replacing the string with a new one that has all illegal chars taken out).

    I have a 5,000 line test file. The upload, initial check, and import take about 5-6 seconds. The detailed error check and insert into the Errors table take about 5-8 seconds (this file has about 1200 errors in it). However, the "resets" part takes about 40-45 seconds for 750 fields that need to be reset. When I comment out the resets function (returning immediately without actually calling the UPDATE stored proc), the process is very fast. With the resets turned on, the page takes 50 seconds to return.

    My UPDATE stored proc is using some recommended code from http://sommarskog.se/dynamic_sql.html, whereby it uses CASE instead of dynamic SQL:

      UPDATE dbo.Records
      SET dbo.Records.file_ID = CASE @field_name
                                  WHEN 'file_ID' THEN @field_value
                                  ELSE file_ID
                                END,
      ... (all 50 varchar field CASE statements here) ...
      WHERE dbo.Records.record_ID = @record_ID

    Is there any way I can improve performance here? Can I somehow group all of these UPDATE calls into a single transaction? Should I be reworking the UPDATE query somehow? Or is it just the sheer quantity of 750+ UPDATEs and things are just slow (it's a quad proc server with 8GB ram)? Any suggestions appreciated.
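
    One relatively low-risk change suggested by the question itself is to stop letting each of the 750+ UPDATE calls auto-commit on its own and to run the whole reset pass inside one explicit transaction, so the log is flushed once at COMMIT rather than once per statement. A sketch of the idea in T-SQL (dbo.ResetRecordField is a stand-in name; the question does not give the actual procedure name):

      SET XACT_ABORT ON;            -- abort and roll back the batch if any reset fails
      BEGIN TRANSACTION;

          -- each call still runs the existing CASE-based UPDATE proc
          EXEC dbo.ResetRecordField @record_ID = 1001, @field_name = 'file_ID', @field_value = '';
          EXEC dbo.ResetRecordField @record_ID = 1002, @field_name = 'field_07', @field_value = '0';
          -- ... remaining resets for this upload ...

      COMMIT TRANSACTION;

    The same batching can be done from VB.Net by sharing a single SqlConnection and SqlTransaction across all the reset commands; how much of the 40-45 seconds it recovers depends on how much of that time is per-call commit overhead rather than the UPDATE itself.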

    Read the article

  • How to create a system-wide independent universal counter object, primarily for database keys?

    - by andora
    I would like to create/use a system-wide, independent, universal 'counter object' that can be called via COM in a thread-safe manner.

    The counter object will be passed an ID to identify which counter to return, handle the counting, 'persist' the count (occasionally), have reasonable performance (as fast as possible), perhaps capable of 1000 counts per second or better (1 ms), and be accessible cross-process/out-of-process. The current count status must be persisted between object restarts/shutdowns. The counter object is likely to be a 'singleton' type object implemented in some form of free-threaded dictionary, containing maybe 10 counters (perhaps 50 max). The count needs to be monotonic and consistent (i.e. guaranteed unique sequential values). Each counter should have a few methods, like reset, inc, dec, set, clear, remove. As a luxury, I would like to have a variable increment (i.e. a 'step by' value). To support thread-safety, perhaps some form of critical-section or mutex call. It just needs to return a long/4-byte signed integer.

    I really want something that can be called from anywhere, including VBScript, so I figure COM is my preferred solution. The primary use of this is for database keys. I am unable to use autoinc or guid type keys and have ruled out database-generated counting systems at this point.

    I've spent days researching this and I have really struggled to find a solution. The best I can find is a free-threaded dictionary object that can be instantiated using COM+ from Motobit - it seems to offer all the 'basics' and I guess I could create some form of wrapper for this.

    So, here are my questions:
      - Does such a general-purpose counter object already exist? Can you direct me to it? (MS did do an IIS/ASP object called 'MSWC.Counter', but it isn't a cross-process/out-of-process component and isn't thread-safe. But if it was, it would do!)
      - What is the best way of creating such a component? (I'd prefer VB6 right now [don't ask!], but can do it in VB.NET 2005 if I had to.) I don't have the skills/knowledge/tools to use anything else.

    I am desperate for a workable solution. I need specific guidance! If anybody can code something up for me I am prepared to pay for it.

    Read the article

  • A better way to delete a list of elements from multiple tables

    - by manyxcxi
    I know this looks like a 'please write the code' request, but some basic pointer/principles for doing this the right way should be enough to get me going. I have the following stored procedure: CREATE PROCEDURE `TAA`.`runClean` (IN idlist varchar(1000)) BEGIN DECLARE EXIT HANDLER FOR NOT FOUND ROLLBACK; DECLARE EXIT HANDLER FOR SQLEXCEPTION ROLLBACK; DECLARE EXIT HANDLER FOR SQLWARNING ROLLBACK; START TRANSACTION; DELETE FROM RunningReports WHERE run_id IN (idlist); DELETE FROM TMD_INDATA_INVOICE WHERE run_id IN (idlist); DELETE FROM TMD_INDATA_LINE WHERE run_id IN (idlist); DELETE FROM TMD_OUTDATA_INVOICE WHERE run_id IN (idlist); DELETE FROM TMD_OUTDATA_LINE WHERE run_id IN (idlist); DELETE FROM TMD_TEST WHERE run_id IN (idlist); DELETE FROM RunHistory WHERE id IN (idlist); COMMIT; END $$ It is called by a PHP script to clean out old run history. It is not particularly efficient as you can see and I would like to speed it up. The PHP script gathers the ids to remove from the tables with the following query: $query = "SELECT id, stop_time FROM RunHistory WHERE config_id = $configId AND save = 0 AND NOT(stop_time IS NULL) ORDER BY stop_time"; It keeps the last five run entries and deletes all the rest. So using this query to bring back all the IDs, it determines which ones to delete and keeps the 'newest' five. After gathering the IDs it sends them to the stored procedure to remove them from the associated tables. I'm not very good with SQL, but I ASSUME that using an IN statement and not joining these tables together is probably the least efficient way I can do this, but I don't know enough to ask anything but "how do I do this better?" If possible, I would like to do this all in my stored procedure using a query to gather all the IDs except for the five 'newest', then delete them. Another twist, run entries can be marked save (save = 1) and should not be deleted. The RunHistory table looks like this: CREATE TABLE `TAA`.`RunHistory` ( `id` int(11) NOT NULL auto_increment, `start_time` datetime default NULL, `stop_time` datetime default NULL, `config_id` int(11) NOT NULL, [...] `save` tinyint(1) NOT NULL default '0', PRIMARY KEY (`id`) ) ENGINE=InnoDB AUTO_INCREMENT=0 DEFAULT CHARSET=utf8;
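
    Before restructuring anything, it is worth checking how the comma-separated idlist parameter behaves: inside the procedure, WHERE run_id IN (idlist) compares run_id against the single string value, not against a list, so the deletes may match far fewer rows than intended. A common MySQL workaround, sketched against the same tables (the save = 0 guard reflects the "keep saved runs" rule described above):

      -- FIND_IN_SET matches run_id against each element of the
      -- comma-separated idlist rather than against the string as a whole
      DELETE FROM RunningReports     WHERE FIND_IN_SET(run_id, idlist);
      DELETE FROM TMD_INDATA_INVOICE WHERE FIND_IN_SET(run_id, idlist);
      -- ... same pattern for the remaining TMD_* tables ...
      DELETE FROM RunHistory         WHERE FIND_IN_SET(id, idlist) AND save = 0;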

    Read the article

  • using CSS to center FLOATED input elements wrapped in a DIV

    - by Tim
    There's no shortage of questions and answers about centering but I've not been able to get it to work given my specific circumstances, which involve floating. I want to center a container DIV that contains three floated input elements (split-button, text, checkbox), so that when my page is resized wider, they go from this: ||.....[ ][v] [ ] [ ] label .....|| to this ||......................[ ][v] [ ] [ ] label.......................|| They float fine, but when the page is made wider, they stay to the left: ||.....[ ][v] [ ] [ ] label .......................................|| If I remove the float so that the input elements are stacked rather than side-by-side: [ ][v] [ ] [ ] label then they DO center correctly when the page is resized. SO it is the float being applied to the elements of the DIV#hbox inside the container that is messing up the centering. Is what I want to do impossible because of the way float is designed to work? Here is my DOCTYPE, and the markup does validate at w3c: <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01//EN" "http://www.w3.org/TR/html4/strict.dtd"> Here is my markup: <div id="term1-container"> <div class="hbox"> <div> <button id="operator1" class="operator-split-button">equals</button> <button id="operator1drop">show all operators</button> </div> <div><input type="text" id="term1"></input></div> <div><input type="checkbox" id="meta2"></input><label for="meta2" class="tinylabel">meta</label></div> </div> </div> And here's the (not-working) CSS: #term1-container {text-align: center} .hbox {margin: 0 auto;} .hbox div {float:left; } I have also tried applying display: inline-block to the floated button, text-input, and checkbox; and even though I think it applies only to text, I've also tried applying white-space: nowrap to the #term1-container DIV, based on posts I've seen here on SO. And just to be a little more complete, here's the jQuery that creates the split-button: $(".operator-split-button").button().click( function() { alert( "foo" ); }).next().button( { text: false, icons: { primary: "ui-icon-triangle-1-s" } }).click( function(){positionOperatorsMenu();} ) })

    Read the article

  • Gotchas INSERTing into SQLite on Android?

    - by paul.meier
    Hi friends, I'm trying to set up a simple SQLite database in Android, handling the schema via a subclass of SQLiteOpenHelper. However, when I query my tables, the columns I think I've inserted are never present. Namely, in SQLiteOpenHelper's onCreate(SQLiteDatabase db) method, I use db.execSQL() to run CREATE TABLE commands, then have tried both db.execSQL and db.insert() to run INSERT commands on the tables I've just created. This appears to run fine, but when I try to query them I always get 0 rows returned (for debugging, the queries I'm running are simple SELECT * FROM table and checking the Cursor's getCount()). Anybody run into anything like this before? These commands seem to run on command-line sqlite3. Are they're gotchas that I'm missing (e.g. INSERTS must/must not be semicolon terminated, or some issue involving multiple tables)? I've attached some of the code below. Thanks for your time, and let me know if I can clarify further. @Override public void onCreate(SQLiteDatabase db) { db.execSQL("CREATE TABLE "+ LEVEL_TABLE +" (" + " "+ _ID +" INTEGER PRIMARY KEY AUTOINCREMENT," + " level TEXT NOT NULL,"+ " rows INTEGER NOT NULL,"+ " cols INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ DYNAMICS_TABLE +" (" + " level_id INTEGER NOT NULL," + " row INTEGER NOT NULL,"+ " col INTEGER NOT NULL,"+ " type INTEGER NOT NULL);"); db.execSQL("CREATE TABLE "+ SCORE_TABLE +" (" + " level_id INTEGER NOT NULL," + " score INTEGER NOT NULL,"+ " date_achieved DATE NOT NULL,"+ " name TEXT NOT NULL);"); this.enterFirstLevel(db); } And a sample of the insert code I'm currently using, which gets called in enterFirstLevel() (some values hard-coded just to get it running...): private void insertDynamic(SQLiteDatabase db, int row, int col, int type) { ContentValues values = new ContentValues(); values.put("level_id", "1"); values.put("row", Integer.toString(row)); values.put("col", Integer.toString(col)); values.put("type", Integer.toString(type)); db.insertOrThrow(DYNAMICS_TABLE, "col", values); } Finally, query code looks like this: private Cursor fetchLevelDynamics(int id) { SQLiteDatabase db = this.leveldata.getReadableDatabase(); try { String fetchQuery = "SELECT * FROM " + DYNAMICS_TABLE; String[] queryArgs = new String[0]; Cursor cursor = db.rawQuery(fetchQuery, queryArgs); Activity activity = (Activity) this.context; activity.startManagingCursor(cursor); return cursor; } finally { db.close(); } }

    Read the article

  • How to properly preload images, js and css files?

    - by Kenny Bones
    Hi, I'm creating a website from scratch and I was really into this in the late 90's, but the web has changed a lot since then! And I'm more of a designer, so when I started putting this site together I basically did a system of PHP includes to make the site more "dynamic".

    When you first visit the site, you'll be presented with a logon screen, if you're not already logged on (cookies). If you're not logged on, a page called access.php is introduced. I thought I'd preload the most heavy images at this point, so that when the user is done logging on, the images are already cached. And this is working as I want. But I still notice that the biggest image still isn't rendered immediately anyway. So it seems kinda pointless.

    All of this has made me rethink how the site is structured and how scripts and css files are loaded. Using FireBug and YSlow with Firefox I see a few pointers like expires headers and reducing the size of each script. But is this really the culprit? For example, would this be really really stupid in the main index.php? The entire site is basically structured like this:

      <?php require("dbconnect.php"); ?>
      <?php include ("head.php"); ?>

    And below this is basically just the body and the content of the site. Head.php however consists of the doctype, head portions, linking of two css style sheets, the jQuery library, the jQuery validation engine, Cufon and the Cufon font file, and then the small Cufon.Replace snippet. The rest of the body comes with the index.php file, but at the bottom of this again is an include of a file called "footer.php" which basically consists of the loading of a couple of jsLoader scripts and a slidepanel, and then a js function.

    All of this makes the end page source look like a typical complete webpage, but I'm wondering if any of you can see immediately that "this is really really stupid" and "don't do that, do this instead" etc. :) Are includes a bad way to go?

    This site is also pretty image intensive and I can probably do a little more optimization. But I don't think that's the primary culprit. YSlow gives me a report of what takes up the most space:

      doc(1)      -   5.8K
      js(5)       - 198.7K
      css(2)      -   5.6K
      cssimage(8) - 634.7K
      image(6)    - 110.8K

    I know it looks like it's cssimage(8) that weighs the most, but I've already preloaded these images from before and it doesn't really affect the rendering.

    Read the article

  • What makes great software?

    - by VirtuosiMedia
    From the perspective of an end user, what makes a software great rather than just good or functional? What are some fundamental principles that can shift the way a software is used and perceived? What are some of the little finishing touches that help put an application over the top? I'm in the later stages of developing a web app and I'm looking for ideas or concepts that I may have missed. If you have specific examples of software or apps that you absolutely love, please share the reasons or features that make it special. Keep in mind that I'm looking for examples that directly affect the end user, but not necessarily just UI suggestions. Here are some of the principles and little touches I'm trying to use: Keep the UI as simple as possible. Remove absolutely everything that isn't necessary. Use progressive disclosure when more information can be needed sometimes but isn't needed all the time. Provide inline help and useful error messages. Verbs on buttons wherever possible. Make anything that's clickable obvious. Fast, responsive UI. Accessibility (this is a work in progress). Reusable UI patterns. Once a user learns a skill, they will be able to use it in multiple places. Intelligent default settings. Auto-focusing forms when filling out the form is the primary action to be taken on the page. Clear metaphors (like tabs) and headings indicating location within the app. Automating repetitive tasks (with the ability to disable the automation). Use standardized or accepted metaphors for icons (like an "x" for delete). Larger text sizes for improved readability. High contrast so that each section is distinct. Making sure that it's obvious on every page what the user is supposed to do by establishing a clear information hierarchy and drawing the eye to the call to action. Most deletions can be undone. Discoverability - Make it easy to learn how to do new tasks. Group similar elements together.

    Read the article

  • Methodology for a Rails app

    - by Aaron Vegh
    I'm undertaking a rather large conversion from a legacy database-driven Windows app to a Rails app. Because of the large number of forms and database tables involved, I want to make sure I've got the right methodology before getting too far. My chief concern is minimizing the amount of code I have to write. There are many models that interact together, and I want to make sure I'm using them correctly. Here's a simplified set of models:

      class Patient < ActiveRecord::Base
        has_many :PatientAddresses
        has_many :PatientFileStatuses
      end

      class PatientAddress < ActiveRecord::Base
        belongs_to :Patient
      end

      class PatientFileStatus < ActiveRecord::Base
        belongs_to :Patient
      end

    The controller determines if there's a Patient selected; everything else is based on that. In the view, I will be needing data from each of these models. But it seems like I have to write an instance variable in my controller for every attribute that I want to use. So I start writing code like this:

      @patient = Patient.find(session[:patient])
      @patient_addresses = @patient.PatientAddresses
      @patient_file_statuses = @patient.PatientFileStatuses
      @enrollment_received_when = @patient_file_statuses[0].EnrollmentReceivedWhen
      @consent_received = @patient_file_statuses[0].ConsentReceived
      @consent_received_when = @patient_file_statuses[0].ConsentReceivedWhen

    The first three lines grab the Patient model and its relations. The next three lines are examples of my providing values to the view from one of those relations. The view has a combination of text fields and select fields to show the data above. For example:

      <%= select("patientfilestatus", "ConsentReceived",
                 {"val1" => "val1", "val2" => "val2", "Written" => "Written"},
                 :include_blank => true) %>
      <%= calendar_date_select_tag "patient_file_statuses[EnrollmentReceivedWhen]",
                                   @enrollment_complete_when, :popup => :force %>

    (BTW, the select tag isn't really working; I think I have to use collection_select?)

    My questions are:
      - Do I have to manually declare the value of every instance variable in the controller, or can/should I do it within the view?
      - What is the proper technique for displaying a select tag for data that's not the primary model?
      - When I go to save changes to this form, will I have to manually pick out the attributes for each model and save them individually? Or is there a way to name the fields such that ActiveRecord does the right thing?

    Thanks in advance, Aaron.

    Read the article

  • How do I write sql data into a textbox after a submit type event

    - by Matt
    Finishing up some homework and I'm having trouble figuring out how to take the information generated in a SQL column (a primary key set up to assign a record number to a customer, example 1046) at submit time and write it to my redirected receipt page, receipt.aspx. Any takers? My professor says to use a DataReader... but things go bad after that.

      public partial class _Default : System.Web.UI.Page
      {
          String cnStr = "EDITED FOR THE PURPOSE OF NOT DISPLAYED SQL SERVERta Source=111.11.111.11; uid=xxxxxxx; password=xxxx; database=xxxxxx; ";
          String insertStr;
          SqlDataReader reader;
          SqlConnection myConnection = new SqlConnection();

          protected void submitbutton_Click(object sender, EventArgs e)
          {
              myConnection.ConnectionString = cnStr;
              try
              {
                  //more magic happens as myConnection opens
                  myConnection.Open();
                  insertStr = "insert into connectAssignment values ('" + TextBox2.Text + "','" + TextBox3.Text + "','" + TextBox4.Text + "','" + TextBox5.Text + "','" + bigtextthing.Text + "','" + DropDownList1.SelectedItem.Value + "')";
                  //magic happens as the connection string is assigned to the connection object and the SQL statement is passed in
                  //associate the command to the myConnection connection object
                  SqlCommand cmd = new SqlCommand(insertStr, myConnection);
                  cmd.ExecuteNonQuery();
                  Session["passmyvalue1"] = TextBox2.Text;
                  Session["passmyvalue2"] = TextBox3.Text;
                  Session["passmyvalue3"] = TextBox4.Text;
                  Session["passmyvalue4"] = TextBox5.Text;
                  Session["passmyvalue5"] = bigtextthing.Text;
                  Session["I NEED SOME HELP RIGHT HERE"] = Textbox6.Text;
                  Response.Redirect("receipt.aspx");
              }
              catch
              {
                  bigtextthing.Text = "Error submitting" + "Possible causes: Internet is down, server is down, check your settings!";
              }
              finally
              {
                  myConnection.Close();
                  TextBox2.Text = "";
                  TextBox3.Text = "";
                  TextBox4.Text = "";
                  TextBox5.Text = "";
                  bigtextthing.Text = "";
              }
              //reset validators?
          }
      }

    The receipt page:

      public partial class _Default : System.Web.UI.Page
      {
          protected void Page_Load(object sender, EventArgs e)
          {
              if (Session["passmyvalue1"] != null)
              {
                  TextBox1.Text = (string)Session["passmyvalue1"];
                  TextBox2.Text = (string)Session["passmyvalue2"];
                  TextBox3.Text = (string)Session["passmyvalue3"];
                  TextBox4.Text = (string)Session["passmyvalue4"];
                  TextBox5.Text = (string)Session["passmyvalue5"];
                  TextBox6.Text = I don't know ;
              }
          }
      }

    Thanks for the help.
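
    Since the record number comes from an identity/primary key column, the usual approach is to ask SQL Server for it in the same batch as the INSERT and read that value back before redirecting. A sketch of the SQL side (parameter names are placeholders; the real column list of connectAssignment isn't shown in the question):

      -- insert the row, then return the identity value generated in this scope
      INSERT INTO connectAssignment VALUES (@val1, @val2, @val3, @val4, @val5, @val6);
      SELECT SCOPE_IDENTITY() AS new_record_number;

    On the C# side, that trailing SELECT is what the DataReader (or ExecuteScalar) reads; the returned number can then be stored in Session alongside the other values and displayed on receipt.aspx.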

    Read the article

  • openDatabase Hello World

    - by cf_PhillipSenn
    I'm trying to learn about openDatabase, and I think this I'm getting it to INSERT INTO TABLE1, but I can't verify that the SELECT * FROM TABLE1 is working. <html> <head> <script src="http://www.google.com/jsapi"></script> <script type="text/javascript"> google.load("jquery", "1"); </script> <script type="text/javascript"> var db; $(function(){ db = openDatabase('HelloWorld'); db.transaction( function(transaction) { transaction.executeSql( 'CREATE TABLE IF NOT EXISTS Table1 ' + ' (TableID INTEGER NOT NULL PRIMARY KEY AUTOINCREMENT, ' + ' Field1 TEXT NOT NULL );' ); } ); db.transaction( function(transaction) { transaction.executeSql( 'SELECT * FROM Table1;',function (transaction, result) { for (var i=0; i < result.rows.length; i++) { alert('1'); $('body').append(result.rows.item(i)); } }, errorHandler ); } ); $('form').submit(function() { var xxx = $('#xxx').val(); db.transaction( function(transaction) { transaction.executeSql( 'INSERT INTO Table1 (Field1) VALUES (?);', [xxx], function(){ alert('Saved!'); }, errorHandler ); } ); return false; }); }); function errorHandler(transaction, error) { alert('Oops. Error was '+error.message+' (Code '+error.code+')'); transaction.executeSql('INSERT INTO errors (code, message) VALUES (?, ?);', [error.code, error.message]); return false; } </script> </head> <body> <form method="post"> <input name="xxx" id="xxx" /> <p> <input type="submit" name="OK" /> </p> <a href="http://www.google.com">Cancel</a> </form> </body> </html>

    Read the article

  • Generating Unordered List with PHP + CodeIgniter from a MySQL Database

    - by Tim
    Hello Everyone, I am trying to build a dynamically generated unordered list in the following format using PHP. I am using CodeIgniter but it can just be normal php. This is the end output I need to achieve. <ul id="categories" class="menu"> <li rel="1"> Arts &amp; Humanities <ul> <li rel="2"> Photography <ul> <li rel="3"> 3D </li> <li rel="4"> Digital </li> </ul> </li> <li rel="5"> History </li> <li rel="6"> Literature </li> </ul> </li> <li rel="7"> Business &amp; Economy </li> <li rel="8"> Computers &amp; Internet </li> <li rel="9"> Education </li> <li rel="11"> Entertainment <ul> <li rel="12"> Movies </li> <li rel="13"> TV Shows </li> <li rel="14"> Music </li> <li rel="15"> Humor </li> </ul> </li> <li rel="10"> Health </li> And here is my SQL that I have to work with. -- -- Table structure for table `categories` -- CREATE TABLE IF NOT EXISTS `categories` ( `id` mediumint(8) NOT NULL auto_increment, `dd_id` mediumint(8) NOT NULL, `parent_id` mediumint(8) NOT NULL, `cat_name` varchar(256) NOT NULL, `cat_order` smallint(4) NOT NULL, PRIMARY KEY (`id`) ) ENGINE=MyISAM DEFAULT CHARSET=latin1 AUTO_INCREMENT=1 ; So I know that I am going to need at least 1 foreach loop to generate the first level of categories. What I don't know is how to iterate inside each loop and check for parents and do that in a dynamic way so that there could be an endless tree of children. Thanks for any help you can offer. Tim
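
    On the SQL side, one query is enough for an arbitrarily deep tree as long as the rows come back grouped by parent; the nesting itself then happens in PHP by indexing the rows on parent_id and recursing. A sketch of such a query against the categories table above:

      -- fetch the whole tree in one pass, with each parent's children
      -- arriving together and already in display order
      SELECT id, parent_id, cat_name
      FROM categories
      ORDER BY parent_id, cat_order;

    The PHP loop files each row into something like $children[$parent_id][], and a small recursive function then prints the <ul> for the root entries (whatever parent_id value marks a root), calling itself for any id that appears as a key in $children.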

    Read the article

  • What database table structure should I use for versions, codebases, deployables?

    - by Zac Thompson
    I'm having doubts about my table structure, and I wonder if there is a better approach. I've got a little database for version control repositories (e.g. SVN), the packages (e.g. Linux RPMs) built therefrom, and the versions (e.g. 1.2.3-4) thereof. A given repository might produce no packages, or several, but if there are more than one for a given repository then a particular version for that repository will indicate a single "tag" of the codebase. A particular version "string" might be used to tag a version of the source code in more than one repository, but there may be no relationship between "1.0" for two different repos. So if packages P and Q both come from repo R, then P 1.0 and Q 1.0 are both built from the 1.0 tag of repo R. But if package X comes from repo Y, then X 1.0 has no relationship to P 1.0. In my (simplified) model, I have the following tables (the x_id columns are auto-incrementing surrogate keys; you can pretend I'm using a different primary key if you wish, it's not really important): repository - repository_id - repository_name (unique) ... version - version_id - version_string (unique for a particular repository) - repository_id ... package - package_id - package_name (unique) - repository_id ... This makes it easy for me to see, for example, what are valid versions of a given package: I can join with the version table using the repository_id. However, suppose I would like to add some information to this database, e.g., to indicate which package versions have been approved for release. I certainly need a new table: package_version - version_id - package_id - package_version_released ... Again, the nature of the keys that I use are not really important to my problem, and you can imagine that the data column is "promotion_level" or something if that helps. My doubts arise when I realize that there's really a very close relationship between the version_id and the package_id in my new table ... they must share the same repository_id. Only a small subset of package/version combinations are valid. So I should have some kind of constraint on those columns, enforcing that ... ... I don't know, it just feels off, somehow. Like I'm including somehow more information than I really need? I don't know how to explain my hesitance here. I can't figure out which (if any) normal form I'm violating, but I also can't find an example of a schema with this sort of structure ... not being a DBA by profession I'm not sure where to look. So I'm asking: am I just being overly sensitive?
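
    For what it's worth, the "same repository_id" rule can be stated declaratively, which is usually what that nagging feeling of redundancy points at: keep repository_id in package_version and let a pair of composite foreign keys guarantee that the package and the version both belong to it. A sketch (constraint names and column types are assumptions layered on the simplified schema above):

      -- composite alternate keys so (id, repository) pairs can be referenced
      ALTER TABLE package ADD CONSTRAINT uq_package_repo UNIQUE (package_id, repository_id);
      ALTER TABLE version ADD CONSTRAINT uq_version_repo UNIQUE (version_id, repository_id);

      CREATE TABLE package_version (
          package_id               INT NOT NULL,
          version_id               INT NOT NULL,
          repository_id            INT NOT NULL,
          package_version_released BOOLEAN,
          PRIMARY KEY (package_id, version_id),      -- composite primary key: one row per package/version pair
          FOREIGN KEY (package_id, repository_id) REFERENCES package (package_id, repository_id),
          FOREIGN KEY (version_id, repository_id) REFERENCES version (version_id, repository_id)
      );

    Because both composite foreign keys share the single repository_id column, a row can only exist when the package and version come from the same repository, so the "extra" column is precisely what lets the database enforce the small subset of valid combinations.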

    Read the article

  • How to use a LinkButton inside gridview to delete selected username in the code-behind file?

    - by jenifer
    I have a "UserDetail" table in my "JobPost.mdf". I have a "Gridview1" showing the column from "UserDetail" table,which has a primary key "UserName". This "UserName" is originally saved using Membership class function. Now I add a "Delete" linkbutton to the GridView1. This "Delete" is not autogenerate button,I dragged inside the column itemtemplate from ToolBox. The GridView1's columns now become "Delete_LinkButton"+"UserName"(within the UserDetail table)+"City"(within the UserDetail table)+"IsAdmin"(within the UserDetail table) What I need is that by clicking this "delete_linkButton",it will ONLY delete the entire User Entity on the same row (link by the corresponding "UserName") from the "UserDetail" table,as well as delete all information from the AspNetDB.mdf (User,Membership,UserInRole,etc). I would like to fireup a user confirm,but not mandatory. At least I am trying to make it functional in the correct way. for example: Command UserName City IsAdmin delete ken Los Angles TRUE delete jim Toronto FALSE When I click "delete" on the first row, I need all the record about "ken" inside the "UserDetail" table to be removed. Meanwhile, all the record about "ken" in the AspNetDB.mdf will be gone, including UserinRole table. I am new to asp.net, so I don't know how to pass the commandargument of the "Delete_LinkButton" to the code-behind file LinkButton1_Click(object sender, EventArgs e), because I need one extra parameter "UserName". My partial code is listed below: <asp:TemplateField> <ItemTemplate> <asp:LinkButton ID="Delete_LinkButton" runat="server" onclick="LinkButton1_Click1" CommandArgument='<%# Eval("UserName","{0}") %>'>LinkButton</asp:LinkButton> </ItemTemplate> </asp:TemplateField> protected void Delete_LinkButton_Click(object sender, EventArgs e) { ((LinkButton) GridView1.FindControl("Delete_LinkButton")).Attributes.Add("onclick", "'return confirm('Are you sure you want to delete {0} '" + UserName); Membership.DeleteUser(UserName); JobPostDataContext db = new JobPostDataContext(); var query = from u in db.UserDetails where u.UserName == UserName select u; for (var Item in query) { db.UserDetails.DeleteOnSubmit(Item); } db.SubmitChanges(); } Please do help! Thanks in advance.

    Read the article

  • basic operations for modifying a source document with XSLT

    - by SpliFF
    All the tutorials and examples I've found of XSLT processing seem to assume your destination will be a significantly different format/structure to your source and that you know the structure of the source in advance. I'm struggling with finding out how to perform simple "in-place" modifications to a HTML document without knowing anything else about its existing structure. Could somebody show me a clear example that, given an arbitrary unknown HTML source will: 1.) delete the classname 'foo' from all divs 2.) delete a node if its empty (ie <p></p>) 3.) delete a <p> node if its first child is <br> 4.) add newattr="newvalue" to all H1 5.) replace 'heading' in text nodes with 'title' 6.) wrap all <u> tags in <b> tags (ie, <u>foo</u> -> <b><u>foo</u></b>) 7.) output the transformed document without changing anything else The above examples are the primary types of transform I wish to accomplish. Understanding how to do the above will go a long way towards helping me build more complex transforms. To help clarify/test the examples here is a sample source and output, however I must reiterate that I want to work with arbitrary samples without rewriting the XSLT for each source: <!doctype html> <html> <body> <h1>heading</h1> <p></p> <p><br>line</p> <div class="foo bar"><u>baz</u></div> <p>untouched</p> </body> </html> output: <!doctype html> <html> <body> <h1 newattr="newvalue">title</h1> <div class="bar"><b><u>baz</u></b></div> <p>untouched</p> </body> </html>

    Read the article

  • SQL Server - Get Inserted Record Identity Value when Using a View's Instead Of Trigger

    - by CuppM
    For several tables that have identity fields, we are implementing a Row Level Security scheme using Views and Instead Of triggers on those views. Here is a simplified example structure: -- Table CREATE TABLE tblItem ( ItemId int identity(1,1) primary key, Name varchar(20) ) go -- View CREATE VIEW vwItem AS SELECT * FROM tblItem -- RLS Filtering Condition go -- Instead Of Insert Trigger CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) SELECT Name FROM inserted; END go If I want to insert a record and get its identity, before implementing the RLS Instead Of trigger, I used: DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = SCOPE_IDENTITY(); With the trigger, SCOPE_IDENTITY() no longer works - it returns NULL. I've seen suggestions for using the OUTPUT clause to get the identity back, but I can't seem to get it to work the way I need it to. If I put the OUTPUT clause on the view insert, nothing is ever entered into it. -- Nothing is added to @ItemIds DECLARE @ItemIds TABLE (ItemId int); INSERT INTO vwItem (Name) OUTPUT INSERTED.ItemId INTO @ItemIds VALUES ('MyName'); If I put the OUTPUT clause in the trigger on the INSERT statement, the trigger returns the table (I can view it from SQL Management Studio). I can't seem to capture it in the calling code; either by using an OUTPUT clause on that call or using a SELECT * FROM (). -- Modified Instead Of Insert Trigger w/ Output CREATE TRIGGER IO_vwItem_Insert ON vwItem INSTEAD OF INSERT AS BEGIN -- RLS Security Checks on inserted Table -- Insert Records Into Table INSERT INTO tblItem (Name) OUTPUT INSERTED.ItemId SELECT Name FROM inserted; END go -- Calling Code INSERT INTO vwItem (Name) VALUES ('MyName'); The only thing I can think of is to use the IDENT_CURRENT() function. Since that doesn't operate in the current scope, there's an issue of concurrent users inserting at the same time and messing it up. If the entire operation is wrapped in a transaction, would that prevent the concurrency issue? BEGIN TRANSACTION DECLARE @ItemId int; INSERT INTO tblItem (Name) VALUES ('MyName'); SELECT @ItemId = IDENT_CURRENT('tblItem'); COMMIT TRANSACTION Does anyone have any suggestions on how to do this better? I know people out there who will read this and say "Triggers are EVIL, don't use them!" While I appreciate your convictions, please don't offer that "suggestion".
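
    One workaround that sidesteps IDENT_CURRENT relies on the fact that a local temporary table created by the caller is visible inside the trigger for the same session, so the trigger can OUTPUT the new identity values into it when it exists. A sketch (the #NewItemIds table and the existence check are assumptions added to the original scheme):

      -- caller
      CREATE TABLE #NewItemIds (ItemId int);
      INSERT INTO vwItem (Name) VALUES ('MyName');
      SELECT ItemId FROM #NewItemIds;

      -- inside IO_vwItem_Insert, in place of the plain INSERT
      IF OBJECT_ID('tempdb..#NewItemIds') IS NOT NULL
          INSERT INTO tblItem (Name)
          OUTPUT INSERTED.ItemId INTO #NewItemIds (ItemId)
          SELECT Name FROM inserted;
      ELSE
          INSERT INTO tblItem (Name)
          SELECT Name FROM inserted;

    As for the transaction idea at the end: a transaction around the insert does not stop other sessions from inserting in between, so IDENT_CURRENT would still be exposed to the same concurrency problem.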

    Read the article

  • How to reloadData in a tableView when the tableView accesses data from a database

    - by Ajeet Kumar Yadav
    I am new to iPhone development. I am building an application that takes values from a database and displays the data in a table view. In this application we save data from one data table to another data table; this works the first time we add, but the second time we do it the application crashes. I don't understand how to solve this problem. My app delegate code for inserting values from one table to the other is given below:

      -(void)sopinglist {
          //////databaseName= @"SanjeevKapoor.sqlite";
          databaseName =@"AjeetTest.sqlite";
          NSArray *documentPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
          NSString *documentsDir = [documentPaths objectAtIndex:0];
          databasePath =[documentsDir stringByAppendingPathComponent:databaseName];
          [self checkAndCreateDatabase];
          list1 = [[NSMutableArray alloc] init];
          sqlite3 *database;
          if (sqlite3_open([databasePath UTF8String], &database) == SQLITE_OK) {
              if(addStmt == nil) {
                  ///////////const char *sql = "insert into Dataa(item) Values(?)";
                  const char *sql = " insert into Slist select * from alootikki";
                  ///////////// const char *sql =" Update Slist ( Incredients, Recipename,foodtype) Values(?,?,?)";
                  if(sqlite3_prepare_v2(database, sql, -1, &addStmt, NULL) != SQLITE_OK)
                      NSAssert1(0, @"Error while creating add statement. '%s'", sqlite3_errmsg(database));
              }
              /////for( NSString * j in k)
              sqlite3_bind_text(addStmt, 1, [k UTF8String], -1, SQLITE_TRANSIENT);
              //sqlite3_bind_int(addStmt,1,i);
              // sqlite3_bind_text(addStmt, 1, [coffeeName UTF8String], -1, SQLITE_TRANSIENT);
              // sqlite3_bind_double(addStmt, 2, [price doubleValue]);
              if(SQLITE_DONE != sqlite3_step(addStmt))
                  NSAssert1(0, @"Error while inserting data. '%s'", sqlite3_errmsg(database));
              else
                  //SQLite provides a method to get the last primary key inserted by using sqlite3_last_insert_rowid
                  coffeeID = sqlite3_last_insert_rowid(database);
              //Reset the add statement.
              sqlite3_reset(addStmt);
              // sqlite3_clear_bindings(detailStmt);
              //}
          }
          sqlite3_finalize(addStmt);
          addStmt = nil;
          sqlite3_close(database);
      }

    And the table view code for accessing data from the database is given below:

      SanjeevKapoorAppDelegate *appDelegate =(SanjeevKapoorAppDelegate *)[[UIApplication sharedApplication] delegate];
      [appDelegate sopinglist];
      ////[appDelegate recpies];
      /// NSArray *a =[[appDelegate list1] componentsJoinedByString:@","];
      k= [[appDelegate list1] componentsJoinedByString:@","];
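
    Without the crash log this is only a guess, but one fragile spot is the INSERT ... SELECT * statement: it has no ? placeholder, so the sqlite3_bind_text call that follows has nothing to bind to, and SELECT * breaks as soon as the two tables' column counts or order differ. Listing the columns explicitly makes the statement self-describing (the column names below are taken from the commented-out UPDATE in the code and may not match the real schema):

      -- copy only the columns Slist actually needs, by name
      INSERT INTO Slist (Incredients, Recipename, foodtype)
      SELECT Incredients, Recipename, foodtype
      FROM alootikki;

    With a statement like this there is nothing left to bind, so the sqlite3_bind_text call can simply be removed; if a filter is needed, add a WHERE clause with a ? placeholder and bind that value instead.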

    Read the article

  • No Method Error Undefined method 'save' for nil:NilClass

    - by BennyB
    I'm getting this error when i try to create a "Lecture" via my Lecture controller's create method. This used to work but i went on to work on other parts of the app & then of course i come back & something is now throwing this error when a user tries to create a Lecture in my app. I'm sure its something small i'm just overlooking (been at it a while & probably need to take a break)...but I'd appreciate if someone could let me know why this is happening...let me know if i need to post anything else...thx! The error I get NoMethodError in LecturesController#create undefined method `save' for nil:NilClass Rails.root: /Users/name/Sites/rails_projects/app_name Application Trace | Framework Trace | Full Trace app/controllers/lectures_controller.rb:13:in `create' My view to create a new Lecture views/lectures/new.html.erb <% provide(:title, 'Start a Lecture') %> <div class="container"> <div class="content-wrapper"> <h1>Create a Lecture</h1> <div class="row"> <div class="span 6 offset3"> <%= form_for(@lecture) do |f| %> <%= render 'shared/error_messages', :object => f.object %> <div class="field"> <%= f.text_field :title, :placeholder => "What will this Lecture be named?" %> <%= f.text_area :content, :placeholder => "Describe this Lecture & what will be learned..." %> </div> <%= f.submit "Create this Lecture", :class => "btn btn-large btn-primary" %> <% end %> </div> </div> </div> </div> Then my controller where its saying the error is coming from controllers/lectures_controller.rb class LecturesController < ApplicationController before_filter :signed_in_user, :only => [:create, :destroy] before_filter :correct_user, :only => :destroy def index end def new @lecture = current_user.lectures.build if signed_in? end def create if @lecture.save flash[:success] = "Lecture created!" redirect_to @lecture else @activity_items = [ ] render 'new' end end def show @lecture = Lecture.find(params[:id]) end def destroy @lecture.destroy redirect_to root_path end private def correct_user @lecture = current_user.lectures.find_by_id(params[:id]) redirect_to root_path if @lecture.nil? end

    Read the article

  • Generate A Simple Read-Only DAL?

    - by David
    I've been looking around for a simple solution to this, trying my best to lean towards something like NHibernate, but so far everything I've found seems to be trying to solve a slightly different problem. Here's what I'm looking at in my current project: We have an IBM iSeries database as a primary repository for a third party software suite used for our core business (a financial institution). Part of what my team does is write applications that report on or key off of a lot of this data in some way. In the past, we've been manually creating ADO .NET connections (we're using .NET 3.5 and Visual Studio 2008, by the way) and manually writing queries, etc. Moving forward, I'd like to simplify the process of getting data from there for the development team. Rather than creating connections and queries and all that each time, I'd much rather a developer be able to simply do something like this: var something = (from t in TableName select t); And, ideally, they'd just get some IQueryable or IEnumerable of generated entities. This would be done inside a new domain core that I'm building where these entities would live and the applications would interface with it through a request/response service layer. A few things to note are: The entities that correspond to the database tables should be generated once and we'd prefer to manually keep them updated over time. That is, if columns/tables are added to the database then we shouldn't have to do anything. (If some are deleted, of course, it will break, but that's fine.) But if we need to use a new column, we should be able to just add it to the necessary class(es) without having to re-gen the whole thing. The whole thing should be SELECT-only. We're not doing a full DAL here because we don't want to be able to break anything in the database (even accidentally). We don't need any kind of mapping between our domain objects and the generated entity types. The domain barely covers a fraction of the data that's in there, most of it we'll never need, and we would rather just create re-usable maps manually over time. I already have a logical separation for the DAL where my "repository" classes return domain objects, I'm just looking for a better alternative to manual ADO to be used inside the repository classes. Any suggestions? It seems like what I'm doing is just enough outside the normal demand for DAL/ORM tools/tutorials online that I haven't been able to find anything. Or maybe I'm just overlooking something obvious?

    Read the article

  • Insert Registration Data in MySQL using PHP

    - by J M 4
    I may not be asking this in the best way possible but i will try my hardest. Thank you ahead of time for your help: I am creating an enrollment website which allows an individual OR manager to enroll for medical testing services for professional athletes. I will NOT be using the site as a query DB which anybody can view information stored within the database. The information is instead simply stored, and passed along in a CSV format to our network provider so they can use as needed after the fact. There are two possible scenarios: Scenario 1 - Individual Enrollment If an individual athlete chooses to enroll him/herself, they enter their personal information, submit their payment information (credit/bank account) for processing, and their information is stored in an online database as Athlete1. Scenario 2 - Manager Enrollment If a manager chooses to enroll several athletes he manages/ promotes for, he enters his personal information, then enters the personal information for each athlete he wishes to pay for (name, address, ssn, dob, etc), then submits payment information for ALL athletes he is enrolling. This number can range from 1 single athlete, up to 20 athletes per single enrollment (he can return and complete a follow up enrollment for additional athletes). Initially, I was building the database to house ALL information regardless of enrollment type in a single table which housed over 400 columns (think 20 athletes with over 10 fields per athlete such as name, dob, ssn, etc). Now that I think about it more, I believe create multiple tables (manager(s), athlete(s)) may be a better idea here but still not quite sure how to go about it for the following very important reasons: Issue 1 If I list the manager as the parent table, I am afraid the individual enrolling athlete will not show up in the primary table and will not be included in the overall registration file which needs to be sent on to the network providers. Issue 2 All athletes being enrolled by a manager are being stored in SESSION as F1FirstName, F2FirstName where F1 and F2 relate to the id of the fighter. I am not sure technically speaking how to store multiple pieces of information within the same table under separate rows using PHP. For example, all athleteswill have a first name. The very basic theory of what i am trying to do is: If number_of_athletes 1, store F1FirstName in row 1, column 1 of Table "Athletes"; store F1LastName in row 1, column 2 of Table "Athletes"; store F2FirstName in row 2, column 1 of Table "Athletes"; store F2LastName in row 2, column 2 of table "Athletes"; Does this make sense? I know this question is very long and probably difficult so i appreciate the guidance.
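
    Rather than one very wide table or per-position column groups (F1FirstName, F2FirstName, ...), the usual shape for this is one row per athlete with an optional link back to the manager who enrolled them; that covers both scenarios and any number of athletes per enrollment. A MySQL-style sketch (table and column names are illustrative, not the project's actual schema):

      CREATE TABLE managers (
          id         INT AUTO_INCREMENT PRIMARY KEY,
          first_name VARCHAR(60) NOT NULL,
          last_name  VARCHAR(60) NOT NULL
          -- address, payment reference, etc.
      );

      CREATE TABLE athletes (
          id         INT AUTO_INCREMENT PRIMARY KEY,
          manager_id INT NULL,                         -- NULL for a self-enrolled athlete (Scenario 1)
          first_name VARCHAR(60) NOT NULL,
          last_name  VARCHAR(60) NOT NULL,
          ssn        VARCHAR(11) NOT NULL,
          dob        DATE NOT NULL,
          -- remaining per-athlete fields...
          FOREIGN KEY (manager_id) REFERENCES managers (id)
      );

    In PHP the F1.../F2... session values then become one INSERT per athlete inside a loop over the number of athletes, individual enrollees land in the same athletes table (just with manager_id NULL), and the CSV for the network provider is a single SELECT over athletes, optionally joined to managers.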

    Read the article

  • How do I go about linking web content in a database with a nested set model?

    - by wb
    My nested set table is as follows. create table depts ( id int identity(0, 1) primary key , lft int , rgt int , name nvarchar(60) , abbrv nvarchar(20) ); Test departments. insert into depts (lft, rgt, name, abbrv) values (1, 14, 'root', 'r'); insert into depts (lft, rgt, name, abbrv) values (2, 3, 'department 1', 'd1'); insert into depts (lft, rgt, name, abbrv) values (4, 5, 'department 2', 'd2'); insert into depts (lft, rgt, name, abbrv) values (6, 13, 'department 3', 'd3'); insert into depts (lft, rgt, name, abbrv) values (7, 8, 'sub department 3.1', 'd3.1'); insert into depts (lft, rgt, name, abbrv) values (9, 12, 'sub department 3.2', 'd3.2'); insert into depts (lft, rgt, name, abbrv) values (10, 11, 'sub sub department 3.2.1', 'd3.2.1'); My web content table is as follows. create table content ( id int identity(0, 1) , dept_id int , page_name nvarchar(60) , content ntext ); Test content. insert into content (dept_id, page_name, content) values (3, 'index', '<h2>welcome to department 3!</h2>'); insert into content (dept_id, page_name, content) values (4, 'index', '<h2>welcome to department 3.1!</h2>'); insert into content (dept_id, page_name, content) values (6, 'index', '<h2>welcome to department 3.2.1!</h2>'); insert into content (dept_id, page_name, content) values (2, 'what-doing', '<h2>what is department 2 doing?/h2>'); I'm trying to query the correct page content (from the content table) based on the url given. I can easily accomplish this task with a root department. However, querying a department with multiple depths is proving to be a little harder. For example: http://localhost/departments.asp?d3/ (Should return <h2>welcome to department 3!</h2>) http://localhost/departments.asp?d2/what-doing (Should return <h2>what is department 2 doing?</h2>) I'm not sure if this can be create in one query or if there will need to be a recursive function of some sort. Also, if there is nothing after the last / then assume we want the index page. How can this be accomplished? Thank you.
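
    Since the URL already carries the department abbreviation and an optional page name, the content lookup itself doesn't need recursion; the nested-set columns only matter for things like breadcrumbs or ancestor fallbacks. A sketch of the single-query lookup (it assumes abbrv is unique across depts and that a missing trailing segment defaults to the 'index' page):

      -- departments.asp?d3/            ->  abbrv = 'd3', page = 'index'
      -- departments.asp?d2/what-doing  ->  abbrv = 'd2', page = 'what-doing'
      SELECT c.content
      FROM depts d
      JOIN content c ON c.dept_id = d.id
      WHERE d.abbrv = 'd3'
        AND c.page_name = 'index';

    If an ancestor chain is ever needed (say, to fall back to a parent department's page when a child has none), that is where lft and rgt come in: joining depts to itself on child.lft BETWEEN parent.lft AND parent.rgt returns all ancestors of a department in one query.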

    Read the article

  • hibernate annotation- extending base class - values are not being set - strange error

    - by gt_ebuddy
    I was following Hibernate: Use a Base Class to Map Common Fields and openjpa inheritance tutorial to put common columns like ID, lastModifiedDate etc in base table. My annotated mappings are as follow : BaseTable : @MappedSuperclass public abstract class BaseTable { @Id @GeneratedValue @Column(name = "id") private int id; @Column(name = "lastmodifieddate") private Date lastModifiedDate; ... Person table - @Entity @Table(name = "Person ") public class Person extends BaseTable implements Serializable{ ... Create statement generated : create table Person (id integer not null auto_increment, lastmodifieddate datetime, name varchar(255), primary key (id)) ; After I save a Person object to db, Person p = new Person(); p.setName("Test"); p.setLastModifiedDate(new Date()); .. getSession().save(p); I am setting the date field but, it is saving the record with generated ID and LastModifiedDate = null, and Name="Test". Insert Statement : insert into Person (lastmodifieddate, name) values (?, ?) binding parameter [1] as [TIMESTAMP] - <null> binding parameter [2] as [VARCHAR] - Test Read by ID query : When I do hibernate query (get By ID) as below, It reads person by given ID. Criteria c = getSession().createCriteria(Person.class); c.add(Restrictions.eq("id", id)); Person person= c.list().get(0); //person has generated ID, LastModifiedDate is null select query select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_ - Found [1] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test] as column [person8_] ReadAll query : //read all Query query = getSession().createQuery("from " + Person.class.getName()); List allPersons=query.list(); Corresponding SQL for read all select query select person0_.id as id8_, person0_.lastmodifieddate as lastmodi8_, person0_.name as person8_ from Person person0_ - Found [1] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test] as column [person8_] - Found [2] as column [id8_] - Found [null] as column [lastmodi8_] - Found [Test2] as column [person8_] But when I print out the list in console, its being more weird. it is selecting List of Person object with ID fields = all 0 (why all 0 ?) LastModifiedDate = null Name fields have valid values I don't know whats wrong here. Could you please look at it? FYI, My Hibernate-core version : 4.1.2, MySQL Connector version : 5.1.9 . In summary, There are two issues here Why I am getting All ID Fields =0 when using read all? Why the LastModifiedDate is not being inserted?

    Read the article
