Search Results

Search found 10042 results on 402 pages for 'orient db'.


  • Problem saving and retrieving an image in a SQL database from C#

    - by Mobin
    I used this code to insert records into a Person table in my DB: System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); this.personTableAdapter1.Insert(NIC_Box.Text.Trim(), Application_Box.Text.Trim(), Name_Box.Text.Trim(), Father_Name_Box.Text.Trim(), DOB_TimePicker.Value.Date, Address_Box.Text.Trim(), City_Box.Text.Trim(), Country_Box.Text.Trim(), ms.GetBuffer()); When I retrieve the image with this code, it works perfectly fine: byte[] image = (byte[])Person_On_Application.Rows[0][8]; MemoryStream Stream = new MemoryStream(); Stream.Write(image, 0, image.Length); Bitmap Display_Picture = new Bitmap(Stream); Image_Box.Image = Display_Picture; But if I update the row with my own generated query, like System.IO.MemoryStream ms = new System.IO.MemoryStream(); Image img = Image_Box.Image; img.Save(ms, System.Drawing.Imaging.ImageFormat.Bmp); Query = "UPDATE Person SET Person_Image ='"+ms.GetBuffer()+"' WHERE (Person_NIC = '"+NIC_Box.Text.Trim()+"')"; then the next time I use the same retrieval code to display the image, the program throws an exception.
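
    For illustration, a minimal sketch of the update written as a parameterized command, so the image bytes are sent as binary data instead of being concatenated into the SQL text (string concatenation of a byte[] produces the literal string "System.Byte[]", which is not a valid image when read back). Table and column names are taken from the snippet above; connectionString is assumed, as are the System.Data and System.Data.SqlClient namespaces:

        using (var conn = new SqlConnection(connectionString))
        using (var cmd = new SqlCommand(
            "UPDATE Person SET Person_Image = @image WHERE Person_NIC = @nic", conn))
        {
            // ToArray() copies only the written bytes; GetBuffer() can include unused slack
            cmd.Parameters.Add("@image", SqlDbType.VarBinary, -1).Value = ms.ToArray();
            cmd.Parameters.AddWithValue("@nic", NIC_Box.Text.Trim());
            conn.Open();
            cmd.ExecuteNonQuery();
        }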


  • Dynamically created controls and responding to control events

    - by Dirk
    I'm working on an ASP.NET project in which the vast majority of the forms are generated dynamically at run time (form definitions are stored in a DB for customizability). Therefore, I have to dynamically create and add my controls to the Page every time OnLoad fires, regardless of IsPostBack. This has been working just fine and .NET takes care of managing ViewState for these controls. protected override void OnLoad(EventArgs e) { base.OnLoad(e); RenderDynamicControls(); } private void RenderDynamicControls(){ //1. call service layer to retrieve form definition //2. create and add controls to page container } I have a new requirement in which, if a user clicks on a given button (this button is created at design time), the page should be re-rendered in a slightly different way. So in addition to the code that executes in OnLoad (i.e. RenderDynamicControls()), I have this code: protected void MyButton_Click(object sender, EventArgs e) { RenderDynamicControlsALittleDifferently(); } private void RenderDynamicControlsALittleDifferently() { //1. clear all controls from the page container added in RenderDynamicControls() //2. call service layer to retrieve form definition //3. create and add controls to page container } My question is, is this really the only way to accomplish what I'm after? It seems beyond hacky to effectively render the form twice simply to respond to a button click. I gather from my research that this is simply how the page lifecycle works in ASP.NET: namely, that OnLoad must fire on every postback before child events are invoked. Still, it's worthwhile to check with the SO community before having to drink the kool-aid. On a related note, once I get this feature completed, I'm planning on throwing an UpdatePanel on the page to perform the page updates via Ajax. Any code/advice that makes that transition easier would be much appreciated. Thanks
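
    As an illustration only, a minimal sketch of one way the rebuild could be keyed off a flag kept in ViewState, so the click handler just flips the flag and calls the same rebuild method once. The member names UseAlternateLayout and placeHolder are assumptions, not part of the question's code:

        private bool UseAlternateLayout
        {
            get { return ViewState["UseAlternateLayout"] != null && (bool)ViewState["UseAlternateLayout"]; }
            set { ViewState["UseAlternateLayout"] = value; }
        }

        protected void MyButton_Click(object sender, EventArgs e)
        {
            UseAlternateLayout = true;          // remembered across later postbacks
            placeHolder.Controls.Clear();       // placeHolder: the page container (assumed)
            RenderDynamicControls();            // rebuild once, in the new mode
        }

        private void RenderDynamicControls()
        {
            // 1. call the service layer to retrieve the form definition
            // 2. create and add controls, branching on UseAlternateLayout for the
            //    slightly different rendering
        }

    On subsequent postbacks OnLoad sees the same flag, so the form is only ever built one way per request.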


  • GET and XMLHttpRequest

    - by Neethusha
    I have an XMLHttpRequest. The request passes a parameter to my PHP server code in /var/www, but I cannot seem to extract the parameter back on the server side. I have pasted both pieces of code below. JavaScript: function getUsers(u) { alert(u);//here u is 'http://start.ubuntu.com/9.10' xmlhttp=new XMLHttpRequest(); var url="http://localhost/servercode.php"+"?q="+u; xmlhttp.onreadystatechange= useHttpResponse; xmlhttp.open("GET",url,true); xmlhttp.send(null); } function useHttpResponse() { if (xmlhttp.readyState==4 ) { var response = eval('('+xmlhttp.responseText+')'); for(i=0;i<response.Users.length;i++) alert(response.Users[i].UserId); } } servercode.php: <?php $q=$_GET["q"]; //$q="http://start.ubuntu.com/9.10"; $con=mysql_connect("localhost","root","blaze"); if(!$con) {die('could not connect to database'.mysql_error()); } mysql_select_db("BLAZE",$con) or die("No such Db"); $result=mysql_query("SELECT * FROM USERURL WHERE URL='$q'"); if($result == null) echo 'nobody online'; else { header('Content-type: text/html'); echo "{\"Users\":["; while($row=mysql_fetch_array($result)) { echo '{"UserId":"'.$row[UsrID].'"},'; } echo "]}"; } mysql_close($con); ?> This is not giving the required result, although the commented statement, where the variable is assigned the value of the argument explicitly, does work and alerts the required output. Somehow the GET parameter is not reaching my PHP, or at least that is how it looks to me. Please help.


  • Auditing in Entity Framework.

    - by Gabriel Susai
    After going through Entity Framework I have a couple of questions on implementing auditing. I want to store each column value that is created or updated in a separate audit table. Right now I am calling SaveChanges(false) to save the records to the DB (so the changes in the context are not yet accepted). Then I get the added/modified records and loop through GetObjectStateEntries. But I don't know how to get the values of the columns that are filled by a stored proc, i.e. createdate, modifieddate, etc. Below is the sample code I am working with. //Get the changed entries (i.e., records) IEnumerable<ObjectStateEntry> changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Modified); //Iterate each ObjectStateEntry (for each record in the updated/modified collection) foreach (ObjectStateEntry entry in changes) { //Iterate the columns in each record and get their old and new value respectively foreach (var columnName in entry.GetModifiedProperties()) { string oldValue = entry.OriginalValues[columnName].ToString(); string newValue = entry.CurrentValues[columnName].ToString(); //Do Some Auditing by sending entityname, columnname, oldvalue, newvalue } } changes = context.ObjectStateManager.GetObjectStateEntries(EntityState.Added); foreach (ObjectStateEntry entry in changes) { if (entry.IsRelationship) continue; var columnNames = (from p in entry.EntitySet.ElementType.Members select p.Name).ToList(); foreach (var columnName in columnNames) { string newValue = entry.CurrentValues[columnName].ToString(); //Do Some Auditing by sending entityname, columnname, value } }
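
    For illustration, a minimal sketch of the "do some auditing" step: a plain DTO plus a small helper that copes with null/DBNull values before writing to the audit table. AuditRecord, AsString and WriteAudit are made-up names for this sketch, not Entity Framework APIs:

        public class AuditRecord
        {
            public string EntityName { get; set; }
            public string ColumnName { get; set; }
            public string OldValue { get; set; }
            public string NewValue { get; set; }
            public DateTime ChangedAt { get; set; }
        }

        private static string AsString(object value)
        {
            return (value == null || value == DBNull.Value) ? null : value.ToString();
        }

        // inside the foreach over entry.GetModifiedProperties():
        var record = new AuditRecord
        {
            EntityName = entry.EntitySet.Name,
            ColumnName = columnName,
            OldValue   = AsString(entry.OriginalValues[columnName]),
            NewValue   = AsString(entry.CurrentValues[columnName]),
            ChangedAt  = DateTime.UtcNow
        };
        // WriteAudit(record);   // persist to the audit table however the app prefers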


  • C# reference collection for storing reference types

    - by ivo s
    I'd like to implement a collection (something like List<T>) which would hold all the objects I create over the entire life span of my application, as if it were an array of pointers in C++. The idea is that when my process starts I can use a central factory to create all objects and then periodically validate/invalidate their state. Basically I want to make sure that my process only deals with valid instances and that I don't re-fetch information I have already fetched from the database. So all my objects will basically be in one place: my collection. A nice thing I can do with this is avoid database calls to get data I already have (even if I updated it after retrieval it is still up to date, provided of course that some other process didn't update it, but that is a different concern). I don't want to call new Customer("James Thomas"); again if I have already initialized James Thomas sometime in the past. Currently I end up with multiple copies of the same object across the appdomain, some out of sync and others in sync, and even though I deal with this using a timestamp field on the MSSQL server, I'd like to keep only one copy per customer in my appdomain (per process would be even better, if possible). I can't use regular collections like List or ArrayList, for example, because I cannot pass parameters by their real local reference, using ref, to their existing Add() methods where I'm creating them, so that's not too good, I think. So how can this be implemented, and can it be implemented at all? A 'linked list' type of class with all methods working with ref and out params is what I'm thinking of now, but it may get ugly pretty quickly. Is there another way to implement such a collection, like RefList<T>.Add(ref T obj)? So the bottom line is: I don't want to re-create an object if I've already created it before during the entire application life, unless I decide to re-create it explicitly (maybe it is out of date or something, so I have to fetch it again from the db). Are there alternatives?
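
    For illustration, a minimal sketch of an identity-map style cache: one shared dictionary keyed by the database id, so the same Customer instance is handed back every time instead of being re-created. The names CustomerCache and LoadFromDatabase are assumptions for the sketch:

        public static class CustomerCache
        {
            private static readonly Dictionary<int, Customer> _byId = new Dictionary<int, Customer>();
            private static readonly object _sync = new object();

            public static Customer Get(int id)
            {
                lock (_sync)
                {
                    Customer customer;
                    if (!_byId.TryGetValue(id, out customer))
                    {
                        customer = LoadFromDatabase(id);   // the single place that hits the db
                        _byId[id] = customer;
                    }
                    return customer;                       // always the same reference per id
                }
            }

            public static void Invalidate(int id)
            {
                lock (_sync) { _byId.Remove(id); }         // force a re-fetch next time
            }

            private static Customer LoadFromDatabase(int id)
            {
                // fetch the row and materialize the Customer here
                throw new NotImplementedException();
            }
        }

    Because callers only ever receive the cached reference, no ref parameters are needed; invalidation (driven by the timestamp column, for example) just evicts the entry.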


  • C++: Avoiding lots of boolean variables for multiple verification conditions in a trading app

    - by Naveen
    Hi, I am a junior dev on a trading app... We have an order refresh verification unit. It has to verify order confirmations from the exchange. We send a bunch of different requests in bulk (NEW, MODIFY, CANCEL) to the exchange... Verification has to happen at most N times, at intervals of T, for all orders. If verification succeeds for all the orders before N retries, fine; otherwise we need to flag verification as unsuccessful. I have done some basic coding, in a hurry, like the pseudocode below: for( N times ) { for_each ( sent_request_order ) // SENT { 1) get all the refreshed orders from DB or shared mem i.e. REFRESHED 2) find current sent order in REFRESHED if( not_found ) not refreshed from exchange, continue to next order if( found ) case NEW : //check for new status, mark verification done case MODIFY : //check for modified status.. //if not mark pending, go to next order, //revisit the same after T time case CANCEL : //check for cancelled status.. //if not mark pending, go to next order, //revisit the same after T time } if( all_verified ) exit from verification. wait ( T sec ) } order_verification_pending, order_verification_done, order_visited, order_not_visited, all_verified, all_not_verified ... a lot of boolean flags used for indication. Is there a better approach for doing this, such as splitting responsibilities across classes? I know this is not a general question, but all these flags are getting tedious to handle...


  • EF Forced Concurrency Checks

    - by Imran
    Hi, I have an issue with EF 4.0 that I hope someone can help with. I currently have an entity that I want to update in a last-in-wins fashion (i.e. ignore concurrency checks and just overwrite what is in the db with what is submitted). It seems Entity Framework not only includes the primary key of the entity in the where clause of the generated SQL, but also any foreign key fields. This is annoying, as it means that I don't get true last-in-wins semantics and need to know what value the FK field had before the update, or I get a concurrency exception. I am aware that this can be short-circuited by including a foreign key field as well as the navigation property on the entity. I would like to avoid this if possible, as it's not a very clean solution. I was just wondering if there is any other way to override this behaviour? It seems like more of a bug than a feature. I have no problem with EF doing concurrency checks if I instruct it to do so, but not being able to bypass concurrency completely is a bit of a hindrance, as there are many valid scenarios where it is not needed.
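
    For what it's worth, a minimal sketch of one pattern that gives last-in-wins behaviour without changing the model: catch the concurrency exception, refresh with ClientWins so the in-memory values are kept, and save again. This assumes an ObjectContext-based EF 4.0 model; context and entity stand for the caller's context and the object being updated, and it works around the extra FK check rather than removing it from the generated WHERE clause:

        try
        {
            context.SaveChanges();
        }
        catch (OptimisticConcurrencyException)
        {
            // re-reads the row, keeps the client's current values, then overwrites the row
            context.Refresh(RefreshMode.ClientWins, entity);
            context.SaveChanges();
        }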


  • Is this the right way to organize my database tables?

    - by Moss
    So I'm making a website that allows users to build contact lists. So there are users, the users have lists, and the lists have contacts. It seems to me that I need 3 tables for this, but I just want to make sure. There would be a User table, of course, and then a "List of Lists" table that has the username and list name as the primary key, along with whatever other info we want to attach to the lists as a whole. Finally, for lack of a better word, the List table, which would again have the username/list-name primary key, then the contact ID and the notes and such that the user attaches to that contact on that specific list. I hope that is a clear explanation. For some reason I feel unsure about this arrangement. For one thing, if the website becomes popular the List table could swell to billions of rows. And it also feels a little weird that everybody's list info is all jumbled up in the same table. I suppose I could create separate tables for each user, and even for each list, but that seems like a bad idea for other reasons. My db explanation assumes I can use foreign keys on my tables, which at the moment isn't actually an option. If I can't get InnoDB tables enabled I will probably use IDs for the lists instead of depending on a compound key. Maybe I should do this anyway?


  • SQL Server 2008 - Shrinking the Transaction Log - Any way to automate?

    - by Albert
    I went in and checked my transaction log the other day and it was something crazy like 15GB. I ran the following code: USE mydb GO BACKUP LOG mydb WITH TRUNCATE_ONLY GO DBCC SHRINKFILE(mydb_log,8) GO which worked fine and shrank it down to 8MB... but the DB in question is a Log Shipping Publisher, and the log is already back up to some 500MB and growing quickly. Is there any way to automate this log shrinking, outside of creating a custom "Execute T-SQL Statement Task" Maintenance Plan Task and hooking it onto my log backup task? If that's the best way then fine... but I was just thinking that SQL Server would have a better way of dealing with this. I thought it was supposed to shrink automatically whenever you took a log backup, but that's not happening (perhaps because of my log shipping, I don't know). Here's my current backup plan: Full backups every night; Transaction log backups once a day, late morning (maybe hook the log shrinking onto this... it doesn't need to be shrunk every day though). Or maybe I just run it once a week, after I run a full backup task? What do you all think?


  • How to use SQLErrorCodeSQLExceptionTranslator and DAO class with @Repository in Spring?

    - by GuidoMB
    I'm using Spring 3.0.2 and I have a class called MovieDAO that uses JDBC to handle the db. I have set the @Repository annotation and I want to convert the SQLException into Spring's DataAccessException. I have the following example: @Repository public class JDBCCommentDAO implements CommentDAO { static JDBCCommentDAO instance; ConnectionManager connectionManager; private JDBCCommentDAO() { connectionManager = new ConnectionManager("org.postgresql.Driver", "postgres", "postgres"); } static public synchronized JDBCCommentDAO getInstance() { if (instance == null) instance = new JDBCCommentDAO(); return instance; } @Override public Collection<Comment> getComments(User user) throws DAOException { Collection<Comment> comments = new ArrayList<Comment>(); try { String query = "SELECT * FROM Comments WHERE Comments.userId = ?"; Connection conn = connectionManager.getConnection(); PreparedStatement stmt = conn.prepareStatement(query); stmt = conn.prepareStatement(query); stmt.setInt(1, user.getId()); ResultSet result = stmt.executeQuery(); while (result.next()) { Movie movie = JDBCMovieDAO.getInstance().getLightMovie(result.getInt("movie")); comments.add(new Comment(result.getString("text"), result.getInt("score"), user, result.getDate("date"), movie)); } connectionManager.closeConnection(conn); } catch (SQLException e) { e.printStackTrace(); //CONVERT TO DATAACCESSEXCEPTION } return comments; } } I don't know how to get the translator, and I don't want to extend any Spring class, because that is why I'm using the @Repository annotation.


  • MySQL not running on Ubuntu - Error 2002

    - by mgj
    Hi, I am a novice with MySQL databases. I am trying to run the MySQL server on Ubuntu 10.04. Through Synaptic Package Manager I have installed the mysql version mysql-client-5.1. I wonder how the database password was set for the mysql-client software that I installed in the above way; it would be nice if you could enlighten me on this. When I tried running the database, I encountered the error given below: mohnish@mohnish-laptop:/var/lib$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) mohnish@mohnish-laptop:/var/lib$ I referred to a similar question posted by another user, but didn't find a solution among the proposed answers. For instance, when I tried the solutions posted for the similar question I got the following: mohnish@mohnish-laptop:/var/lib$ service start mysqld start: unrecognized service mohnish@mohnish-laptop:/var/lib$ ps -u mysql ERROR: User name does not exist. (followed by the standard ps usage/help output) mohnish@mohnish-laptop:/var/lib$ which mysql /usr/bin/mysql mohnish@mohnish-laptop:/var/lib$ mysql ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2) I even tried referring to http://forums.mysql.com/read.php?11,27769,84713#msg-84713 but couldn't find anything useful. Please let me know how I could tackle this error. Thank you very much.


  • Security of deleting a MySQL row with jQuery $.post

    - by FFish
    I want to delete a row in my database and found an example of how to do this with jQuery's $.post(). Now I am wondering about security: can someone send a POST request to my delete-row.php script from another website? JS: function deleterow(id) { // alert(typeof(id)); // number if (confirm('Are you sure want to delete?')) { $.post('delete-row.php', {album_id:+id, ajax:'true'}, function() { $("#row_"+id).fadeOut("slow"); }); } } PHP: delete-row.php <?php require_once("../db.php"); mysql_connect(DB_SERVER, DB_USER, DB_PASSWORD) or die("could not connect to database " . mysql_error()); mysql_select_db(DB_NAME) or die("could not select database " . mysql_error()); if (isset($_POST['album_id'])) { $query = "DELETE FROM albums WHERE album_id = " . $_POST['album_id']; $result = mysql_query($query); if (!$result) die('Invalid query: ' . mysql_error()); echo "album deleted!"; } ?>


  • How should I design my MySQL table(s)?

    - by yaya3
    I built a really basic PHP/MySQL site for an architect that uses a single 'projects' table. The website showcases various projects that he has worked on. Each project contained one piece of text and one series of images. Original projects table (CREATE syntax): CREATE TABLE `projects` ( `project_id` int(11) NOT NULL auto_increment, `project_name` text, `project_text` text, `image_filenames` text, `image_folder` text, `project_pdf` text, PRIMARY KEY (`project_id`) ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1; The client now has the following requirements, and I'm not sure how to handle the expansion in my DB. My suspicion is that I will need an additional table. Each project now has 'pages'. Pages contain either... One image, One "piece" of text, or One image and one piece of text. Each page could use one of three layouts. As each project does not currently have more than 4 pieces of text (a very risky assumption) I have expanded the original table to accommodate everything. New projects table attempt (CREATE syntax): CREATE TABLE `projects` ( `project_id` int(11) NOT NULL AUTO_INCREMENT, `project_name` text, `project_pdf` text, `project_image_folder` text, `project_img_filenames` text, `pages_with_text` text, `pages_without_img` text, `pages_layout_type` text, `pages_title` text, `page_text_a` text, `page_text_b` text, `page_text_c` text, `page_text_d` text, PRIMARY KEY (`project_id`) ) ENGINE=MyISAM AUTO_INCREMENT=8 DEFAULT CHARSET=latin1; In trying to learn more about structuring MySQL tables I have just read an intro to normalization and A Simple Guide to Five Normal Forms in Relational Database Theory. I'm going to keep reading! Thanks in advance


  • Creating self-referential tables with polymorphism in SQLALchemy

    - by Jace
    I'm trying to create a db structure in which I have many types of content entities, of which one, a Comment, can be attached to any other. Consider the following: from datetime import datetime from sqlalchemy import create_engine from sqlalchemy import Column, ForeignKey from sqlalchemy import Unicode, Integer, DateTime from sqlalchemy.orm import relation, backref from sqlalchemy.ext.declarative import declarative_base Base = declarative_base() class Entity(Base): __tablename__ = 'entities' id = Column(Integer, primary_key=True) created_at = Column(DateTime, default=datetime.utcnow, nullable=False) edited_at = Column(DateTime, default=datetime.utcnow, onupdate=datetime.utcnow, nullable=False) type = Column(Unicode(20), nullable=False) __mapper_args__ = {'polymorphic_on': type} # <...insert some models based on Entity...> class Comment(Entity): __tablename__ = 'comments' __mapper_args__ = {'polymorphic_identity': u'comment'} id = Column(None, ForeignKey('entities.id'), primary_key=True) _idref = relation(Entity, foreign_keys=id, primaryjoin=id == Entity.id) attached_to_id = Column(Integer, ForeignKey('entities.id'), nullable=False) #attached_to = relation(Entity, remote_side=[Entity.id]) attached_to = relation(Entity, foreign_keys=attached_to_id, primaryjoin=attached_to_id == Entity.id, backref=backref('comments', cascade="all, delete-orphan")) text = Column(Unicode(255), nullable=False) engine = create_engine('sqlite://', echo=True) Base.metadata.bind = engine Base.metadata.create_all(engine) This seems about right, except SQLAlchemy doesn't like having two foreign keys pointing to the same parent. It says ArgumentError: Can't determine join between 'entities' and 'comments'; tables have more than one foreign key constraint relationship between them. Please specify the 'onclause' of this join explicitly. How do I specify onclause?


  • How can I solve this NHibernate querying problem in an n-tier architecture?

    - by Tyler Wright
    I've hit a wall trying to decouple NHibernate from my services layer. My architecture looks like this: web - services - repositories - nhibernate - db I want to be able to spawn NHibernate queries from my services layer, and possibly my web layer, without those layers knowing what ORM they are dealing with. Currently, I have a Find method on all of my repositories that takes in IList<object[]> criteria. This allows me to pass in a list of criteria, such as new object[] {"Username", usernameVariable}, from anywhere in my architecture. NHibernate takes this in, creates a new Criteria object and adds the passed-in criteria. This works fine for basic searches from my service layer, but I would like to be able to pass in a query object that my repository translates into an NHibernate Criteria. Really, I would love to implement something like what is described in this question: Is there value in abstracting nhibernate criterion. I'm just not finding any good resources on how to implement something like this. Is the method described in that question a good approach? If so, could anyone provide some pointers on how to implement such a solution?
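
    For illustration, a minimal sketch of an ORM-agnostic query object along the lines of the linked question: the service layer builds simple conditions and only the repository implementation knows about NHibernate. All type names here are assumptions for the sketch:

        public enum ComparisonOperator { Equal, Like, GreaterThan, LessThan }

        public class QueryCondition
        {
            public string PropertyName { get; set; }
            public ComparisonOperator Operator { get; set; }
            public object Value { get; set; }
        }

        public interface IUserRepository
        {
            IList<User> Find(IEnumerable<QueryCondition> conditions);
        }

        // service-layer usage: no NHibernate types involved
        var conditions = new[]
        {
            new QueryCondition { PropertyName = "Username",
                                 Operator = ComparisonOperator.Equal,
                                 Value = usernameVariable }
        };
        IList<User> users = repository.Find(conditions);

    Inside the NHibernate repository, each QueryCondition would be mapped to an ICriterion (Restrictions.Eq, Restrictions.Like, and so on) before being added to the ICriteria.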


  • How to handle large dataset with JPA (or at least with Hibernate)?

    - by Roman
    I need to make my web app work with really huge datasets. At the moment I get either an OutOfMemoryException or output that takes 1-2 minutes to generate. Let's keep it simple and suppose that we have 2 tables in the DB: Worker and WorkLog, with about 1000 rows in the first one and 10 000 000 rows in the second one. The latter table has several fields, including 'workerId' and 'hoursWorked', among others. What we need is to: count the total hours worked by each user; list the work periods for each user. The most straightforward approach (IMO) for each task in plain SQL is: 1) select Worker.name, sum(hoursWorked) from Worker, WorkLog where Worker.id = WorkLog.workerId group by Worker.name; //results of this query should be transformed to Multimap<Worker, Long> 2) select Worker.name, WorkLog.start, WorkLog.hoursWorked from Worker, WorkLog where Worker.id = WorkLog.workerId; //results of this query should be transformed to Multimap<Worker, Period> //if it were JDBC then it would be vital //to set resultSet.setFetchSize(someSmallNumber), ~100 So, I have two questions: how do I implement each of my approaches with JPA (or at least with Hibernate), and how would you handle this problem (with JPA or Hibernate, of course)?


  • Beginner's guide for Rails

    - by piemesons
    Hello friends, I need a book or tutorial for Rails that does not rely on scaffolding. All the books mentioned in the other questions build some depot application or similar using scaffold and then explain things. I believe that building a big depot app is worthless when you are not really learning anything. All of my friends are suggesting I go with the Pragmatic book. Look, I understand the book is good, but I'm not getting things properly. I get the logic, because I'm comfortable with PHP (Doctrine), ASP.NET, C and C++, so I can follow along, but I don't feel confident. I want another book. Can anybody suggest some other books? I'm saying this because it really feels good when you create a simple form, insert the values into the db, retrieve those values and, most importantly, can explain the whole logic of that small form application, instead of that colorful depot application where things are done with scaffold, you don't really get it, and you are confused about the real picture.


  • Facebook App : Problem with storing user info

    - by Jack
    I want to store basic user info, like name and proxied email, in my MySQL database. Here's my code: $user = $facebook->require_login($required_permissions = 'publish_stream','status_update','email'); $con = mysql_connect("xxxx","xxxxx","xxxxx"); if (!$con) { die('Could not connect: ' . mysql_error()); } mysql_select_db("xxx_userdb", $con); //Up to here it is working fine...no problem with the db connection $fql_firstname = "select first_name from user where uid = '$user'"; $fql_lastname = "select last_name from user where uid = '$user'"; $fql_email = "select proxied_email from user where uid = '$user'"; $fql_fname_result = $facebook->api_client->fql_query($fql_firstname); $fql_lname_result = $facebook->api_client->fql_query($fql_lastname); $fql_email_result = $facebook->api_client->fql_query($fql_email); echo "<pre>FQL Result:" . print_r($fql_fname_result,true) . "</pre>"; echo "<pre>FQL Result:" . print_r($fql_lname_result,true) . "</pre>"; echo "<pre>FQL Result:" . print_r($fql_email_result,true) . "</pre>"; Here's the output I get: FQL Result: FQL Result: FQL Result: What am I doing wrong?


  • TransactionScope Prematurely Completed

    - by Chris
    I have a block of code that runs within a TransactionScope, and within this block of code I make several calls to the DB. Selects, Updates, Creates, and Deletes, the whole gamut. When I execute my delete, I do so using an extension method on SqlCommand that will automatically resubmit the query if it deadlocks, as this query could potentially hit a deadlock. I believe the problem occurs when a deadlock is hit and the function tries to resubmit the query. This is the error I receive: The transaction associated with the current connection has completed but has not been disposed. The transaction must be disposed before the connection can be used to execute SQL statements. This is the simple code that executes the query (all of the code below executes within the using of the TransactionScope): using (sqlCommand.Connection = new SqlConnection(ConnectionStrings.App)) { sqlCommand.Connection.Open(); sqlCommand.ExecuteNonQueryWithDeadlockHandling(); } Here is the extension method that resubmits the deadlocked query: public static class SqlCommandExtender { private const int DEADLOCK_ERROR = 1205; private const int MAXIMUM_DEADLOCK_RETRIES = 5; private const int SLEEP_INCREMENT = 100; public static void ExecuteNonQueryWithDeadlockHandling(this SqlCommand sqlCommand) { int count = 0; SqlException deadlockException = null; do { if (count > 0) Thread.Sleep(count * SLEEP_INCREMENT); deadlockException = ExecuteNonQuery(sqlCommand); count++; } while (deadlockException != null && count < MAXIMUM_DEADLOCK_RETRIES); if (deadlockException != null) throw deadlockException; } private static SqlException ExecuteNonQuery(SqlCommand sqlCommand) { try { sqlCommand.ExecuteNonQuery(); } catch (SqlException exception) { if (exception.Number == DEADLOCK_ERROR) return exception; throw; } return null; } } The error occurs on the line that executes the nonquery: sqlCommand.ExecuteNonQuery();
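
    As a hedged sketch of an alternative shape, assuming the whole unit of work can be re-run: retry at the level of the TransactionScope itself, so a deadlocked attempt disposes its scope and the next attempt starts a fresh transaction instead of reusing one that has already been rolled back. DoWork is a stand-in for the existing block of selects/updates/deletes; the constants reuse the names from the extension class above:

        for (int attempt = 0; attempt < MAXIMUM_DEADLOCK_RETRIES; attempt++)
        {
            try
            {
                using (var scope = new TransactionScope())
                {
                    DoWork();            // the existing block of DB calls
                    scope.Complete();
                }
                break;                   // success, stop retrying
            }
            catch (SqlException ex)
            {
                if (ex.Number != DEADLOCK_ERROR) throw;
                Thread.Sleep((attempt + 1) * SLEEP_INCREMENT);   // back off, then retry
            }
        }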


  • Updating a specific key/value inside of an array field with MongoDB

    - by Jesta
    As a preface, I've been working with MongoDB for about a week now, so this may turn out to be a pretty simple answer. I have data already stored in my collection; we will call this collection content, as it contains articles, news, etc. Each of these articles contains another array called authors which has all of the author's information (Address, Phone, Title, etc). The Goal - I am trying to create a query that will update the author's address on every article that the specific author exists in, and only the specified author block (not others that exist within the array). Sort of a "global update" for a specific author that affects his/her information on every piece of content that exists. Here is an example of what the content with the author looks like. { "_id" : ObjectId("4c1a5a948ead0e4d09010000"), "authors" : [ { "user_id" : null, "slug" : "joe-somebody", "display_name" : "Joe Somebody", "display_title" : "Contributing Writer", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null }, { "user_id" : null, "slug" : "jane-somebody", "display_name" : "Jane Somebody", "display_title" : "Editor", "display_company_name" : null, "email" : null, "phone" : null, "fax" : null, "address" : null, "address2" : null, "city" : null, "state" : null, "zip" : null, "country" : null, "image" : null, "url" : null, "blurb" : null }, ], "tags" : [ "tag1", "tag2", "tag3" ], "title" : "Title of the Article" } I can find every article that this author has created by running the following command: db.content.find({authors: {$elemMatch: {slug: 'joe-somebody'}}}); So theoretically I should be able to update the author record for the slug joe-somebody but not jane-somebody (the 2nd author); I am just unsure exactly how you reach in and update every record for that author. I thought I was on the right track, and here's what I've tried: db.content.update( {authors: {$elemMatch: {slug: 'joe-somebody'} } }, {$set: {address: '1234 Avenue Rd.'} }, false, true ); I just believe there's something I am missing in the $set statement to specify the correct author and point inside of the correct array. Any ideas? Update: I've also tried this now: db.content.update( {authors: {$elemMatch: {slug: 'joe-somebody'} } }, {$set: {'authors.$.address': '1234 Avenue Rd.'} }, false, true );


  • iPhone OS: Why is my managedModelObject not complying with Key Value Coding?

    - by nickthedude
    Ok, so I'm trying to build a stat tracker for my app and I have built a data model object called StatTracker that keeps track of all the stuff I want it to. I can set and retrieve values using the selectors, but if I try to use KVC (i.e. setValue:forKey:) everything goes bad and says my StatTracker class is not KVC compliant: valueForUndefinedKey:]: the entity StatTracker is not key value coding-compliant for the key "timesLauched".' 2010-05-18 15:55:08.573 Here's the code that triggers it: NSArray *statTrackerArray = [[NSArray alloc] init]; statTrackerArray = [[CoreDataSingleton sharedCoreDataSingleton] getStatTracker]; NSNumber *number1 = [[NSNumber alloc] init]; number1 = [NSNumber numberWithInt:(1 + [[(StatTracker *)[statTrackerArray objectAtIndex:0] valueForKey:@"timesLauched"] intValue])]; [(StatTracker *)[statTrackerArray objectAtIndex:0] setValue:number1 forKey:@"timesLaunched" ]; NSError *error; if (![[[CoreDataSingleton sharedCoreDataSingleton] managedObjectContext] save:&error]) { NSLog(@"error writing to db"); } Not sure if this is enough code for you folks; let me know what you need if you do need more. This would be so sweet if I could use KVC, because I could then abstract all this stat tracking stuff into a single method call with a string argument for the value in question. At least that is what I hope to accomplish here. I'm actually now seeing the power of KVC; I'm just trying to figure out how to make it work. Thanks! Nick


  • ASP.NET DropDownList posting ""

    - by Daniel
    I am using ASP.NET Web Forms version 3.5 in VB. I have a DropDownList that is filled with a list of countries from a DB. The markup for the dropdown list is: <label class="ob_label"> <asp:DropDownList ID="lstCountry" runat="server" CssClass="ob_forminput"> </asp:DropDownList> Country*</label> And the code that fills the list is: Dim selectSQL As String = "exec dbo.*******************" ' Define the ADO.NET objects. Dim con As New SqlConnection(connectionString) Dim cmd As New SqlCommand(selectSQL, con) Dim reader As SqlDataReader ' Try to open database and read information. Try con.Open() reader = cmd.ExecuteReader() ' For each item, add the author name to the displayed ' list box text, and store the unique ID in the Value property. Do While reader.Read() Dim newItem As New ListItem() newItem.Text = reader("AllSites_Countries_Name") newItem.Value = reader("AllSites_Countries_Id") CType(LoginViewCart.FindControl("lstCountry"), DropDownList).Items.Add(newItem) Loop reader.Close() CType(LoginViewCart.FindControl("lstCountry"), DropDownList).SelectedValue = 182 Catch Err As Exception Response.Redirect("~/error-on-page/") MailSender.SendMailMessage("*********************", "", "", OrangeBoxSiteId.SiteName & " Error Catcher", "<p>Error in sub FillCountry</p><p>Error on page:" & HttpContext.Current.Request.Url.AbsoluteUri & "</p><p>Error details: " & Err.Message & "</p>") Response.Redirect("~/error-on-page/") Finally con.Close() End Try When the form is submitted, an error occurs which says that the string "" cannot be converted to the datatype Integer. For some reason the DropDownList is posting "" rather than the value for the selected country.


  • RavenDB Ids and ASP.NET MVC3 Routes

    - by goober
    Hey all, I'm just building a quick, simple site with MVC 3 RC2 and RavenDB to test some things out. I've been able to make a bunch of projects, but I'm curious as to how Html.ActionLink() handles a RavenDB ID. My example: I have a document called "reasons" (a reason for something, just text mostly), which has reason text and a list of links. I can add, remove, and do everything else fine via my repository. Below is the part of my Razor view that lists each reason in a bulleted list, with an Edit link as the first text: @foreach(var Reason in ViewBag.ReasonsList) { <li>@Html.ActionLink("Edit", "Reasons", "Edit", new { id = Reason.Id }, null) @Reason.ReasonText</li> <ul> @foreach (var reasonlink in Reason.ReasonLinks) { <li><a href="@reasonlink.URL">@reasonlink.URL</a></li> } </ul> } The Problem: This works fine, except for the edit link. While the values and code here appear to work directly (i.e. the link is firing directly), RavenDB saves my document's ID as "reasons/1". So, when the URL is built and the ID is appended, the resulting route is "http://localhost:4976/Reasons/Edit/reasons/2". The ID is appended correctly, but MVC is interpreting it as its own route. Any suggestions on how I might be able to get around this? Do I need to create a special route to handle it, or is there something else I can do?
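
    For illustration, a minimal sketch of a route with a catch-all parameter, which lets the slash inside a RavenDB id such as "reasons/2" stay in a single route value instead of being split into extra segments. The route name and the controller/action names are assumed from the question, and this would sit in RegisterRoutes ahead of the default route:

        routes.MapRoute(
            "ReasonsEdit",                          // route name (assumed)
            "Reasons/Edit/{*id}",                   // {*id} captures "reasons/2" as one value
            new { controller = "Reasons", action = "Edit" }
        );

    The Edit action would then receive the full string id ("reasons/2") as its parameter.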


  • How to have a where clause on an insert or an update in Linq to Sql?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL): INSERT INTO [MyTable] ([Name], [Value]) SELECT @name, @value WHERE NOT EXISTS(SELECT [Value] FROM [MyTable] WHERE [Value] = @value) The DB does not have a constraint on the field being checked, so in this specific case the check needs to be made manually. Also, there are many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the Value field. My first hunch is to do the following: using (TransactionScope scope = new TransactionScope()) { if (Context.MyTables.SingleOrDefault(t => t.Value == in.Value) != null) { MyLinqModels.MyTable t = new MyLinqModels.MyTable() { Name = in.Name, Value = in.Value }; // Do some stuff in the transaction scope.Complete(); } } This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls? Edit: I am running into a similar issue with an update: UPDATE [AnotherTable] SET [Code] = @code WHERE [ID] = @id AND [Code] IS NULL How would I do the same check with LINQ to SQL? I assume I need to do a get, then set all the values and submit, but what if someone updates [Code] to something other than null between the time I do the get and when the update executes? Same problem as the insert...
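
    As a hedged sketch only (newName and newValue are stand-ins for the incoming values, not names from the question): the duplicate check and the insert can be wrapped in a serializable transaction so no other writer can slip a matching row in between the check and SubmitChanges:

        var options = new TransactionOptions { IsolationLevel = IsolationLevel.Serializable };
        using (var scope = new TransactionScope(TransactionScopeOption.Required, options))
        {
            if (!Context.MyTables.Any(t => t.Value == newValue))
            {
                Context.MyTables.InsertOnSubmit(new MyLinqModels.MyTable
                {
                    Name = newName,
                    Value = newValue
                });
                Context.SubmitChanges();
            }
            scope.Complete();
        }

    The IsolationLevel here is the System.Transactions one. The same serializable-scope idea applies to the update case: read the row whose Code is null inside the scope, set it, and call SubmitChanges.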


  • How to output KML from GAE

    - by Niklas R
    Hi, I use KML for a Google map where entities have a geopt.db coordinate, and the soft memory limit was exceeded with 213.465 MB after servicing 1 requests total. The log says: /list.kml 200 13130ms 10211cpu_ms 4238api_cpu_ms The file list.kml, which outputs about 455.7 KB, is a template as follows: <?xml version="1.0" encoding="UTF-8"?><kml xmlns="http://www.opengis.net/kml/2.2" xmlns:gx="http://www.google.com/kml/ext/2.2" xmlns:kml="http://www.opengis.net/kml/2.2" xmlns:atom="http://www.w3.org/2005/Atom"> <Document>{% for a in list %} <Placemark> <name> </name> <description> <![CDATA[<a href="http://{{host}}/{{a.key.id}}"> {{ a.title }} </a> <br/>{{a.text}}]]> </description> <Style> <IconStyle> <Icon> <href>http://www.google.com/intl/en_us/mapfiles/ms/icons/green-dot.png</href> </Icon> </IconStyle> </Style> <Point> <coordinates> {{a.geopt.lon|floatformat:2}},{{a.geopt.lat|floatformat:2}} </coordinates> </Point> </Placemark> {% endfor %} </Document> </kml> Is there a memory leak in the template or in the Python that passes in the list variable? Can I improve things with a different template engine or a framework other than the default? Is KMZ compression a good idea in this case? Thanks in advance for any suggestion about where or how to change the code.

