Search Results

Search found 2636 results on 106 pages for 'transaction isolation'.

Page 91/106 | < Previous Page | 87 88 89 90 91 92 93 94 95 96 97 98  | Next Page >

  • Zend: Fetching row from session db table after generating session id

    - by Nux
    Hi, I'm trying to update the session table used by Zend_Session_SaveHandler_DbTable directly after authenticating the user and writing the session to the DB. But I can neither update nor fetch the newly inserted row, even though the session id I use to check (Zend_Session::getId()) is valid and the row is indeed inserted into the table. Upon fetching all session ids (on the same request) the one I newly inserted is missing from the results. It does appear in the results if I fetch it with something else. I've checked whether it is a problem with transactions and that does not seem to be the case - there is no active transaction when I'm fetching the results. I've also tried fetching a few seconds after writing using sleep(), which doesn't help.

        $auth->getStorage()->write($ident);
        //sleep(1);
        $update = $this->db->update('session',
            array('uid' => $ident->user_id),
            'id=' . $this->db->quote(Zend_Session::getId()));
        $qload = 'SELECT id FROM session';
        $load  = $this->db->fetchAll($qload);
        echo $qload;
        print_r($load);

    $update fails. $load doesn't contain the row that was written with $auth->getStorage()->write($ident). $qload does contain the correct query - copying it to somewhere else leads to the expected result, that is, the inserted row is included in the results. The database used is MySQL with InnoDB. If someone knows how to fix this directly (i.e. on the same request, not doing something like updating after redirecting to another page) without modifying Zend_Session_SaveHandler_DbTable: thank you very much!
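    Given the InnoDB backend, one plausible culprit is the connection's REPEATABLE READ snapshot: a SELECT sees the data as of the transaction's first read, so a row committed on another connection stays invisible until the current snapshot ends. A minimal sketch of that workaround - assuming the session write happens on a different connection and that dropping to READ COMMITTED is acceptable:

        // Hypothetical sketch: end the current snapshot, or relax the
        // isolation level, before re-reading the session table.
        $this->db->query('COMMIT'); // close any implicit transaction/snapshot
        $this->db->query('SET SESSION TRANSACTION ISOLATION LEVEL READ COMMITTED');
        $load = $this->db->fetchAll('SELECT id FROM session'); // fresh read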

    Read the article

  • ADO/SQL Server: What is the error code for "timeout expired"?

    - by Ian Boyd
    I'm trying to trap a "timeout expired" error from ADO. When a timeout happens, ADO returns:

        Number:      0x80040E31 (DB_E_ABORTLIMITREACHED in oledberr.h)
        SQLState:    HYT00
        NativeError: 0

    The NativeError of zero makes sense, since the timeout is not a function of the database engine (i.e. SQL Server), but of ADO's internal timeout mechanism. The Number (i.e. the COM HRESULT) looks useful, but the definition of DB_E_ABORTLIMITREACHED in oledberr.h says: "Execution stopped because a resource limit was reached. No results were returned." This error could apply to things besides "timeout expired" (some potentially server-side), such as a governor that limits CPU usage, I/O reads/writes, or network bandwidth and stops a query. The final useful piece is SQLState, which is a database-independent error code system. Unfortunately the only reference for SQLState error codes I can find has no mention of HYT00. What to do? Note: I can't trust 0x80040E31 (DB_E_ABORTLIMITREACHED) to mean "timeout expired", any more than I could trust 0x80004005 (E_UNSPECIFIED_ERROR) to mean "Transaction was deadlocked on lock resources with another process and has been chosen as the deadlock victim". My pseudo-question becomes: does anyone have documentation on what the SQLState "HYT00" means? And my real question still remains: how can I specifically trap a "timeout expired" exception thrown by ADO? Gotta love the questions where the developer is trying to "do the right thing", but nobody knows how to do the right thing. Also gotta love how googling for DB_E_ABORTLIMITREACHED puts this question at #9, with MSDN nowhere to be found. Update 3: from the OLE DB ICommand::Execute reference: "DB_E_ABORTLIMITREACHED - Execution has been aborted because a resource limit has been reached. For example, a query timed out. No results have been returned." "For example", meaning not an exhaustive list.
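    For what it's worth, HYT00 is the ODBC SQLSTATE for "timeout expired", and it is exposed on each error object. A hedged sketch of trapping it - written here against the ADO.NET OleDb provider, whose Errors collection mirrors classic ADO's (where ADODB.Error also carries a SQLState property):

        using System;
        using System.Data.OleDb;

        try
        {
            command.ExecuteNonQuery();
        }
        catch (OleDbException ex)
        {
            foreach (OleDbError error in ex.Errors)
            {
                // HYT00 is the SQLSTATE for "timeout expired"; checking it is
                // narrower than matching the ambiguous 0x80040E31 HRESULT.
                if (error.SQLState == "HYT00")
                    throw new TimeoutException("ADO command timeout expired", ex);
            }
            throw;
        }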

    Read the article

  • PHP OOP: Providing Domain Entities with "Identity"

    - by sunwukung
    Bit of an abstract problem here. I'm experimenting with the Domain Model pattern, and barring my other tussles with dependencies, I need some advice on generating identity for use in an Identity Map. In most examples of the Data Mapper pattern I've seen (including the one outlined in this book: http://apress.com/book/view/9781590599099), the user appears to manually set the identity for a given Domain Object using a setter:

        $UserMapper = new UserMapper;
        // returns a fully formed user object from record sets
        $User = $UserMapper->find(1);
        // returns an empty object with appropriate properties for completion
        $UserBlank = $UserMapper->get();
        $UserBlank->setId();
        $UserBlank->setOtherProperties();

    Now, I don't know if I'm reading the examples wrong, but in the first $User object, the $id property is retrieved from the data store (I'm assuming $id represents a row id). In the latter case, however, how can you set the $id for an object if it has not yet acquired one from the data store? The problem is generating a valid "identity" for the object so that it can be maintained via an Identity Map - so generating an arbitrary integer doesn't solve it. My current thinking is to nominate different fields for identity (i.e. email) and demand their presence when generating blank Domain Objects. Alternatively, demand all objects be fully formed, and use all properties as their identity... hardly efficient. (Or alternatively, dump the Domain Model concept and return to DBAL/DAO/Transaction Scripts... which is seeming increasingly elegant compared to the ORM implementations I've seen...)
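    One common way out - sketched here as an assumption, not as the book's approach - is to register objects with the Identity Map only once the mapper's INSERT has produced a database-generated id, so a blank object simply has no map entry yet:

        // Hypothetical sketch: the map is keyed by class + row id, and new
        // objects are registered only after INSERT yields an identity.
        class IdentityMap
        {
            private $map = array();

            public function add($object)
            {
                if ($object->getId() === null) {
                    throw new LogicException('Cannot map an object without identity');
                }
                $this->map[get_class($object) . '.' . $object->getId()] = $object;
            }

            public function get($class, $id)
            {
                $key = $class . '.' . $id;
                return isset($this->map[$key]) ? $this->map[$key] : null;
            }
        }

        // In the mapper: insert first, then register.
        // $id = $this->db->lastInsertId();
        // $user->setId($id);
        // $this->identityMap->add($user);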

    Read the article

  • Detecting branch reintegration or merge in pre-commit script

    - by Shawn Chin
    Within a pre-commit script, is it possible (and if so, how) to identify commits stemming from an svn merge? svnlook changed ... shows files that have changed, but does not differentiate between merges and manual edits. Ideally, I would also like to differentiate between a standard merge and a merge --reintegrate. Background: I'm exploring the possibility of using pre-commit hooks to enforce SVN usage policies for our project. One of the policies states that some directories (such as /trunk) should not be modified directly, and changed only through the reintegration of feature branches. The pre-commit script would therefore reject all changes made to these directories apart from branch reintegrations. Any ideas? Update: I've explored the svnlook command, and the closest I've got is to detect and parse changes to the svn:mergeinfo property of the directory (see the sketch below). This approach has some drawbacks: svnlook can flag up a change in properties, but not which property was changed (a diff against the proplist of the previous revision is required). By inspecting changes in svn:mergeinfo, it is possible to detect that svn merge was run. However, there is no way to determine whether the commit is purely a result of the merge; changes made manually after the merge will go undetected. (related post: Diff transaction tree against another path/revision)
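    As a rough illustration of the mergeinfo approach, a pre-commit hook can diff the property between HEAD and the incoming transaction. A sketch under the assumption that /trunk carries the mergeinfo; it inherits the same blind spot about extra manual edits bundled into the merge commit:

        #!/bin/sh
        # Hypothetical pre-commit fragment: reject direct /trunk changes
        # unless svn:mergeinfo on /trunk grew in this transaction.
        REPOS="$1"
        TXN="$2"

        OLD=$(svnlook propget "$REPOS" svn:mergeinfo trunk 2>/dev/null)
        NEW=$(svnlook propget -t "$TXN" "$REPOS" svn:mergeinfo trunk 2>/dev/null)

        # Loose match: does this transaction touch anything under trunk/?
        if svnlook changed -t "$TXN" "$REPOS" | grep -q 'trunk/'; then
            if [ "$OLD" = "$NEW" ]; then
                echo "Direct commits to /trunk are not allowed; reintegrate a branch instead." >&2
                exit 1
            fi
        fi
        exit 0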

    Read the article

  • At what line in the following code should I be commiting my UnitOfWork ?

    - by Pure.Krome
    Hi folks, I have the following code, which is in a transaction. I'm not sure where/when I should be committing my unit of work. If someone knows where, can they please explain WHY they have said where? (I'm trying to understand the pattern through example(s), as opposed to just getting my code to work.) Here's what I've got:

        using (TransactionScope transactionScope = new TransactionScope(
            TransactionScopeOption.RequiresNew,
            new TransactionOptions { IsolationLevel = IsolationLevel.ReadUncommitted }))
        {
            _logEntryRepository.InsertOrUpdate(logEntry);
            //_unitOfWork.Commit(); // Here, commit #1 ?

            // Now, if this log entry was a NewConnection or a LostConnection,
            // then we need to make sure we update the ConnectedClients.
            if (logEntry.EventType == EventType.NewConnection)
            {
                _connectedClientRepository.Insert(new ConnectedClient { LogEntryId = logEntry.LogEntryId });
                //_unitOfWork.Commit(); // Here, commit #2 ?
            }

            // A (PB) BanKick does _NOT_ register a lost connection .. so we need
            // to make sure we handle those scenarios as a LostConnection.
            if (logEntry.EventType == EventType.LostConnection || logEntry.EventType == EventType.BanKick)
            {
                _connectedClientRepository.Delete(logEntry.ClientName, logEntry.ClientIpAndPort);
                //_unitOfWork.Commit(); // Here, commit #3 ?
            }

            _unitOfWork.Commit(); // Here, commit #4 ?
            transactionScope.Complete();
        }

    Cheers :)

    Read the article

  • PHP: Aggregate Model Classes or Uber Model Classes?

    - by sunwukung
    In many of the discussions regarding the M in MVC (sidestepping ORM controversies for a moment), I commonly see Model classes described as object representations of table data (be that Active Record, Table Gateway, Row Gateway or Domain Model/Mapper). Martin Fowler warns against developing an anemic domain model, i.e. a class that is nothing more than a wrapper for CRUD functionality. I've been working on an MVC application for a couple of months now. The DBAL in the application started out simple (on account of my understanding - oh, the benefits of hindsight), and is organised so that Controllers invoke business logic classes, which in turn access the database via DAO/Transaction Scripts pertinent to the task at hand. There are a few "Entity" classes that aggregate these DAO objects to provide a convenient CRUD wrapper, but which also embody some of the "behaviour" of that domain concept (for example, a User - since it's easy to isolate). Taking a look at some of the code, and thinking about refactoring it into a Rich Domain Model, it occurred to me that were I to try and wrap the CRUD routines and behaviour of, say, a Company into a single "Model" class, that would be a sizeable class. So, my question is this: do Models represent domain objects, business logic, service layers, or all of the above combined? How do you go about defining the responsibilities of these components?

    Read the article

  • Query Level 2 Caching throwing ClassCastException

    - by Sameer Malhotra
    Hi, I am using JPA and Hibernate for the database. I have configured a second-level cache (EHCache) and the query cache, but just to make sure that caching is working I was trying to get the statistics, which is throwing a ClassCastException. Any help will be highly appreciated. My main goal is to see all the objects which have been cached, to make sure that the caching is working properly. Here is the code:

        public List<CodeValue> findByCodetype(String propertyName) {
            try {
                final String queryString = "select model from CodeValue model where model.codetype"
                    + "= :propertyValue" + " order by model.code";
                Query query = em.createQuery(queryString);
                query.setHint("org.hibernate.cacheable", true);
                query.setHint("org.hibernate.cacheRegion", "query.findByCodetype");
                query.setParameter("propertyValue", propertyName);
                List resultList = query.getResultList();

                org.hibernate.Session session = (Session) em.getDelegate();
                SessionFactory sessionFactory = session.getSessionFactory();
                Map cacheEntries = sessionFactory.getStatistics()
                    .getSecondLevelCacheStatistics("query.findByCodetype")
                    .getEntries();
                logger.info("The statistics are: " + cacheEntries);
                return resultList;
            } catch (RuntimeException re) {
                logger.error("findByCodetype failed in trauma patient", re);
                throw re;
            }
        }

    The exception occurs right when I am trying to print the statistics. Here is the trace:

        [6/7/10 19:23:17:059 GMT] 00000034 SystemOut O java.lang.ClassCastException: org.hibernate.cache.QueryKey incompatible with org.hibernate.cache.CacheKey
            at org.hibernate.stat.SecondLevelCacheStatistics.getEntries(SecondLevelCacheStatistics.java:51)
            at com.idph.trauma.registry.service.TraumaPatientDAO.findByCodetype(TraumaPatientDAO.java:439)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:64)
            at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
            at java.lang.reflect.Method.invoke(Method.java:615)
            at org.springframework.aop.support.AopUtils.invokeJoinpointUsingReflection(AopUtils.java:307)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.invokeJoinpoint(ReflectiveMethodInvocation.java:182)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:149)
            at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:106)
            at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:171)
            at org.springframework.aop.framework.JdkDynamicAopProxy.invoke(JdkDynamicAopProxy.java:204)
            at $Proxy209.findByCodetype(Unknown Source)

    Do you know what's going on?
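    The mismatch here is that "query.findByCodetype" is a query cache region, whose entries are keyed by QueryKey, while SecondLevelCacheStatistics.getEntries() assumes entity regions keyed by CacheKey - hence the cast failure. A hedged sketch of verifying the query cache without getEntries(), using only the aggregate counters (assuming the Hibernate 3.x statistics API):

        // Sketch: read aggregate statistics instead of iterating raw entries.
        Statistics stats = sessionFactory.getStatistics();
        stats.setStatisticsEnabled(true);

        QueryStatistics queryStats = stats.getQueryStatistics(queryString);
        logger.info("cache hits:   " + queryStats.getCacheHitCount());
        logger.info("cache misses: " + queryStats.getCacheMissCount());
        logger.info("cache puts:   " + queryStats.getCachePutCount());

        // Region-level counters also avoid getEntries():
        SecondLevelCacheStatistics region =
            stats.getSecondLevelCacheStatistics("query.findByCodetype");
        logger.info("in memory: " + region.getElementCountInMemory());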

    Read the article

  • Action Filter Dependency Injection in ASP.NET MVC 3 RC2 with StructureMap

    - by Ben
    Hi, I've been playing with the DI support in ASP.NET MVC 3 RC2. I have implemented session-per-request for NHibernate and need to inject ISession into my "Unit of Work" action filter. If I reference the StructureMap container directly (ObjectFactory.GetInstance) or use DependencyResolver to get my session instance, everything works fine:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

    However, if I attempt to use my StructureMap filter provider (inherits FilterAttributeFilterProvider) I have problems committing the NHibernate transaction at the end of the request. It is as if ISession objects are being shared between requests. I am seeing this frequently, as all my images are loaded via an MVC controller, so I get 20 or so NHibernate sessions created on a normal page load. I added the following to my action filter:

        ISession Session
        {
            get { return DependencyResolver.Current.GetService<ISession>(); }
        }

        public ISession SessionTest { get; set; }

        public override void OnResultExecuted(System.Web.Mvc.ResultExecutedContext filterContext)
        {
            bool sessionsMatch = (this.Session == this.SessionTest);
        }

    SessionTest is injected using the StructureMap filter provider. I found that on a page with 20 images, "sessionsMatch" was false for 2-3 of the requests. My StructureMap configuration for session management is as follows:

        For<ISessionFactory>().Singleton()
            .Use(new NHibernateSessionFactory().GetSessionFactory());
        For<ISession>().HttpContextScoped()
            .Use(ctx => ctx.GetInstance<ISessionFactory>().OpenSession());

    In global.asax I call the following at the end of each request:

        public Global()
        {
            EndRequest += (sender, e) =>
            {
                ObjectFactory.ReleaseAndDisposeAllHttpScopedObjects();
            };
        }

    Is this configuration thread safe? Previously I was injecting dependencies into the same filter using a custom IActionInvoker. This worked fine until MVC 3 RC2, when I started experiencing the problem above, which is why I thought I would try using a filter provider instead. Any help would be appreciated. Ben. P.S. I'm using NHibernate 3 RC and the latest version of StructureMap
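    A plausible explanation - offered as an assumption, not a diagnosis - is that MVC 3 caches filter attribute instances across requests, so anything injected into a filter's properties at creation time can outlive the request that created it. A sketch that sidesteps the caching by resolving the session at use time, inside the filter method:

        // Hypothetical sketch: resolve per-request dependencies lazily so a
        // cached filter instance never holds a stale ISession.
        public class UnitOfWorkAttribute : ActionFilterAttribute
        {
            private ISession Session
            {
                get { return DependencyResolver.Current.GetService<ISession>(); }
            }

            public override void OnResultExecuted(ResultExecutedContext filterContext)
            {
                // Commit using the session that belongs to *this* request.
                var tx = Session.Transaction;
                if (tx != null && tx.IsActive)
                    tx.Commit();
            }
        }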

    Read the article

  • Turning a JSON list into a POJO

    - by Josh L
    I'm having trouble getting this bit of JSON into a POJO. I'm using Jackson configured like this:

        protected ThreadLocal<ObjectMapper> jparser = new ThreadLocal<ObjectMapper>();

        public void receive(Object object) {
            try {
                if (object instanceof String && ((String) object).length() != 0) {
                    ObjectDefinition t = null;
                    if (parserChoice == 0) {
                        if (jparser.get() == null) {
                            jparser.set(new ObjectMapper());
                        }
                        t = jparser.get().readValue((String) object, ObjectDefinition.class);
                    }
                    Object key = t.getKey();
                    if (key == null)
                        return;
                    transaction.put(key, t);
                }
            } catch (Exception e) {
                e.printStackTrace();
            }
        }

    Here's the JSON that needs to be turned into a POJO:

        {
          "id": "exampleID1",
          "entities": {
            "tags": [
              { "text": "textexample1", "indices": [2, 14] },
              { "text": "textexample2", "indices": [31, 36] },
              { "text": "textexample3", "indices": [37, 43] }
            ]
          }
        }

    And lastly, here's what I currently have for the Java class:

        protected Entities entities;

        @JsonIgnoreProperties(ignoreUnknown = true)
        protected class Entities {
            public Entities() {}

            protected Tags tags;

            @JsonIgnoreProperties(ignoreUnknown = true)
            protected class Tags {
                public Tags() {}

                protected String text;

                public String getText() { return text; }
                public void setText(String text) { this.text = text; }
            }

            public Tags getTags() { return tags; }
            public void setTags(Tags tags) { this.tags = tags; }
        }

        // Getters & setters
        ...

    I've been able to translate the simpler objects into POJOs, but the list has me stumped. Any help is appreciated. Thanks!
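    Since "tags" is a JSON array, Jackson needs a collection-typed target rather than a single nested object; also, Jackson cannot construct non-static inner classes, so the nested types should be static. A hedged sketch of how the mapping could look (field names are taken from the JSON; the class names are assumptions):

        import java.util.List;
        import org.codehaus.jackson.annotate.JsonIgnoreProperties;

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Entities {
            private List<Tag> tags;                 // a JSON array binds to a List

            public List<Tag> getTags() { return tags; }
            public void setTags(List<Tag> tags) { this.tags = tags; }
        }

        @JsonIgnoreProperties(ignoreUnknown = true)
        public static class Tag {
            private String text;
            private List<Integer> indices;          // [2, 14] binds to List<Integer>

            public String getText() { return text; }
            public void setText(String text) { this.text = text; }
            public List<Integer> getIndices() { return indices; }
            public void setIndices(List<Integer> indices) { this.indices = indices; }
        }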

    Read the article

  • Using Rails, problem testing has_many relationship

    - by east
    The summary is that I have code that works when tested manually, but isn't doing what I would expect when run under an automated test. Here are the details. I have two models, Payment and PaymentTransaction:

        class Payment
          ...
          has_many :transactions, :class_name => 'PaymentTransaction'

        class PaymentTransaction
          ...
          belongs_to :payment

    The PaymentTransaction is only created in a Payment model method, like so:

        def pay_up
          ...
          transactions.create!(params...)
          ...
        end

    I've manually tested this code, inspected the database, and everything works well. The failing automated test looks like this:

        def test_pay_up
          purchase = Payment.new(...)
          assert purchase.save
          assert_equal purchase.state, :initialized.to_s
          assert purchase.pay_up # this should create a new PaymentTransaction...
          assert_equal purchase.state, :succeeded.to_s
          assert_equal purchase.transactions.count, 1 # FAILS HERE; transactions is an empty array
        end

    If I step through the code, it's clear that the PaymentTransaction is getting created correctly (though I can't see it in the database, because everything is in a testing transaction). What I can't figure out is why transactions returns an empty array in the test when I know a valid PaymentTransaction is getting created. Anybody have some suggestions? Thanks in advance, east

    Read the article

  • Error handling in PHP

    - by Industrial
    Hi guys, we're building a PHP app based on good old MVC (the CodeIgniter framework) and have run into trouble with a massive chained action that consists of multiple model calls, which together form part of a big database transaction. We want to be able to perform a list of actions and get a status report for each one back from the function, whatever the outcome is. Our first idea was to utilize PHP 5 exceptions, but since we also need status messages that don't break the execution of the script, this is the solution we came up with. It goes a little something like this:

        $sku = $this->addSku($name);
        if ($sku === false) {
            $status[] = 'Something gone terribly wrong';
            $this->db->trans_rollback();
            return $status;
        }

        $image = $this->addImage($filename);
        if ($image === false) {
            $status[] = 'Image could not be uploaded, check filesize';
            $this->db->trans_rollback();
            return $status;
        }

    Our controller looks like this:

        $var = $this->products->addProductGroup($array);
        if (is_array($var)) {
            foreach ($var as $error) {
                echo $error . '<br />';
            }
        }

    It appears to be a very fragile solution for what we need: it's neither scalable nor effective compared to pure PHP exceptions, for instance. Is this really the way this kind of thing is generally handled in MVC-based apps? Thanks!
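    For comparison, a sketch of the exception-based variant. This is an assumption about shape, not the original models' behaviour: addSku()/addImage() are presumed to throw on failure rather than return false, and the transaction calls use CodeIgniter's trans_* API:

        // Hypothetical sketch: fail fast with exceptions, roll back once,
        // and return the accumulated status messages either way.
        $status = array();
        try {
            $this->db->trans_begin();
            $status[] = $this->addSku($name);       // throws RuntimeException on failure
            $status[] = $this->addImage($filename); // throws RuntimeException on failure
            $this->db->trans_commit();
        } catch (RuntimeException $e) {
            $this->db->trans_rollback();
            $status[] = $e->getMessage();
        }
        return $status;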

    Read the article

  • How to design a RESTful collection resource?

    - by Suresh Kumar
    I am trying to design a "collection of items" resource. I need to support the following operations: create the collection, remove the collection, add a single item to the collection, add multiple items to the collection, remove a single item from the collection, and remove multiple items from the collection. This is as far as I have gone. Create the collection:

        ==> POST /service
            Host: www.myserver.com
            Content-Type: application/xml

            <collection name="items">
              <item href="item1"/>
              <item href="item2"/>
              <item href="item3"/>
            </collection>

        <== 201 Created
            Location: http://myserver.com/service/items
            Content-Type: application/xml
            ...

    Remove the collection:

        ==> DELETE /service/items
        <== 200 OK

    Remove a single item from the collection:

        ==> DELETE /service/items/item1
        <== 200 OK

    However, I am finding the other operations a bit tricky to support, i.e. what methods can I use to add single or multiple items to the collection (PUT doesn't seem right here, as per the HTTP 1.1 RFC), or to remove multiple items from the collection in one transaction (DELETE doesn't seem right here either)?
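    One conventional sketch of the missing operations - an assumption on my part, written in the same notation as above: POST new items to the collection URI, and model bulk removal as POSTing to a "deletions" sub-resource, since DELETE has no defined semantics for a request body. Add one or more items:

        ==> POST /service/items
            Content-Type: application/xml

            <items>
              <item href="item4"/>
              <item href="item5"/>
            </items>

        <== 201 Created
            Location: http://myserver.com/service/items/item4

    Remove multiple items in one transaction (the "deletions" resource name is hypothetical):

        ==> POST /service/items/deletions
            Content-Type: application/xml

            <items>
              <item href="item1"/>
              <item href="item2"/>
            </items>

        <== 200 OK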

    Read the article

  • Why do System.IO.Log SequenceNumbers have variable length?

    - by Doug McClean
    I'm trying to use the System.IO.Log features to build a recoverable transaction system. I understand it to be implemented on top of the Common Log File System (CLFS). The usual ARIES approach to write-ahead logging involves persisting log record sequence numbers in places other than the log (for example, in the header of the database page modified by the logged action). Interestingly, the documentation for CLFS says that such sequence numbers are always 64-bit integers. Confusingly, however, the .NET wrapper around those SequenceNumbers can be constructed from a byte[] but not from a UInt64. Its value can also be read as a byte[], but not as a UInt64. Inspecting the implementation of SequenceNumber.GetBytes() reveals that it can in fact return arrays of either 8 or 16 bytes. This raises a few questions: Why do the .NET sequence numbers differ in size from the CLFS sequence numbers? Why are the .NET sequence numbers variable in length? Why would you need 128 bits to represent such a sequence number? It seems like you would truncate the log well before using up a 64-bit address space (16 exbibytes, or around 10^19 bytes - more if you address longer words). If log sequence numbers are going to be represented as 128-bit integers, why not provide a way to serialize/deserialize them as pairs of UInt64s, instead of rather pointlessly incurring heap allocations for short-lived new byte[]s every time you need to write/read one? Alternatively, why bother making SequenceNumber a value type at all? It seems an odd tradeoff to double the storage overhead of log sequence numbers just so you can have an untruncated log longer than a million terabytes, so I feel like I'm missing something here - or maybe several things. I'd much appreciate it if someone in the know could set me straight.

    Read the article

  • When should I use indexed arrays of OpenGL vertices?

    - by Tartley
    I'm trying to get a clear idea of when I should be using indexed arrays of OpenGL vertices, drawn with gl[Multi]DrawElements and the like, versus when I should simply use contiguous arrays of vertices, drawn with gl[Multi]DrawArrays. (Update: The consensus in the replies I got is that one should always be using indexed vertices.) I have gone back and forth on this issue several times, so I'm going to outline my current understanding, in the hope that someone can either tell me I'm now finally more or less correct, or else point out where my remaining misunderstandings are. Specifically, I have three conclusions, in bold. Please correct them if they are wrong. One simple case is if my geometry consists of meshes forming curved surfaces. In this case, the vertices in the middle of the mesh will have identical attributes (position, normal, color, texture coord, etc.) for every triangle which uses the vertex. This leads me to conclude that: 1. For geometry with few seams, indexed arrays are a big win. Follow rule 1 always, except: for geometry that is very 'blocky', in which every edge represents a seam, the benefit of indexed arrays is less obvious. Take a simple cube as an example: although each vertex is used in three different faces, we can't share vertices between them, because for a single vertex the surface normals (and possibly other things, like color and texture co-ords) will differ on each face. Hence we need to explicitly introduce redundant vertex positions into our array, so that the same position can be used several times with different normals, etc. This means that indexed arrays are of less use. e.g. when rendering a single face of a cube:

        0   1
        o---o
        |\  |
        | \ |
        |  \|
        o---o
        3   2

    (This can be considered in isolation, because the seams between this face and all adjacent faces mean that none of these vertices can be shared between faces.) If rendering using GL_TRIANGLE_FAN (or _STRIP), then each face of the cube can be rendered thus:

        verts  = [v0, v1, v2, v3]
        colors = [c0, c0, c0, c0]
        normal = [n0, n0, n0, n0]

    Adding indices does not allow us to simplify this. From this I conclude that: 2. When rendering geometry which is all seams or mostly seams, when using GL_TRIANGLE_STRIP or _FAN, I should never use indexed arrays, and should instead always use gl[Multi]DrawArrays. (Update: Replies indicate that this conclusion is wrong. Even though indices don't allow us to reduce the size of the arrays here, they should still be used because of other performance benefits, as discussed in the comments.) The only exception to rule 2 is: when using GL_TRIANGLES (instead of strips or fans), half of the vertices can still be re-used twice, with identical normals and colors, etc., because each cube face is rendered as two separate triangles. Again, for the same single cube face:

        0   1
        o---o
        |\  |
        | \ |
        |  \|
        o---o
        3   2

    Without indices, using GL_TRIANGLES, the arrays would be something like:

        verts   = [v0, v1, v2, v2, v3, v0]
        normals = [n0, n0, n0, n0, n0, n0]
        colors  = [c0, c0, c0, c0, c0, c0]

    Since a vertex and a normal are often 3 floats each, and a color is often 3 bytes, that gives, for each cube face, about:

        verts   = 6 * 3 floats = 18 floats
        normals = 6 * 3 floats = 18 floats
        colors  = 6 * 3 bytes  = 18 bytes

    i.e. 36 floats and 18 bytes per cube face. (I understand the number of bytes might change if different types are used; the exact figures are just for illustration.) With indices, we can simplify this a little, giving:

        verts   = [v0, v1, v2, v3]    (4 * 3 = 12 floats)
        normals = [n0, n0, n0, n0]    (4 * 3 = 12 floats)
        colors  = [c0, c0, c0, c0]    (4 * 3 = 12 bytes)
        indices = [0, 1, 2, 2, 3, 0]  (6 shorts)

    i.e. 24 floats + 12 bytes, and maybe 6 shorts, per cube face. See how in the latter case, vertices 0 and 2 are used twice, but are only represented once in each of the verts, normals and colors arrays. This sounds like a small win for using indices, even in the extreme case of every single geometry edge being a seam. This leads me to conclude that: 3. When using GL_TRIANGLES, one should always use indexed arrays, even for geometry which is all seams. Please correct my conclusions in bold if they are wrong.

    Read the article

  • How to have a where clause on an insert or an update in Linq to Sql?

    - by Kelsey
    I am trying to convert the following stored proc to a LINQ to SQL call (this is a simplified version of the SQL):

        INSERT INTO [MyTable] ([Name], [Value])
        SELECT @name, @value
        WHERE NOT EXISTS (SELECT [Value] FROM [MyTable] WHERE [Value] = @value)

    The DB does not have a constraint on the field that is being checked, so in this specific case the check needs to be made manually. Also, there are many items constantly being inserted, so I need to make sure that when this specific insert happens there is no dupe of the value field. My first hunch is to do the following:

        using (TransactionScope scope = new TransactionScope())
        {
            if (Context.MyTables.SingleOrDefault(t => t.Value == in.Value) == null)
            {
                MyLinqModels.MyTable t = new MyLinqModels.MyTable()
                {
                    Name = in.Name,
                    Value = in.Value
                };
                // Do some stuff in the transaction
                scope.Complete();
            }
        }

    This is the first time I have really run into this scenario, so I want to make sure I am going about it the right way. Does this seem correct, or can anyone suggest a better way of going about it without having two separate calls? Edit: I am running into a similar issue with an update:

        UPDATE [AnotherTable]
        SET [Code] = @code
        WHERE [ID] = @id AND [Code] IS NULL

    How would I do the same check with LINQ to SQL? I assume I need to do a get, then set all the values and submit, but what if someone updates [Code] to something other than null between the time I do the get and when the update executes? Same problem as the insert...
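    If the set-based guard has to stay atomic on the server, one hedged option is to keep it as SQL and issue it through DataContext.ExecuteCommand. A sketch - table and column names are assumed from the question, and the returned row count tells you whether the guarded statement actually fired:

        // Sketch: push the WHERE-guarded statements to the server in one round trip.
        int inserted = Context.ExecuteCommand(
            @"INSERT INTO [MyTable] ([Name], [Value])
              SELECT {0}, {1}
              WHERE NOT EXISTS (SELECT 1 FROM [MyTable] WHERE [Value] = {1})",
            name, value);

        int updated = Context.ExecuteCommand(
            @"UPDATE [AnotherTable]
              SET [Code] = {0}
              WHERE [ID] = {1} AND [Code] IS NULL",
            code, id);

        bool codeWasClaimedElsewhere = (updated == 0); // someone else set it first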

    Read the article

  • How do i pass arbitary date format from C# to sql backend

    - by Jims
    I have a datetime field for the transaction date in the back end, so I am passing that date from the C#.NET front end in the format below:

        2011-01-01 12:17:51.967

    To do this I have written, in the presentation layer:

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);
        PropertyClass prp = new PropertyClass();
        prp.TransDate = Convert.ToDateTime(date);

    The PropertyClass structure:

        public class PropertyClass
        {
            private DateTime transdate;
            public DateTime TransDate
            {
                get { return transdate; }
                set { transdate = value; }
            }
        }

    From the DAL layer I pass the transaction date like this:

        Cmd.Parameters.AddWithValue("@TranSactionDate", SqlDbType.DateTime).Value = propertyobj.TransDate;

    While debugging, the presentation-layer line

        string date = DateTime.Now.ToString("yyyy-MM-dd HH:mm:ss.fff", CultureInfo.InvariantCulture);

    gives the correct expected date format, but when the debugger reaches

        prp.TransDate = Convert.ToDateTime(date);

    the date format changes back to 1/1/2011. But my back-end SQL datetime field wants the parameter in the 2011-01-01 12:17:51.967 format, otherwise it throws an "invalid date format" exception. Note: while passing the date as a string without converting to DateTime, I get exceptions like:

        System.Data.SqlTypes.SqlTypeException: SqlDateTime overflow. Must be between 1/1/1753 12:00:00 AM and 12/31/9999 11:59:59 PM.
           at System.Data.SqlTypes.SqlDateTime.FromTimeSpan(TimeSpan value)
           at System.Data.SqlTypes.SqlDateTime.FromDateTime(DateTime value)
           at System.Data.SqlTypes.SqlDateTime..ctor(DateTime value)
           at System.Data.SqlClient.MetaType.FromDateTime(DateTime dateTime, Byte cb)
           at System.Data.SqlClient.TdsParser.WriteValue(Object value, MetaType type, Byte scale, Int32 actualLength, Int32 encodingByteSize, Int32 offset, TdsParserStateObject stateObj)
           at System.Data.SqlClient.TdsParser.TdsExecuteRPC(_SqlRPC[] rpcArray, Int32 timeout, Boolean inSchema, SqlNotificationRequest notificationRequest, TdsParserStateObject stateObj, Boolean isCommandProc)
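    It may help to note that a .NET DateTime has no intrinsic format - "1/1/2011" is only how the debugger displays it - so a sketch like the following (an assumption about the intended fix) passes the value straight through and lets ADO.NET handle the wire format:

        // Sketch: keep DateTime as DateTime end to end; formatting only matters
        // when displaying, never when parameterizing.
        PropertyClass prp = new PropertyClass();
        prp.TransDate = DateTime.Now;

        // Note: AddWithValue's second argument is the *value*, not the type.
        // To declare the SqlDbType explicitly, use Add instead:
        Cmd.Parameters.Add("@TransactionDate", SqlDbType.DateTime).Value = prp.TransDate;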

    Read the article

  • ajax to populate an input type text

    - by kawtousse
    Hi, I have an input type="text" that I want to populate with a value from the database using Ajax. First I define my text zone like the following:

        <td><input type="text" id="st" value="" name="stname" onclick="donnom();" /></td>

    In JavaScript I do the following:

        xhr5.onreadystatechange = function() {
            if (xhr5.readyState == 4 && xhr5.status == 200) {
                selects5 = xhr5.responseText;
                // use innerHTML to add the options to the list
                document.getElementById('st').innerHTML = selects5;
            }
        };
        xhr5.open("POST", "ajaxIDentifier5.jsp", true);
        xhr5.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
        id = document.getElementById(idIdden).value;
        xhr5.send("id=" + id);

    In ajaxIDentifier5.jsp I put this code:

        <%
        String id = request.getParameter("id");
        System.out.println("idDailyTimeSheet ajaxIDentifier5 as is:" + id);
        Session s = null;
        Transaction tx;
        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("select from Dailytimesheet dailytimesheet"
                + " where dailytimesheet.IdDailyTimeSheet=" + id + " ");
            for (Iterator it = query.iterate(); it.hasNext();) {
                if (it.hasNext()) {
                    Dailytimesheet object = (Dailytimesheet) it.next();
                    out.print("<input type=\"text\" id=\"st1\" value=\"" + object.getTimeFrom()
                        + "\" name=\"starting\" onclick=\"donnom()\" ></input>");
                }
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        }
        %>

    I want to get only the value in the input type text populated from the database, because after that I will be able to change it. Thanks for help.
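    One thing worth noting (my assumption about the failure mode): an input element has no inner HTML to replace, so assigning to innerHTML does nothing visible. A sketch where the JSP returns just the raw value and the client assigns it to the input's value property:

        // Client side: assign the plain response to the input's value.
        xhr5.onreadystatechange = function() {
            if (xhr5.readyState == 4 && xhr5.status == 200) {
                document.getElementById('st').value = xhr5.responseText;
            }
        };

        // Server side (ajaxIDentifier5.jsp), printing only the bare value:
        // out.print(object.getTimeFrom());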

    Read the article

  • delete row from result set in web sql with javascript

    - by Kaijin
    I understand that the result set from Web SQL isn't quite an array - more of an object? I'm cycling through a result set, and to speed things up I'd like to remove a row once it's been found. I've tried "delete" and "splice"; the former does nothing and the latter throws an error. Here's a piece of what I'm trying to do - notice the delete in the inner loop:

        function selectFromReverse(reverseRay, suggRay) {
            var reverseString = reverseRay.toString();
            db.transaction(function (tx) {
                tx.executeSql('SELECT votecount, comboid FROM counterCombos WHERE comboid IN (' + reverseString + ') AND votecount>0', [],
                    function (tx, results) {
                        processSelectFromReverse(results, suggRay);
                    });
            }, function () { onError });
        }

        function processSelectFromReverse(results, suggRay) {
            var i = suggRay.length;
            while (i--) {
                var j = results.rows.length;
                var found = 0;
                while (j--) {
                    console.log('searching');
                    if (suggRay[i].reverse == results.rows.item(j).comboid) {
                        delete results.rows.item(j);
                        console.log('found');
                        found++;
                        break;
                    }
                }
                if (found == 0) {
                    console.log('lost');
                }
            }
        }
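    The rows object (SQLResultSetRowList) is read-only, which is consistent with delete silently failing and splice not existing on it. A sketch of the usual workaround - copy the rows into a real array first (names here match the question's code):

        function processSelectFromReverse(results, suggRay) {
            // Copy the read-only row list into a plain, mutable array.
            var rows = [];
            for (var k = 0; k < results.rows.length; k++) {
                rows.push(results.rows.item(k));
            }

            var i = suggRay.length;
            while (i--) {
                var found = false;
                for (var j = rows.length - 1; j >= 0; j--) {
                    if (suggRay[i].reverse == rows[j].comboid) {
                        rows.splice(j, 1); // now splice works; shrinks later scans
                        found = true;
                        break;
                    }
                }
                if (!found) {
                    console.log('lost');
                }
            }
        }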

    Read the article

  • I find a problem with sending receiving parameter

    - by kawtousse
    How do I get the XML translated into an HTML drop-down list with Ajax? I send the parameter with the GET method, but the JSP file that generates the XML doesn't receive it:

        var url = "responsexml.jsp";
        url = url + "?projectCode=" + prj.options[prj.selectedIndex].value;
        xmlhttp.onreadystatechange = stateChanged;
        xmlhttp.open("GET", url, true);
        xmlhttp.send(null);

    And then in responsexml.jsp I do this:

        <%
        String projectcode = (String) request.getParameter("projectCode");
        System.out.println("++++projectCode:=" + projectcode);
        Session s = null;
        Transaction tx;
        try {
            s = HibernateUtil.currentSession();
            tx = s.beginTransaction();
            Query query = s.createQuery("SELECT from Wa wa where wa.ProjectCode='" + projectcode + "'");
            response.setContentType("text/xml");
            PrintWriter output = response.getWriter();
            output.write("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
            //response.setHeader("Cache-Control", "no-cache");
            // build the XML
            if (projectcode != null) {
                for (Iterator it = query.iterate(); it.hasNext();) {
                    if (it.hasNext()) {
                        Wa object = (Wa) it.next();
                        //out.print("<item id=\"" + object.getIdWA() + "\" name=\"" + object.getWAName() + "\" />");
                        output.write("<wa>");
                        output.write("<item id=\"" + object.getIdWA() + "\" name=\"" + object.getWAName() + "\" />");
                        output.write("</wa>");
                    }
                }
            }
        } catch (HibernateException e) {
            e.printStackTrace();
        }
        %>
        </body>
        </html>

    With this code I don't get my XML file. I get this error: "The server did not understand the request, or the request was invalid. Erreur de traitement de la ressource http://www.w3.o..." ("Error processing the resource"). Please help.

    Read the article

  • .Net Entity objectcontext thread error

    - by Chris Klepeis
    I have an n-layered ASP.NET application which returns an object from my DAL to the BAL like so:

        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            var query = from SourceKey in _dataContext.SourceKeys
                        select SourceKey;

            if (sk.sourceKey1 != null)
            {
                query = from SourceKey in query
                        where SourceKey.sourceKey1 == sk.sourceKey1
                        select SourceKey;
            }

            return query.AsEnumerable();
        }

    This result passes through my business layer and hits the UI layer to display to the end users. I do not lazy load, to prevent query execution in other layers of the application. I created another function in my DAL to delete objects:

        public void Delete(SourceKey sk)
        {
            try
            {
                _dataContext.DeleteObject(sk);
                _dataContext.SaveChanges();
            }
            catch (Exception ex)
            {
                Debug.WriteLine(ex.Message + " " + ex.StackTrace + " " + ex.InnerException);
            }
        }

    When I try to call "Delete" after calling the "Get" function, I receive this error: "New transaction is not allowed because there are other threads running in the session". This is an ASP.NET app. My DAL contains an Entity Data Model. The class in which I have the above functions shares the same _dataContext, which is instantiated in my constructor. My guess is that the reader is still open from the "Get" function and was not closed. How can I close it?
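    Worth noting that AsEnumerable() does not execute the query - enumeration is still deferred, so a data reader can remain open while Delete calls SaveChanges. A sketch of the materialize-early variant (an assumption about the intended fix, using System.Linq):

        // Sketch: force execution inside the DAL so no reader stays open.
        public IEnumerable<SourceKey> Get(SourceKey sk)
        {
            IQueryable<SourceKey> query = _dataContext.SourceKeys;

            if (sk.sourceKey1 != null)
            {
                query = query.Where(k => k.sourceKey1 == sk.sourceKey1);
            }

            return query.ToList(); // executes now; the reader is closed here
        }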

    Read the article

  • PayPal IPN "FAIL"

    - by Rob
    Hey all... I'm trying to figure out how to use PayPal's IPN and I've run into a wall. I want a buyer to be forwarded to a success page after making a purchase, and I want that page to show the details of their transaction. I chose IPN instead of PDT because I also want to do some other behind-the-scenes stuff with the data. Anyway, here's the code I'm using - I'm testing in sandbox mode - but it returns "FAIL" every time.

        $req = 'cmd=_notify-validate';
        foreach ($_POST as $key => $value) {
            $value = urlencode(stripslashes($value));
            $req .= "&$key=$value";
        }

        // post back to PayPal system to validate
        $header  = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";
        $fp = fsockopen('www.sandbox.paypal.com', 80, $errno, $errstr, 30);

        if (!$fp) {
            // HTTP ERROR
        } else {
            fputs($fp, $header . $req);
            while (!feof($fp)) {
                $res = fgets($fp, 1024);
                if (strcmp($res, "VERIFIED") == 0) {
                    // PAYMENT VALIDATED & VERIFIED!
                    echo "Validated!";
                } else if (strcmp($res, "INVALID") == 0) {
                    // PAYMENT INVALID & INVESTIGATE MANUALLY!
                    echo "Invalid!";
                }
            }
            fclose($fp);
        }
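    Two common stumbling points with this handshake, offered as assumptions to check rather than a diagnosis: the request carries no Host header, which PayPal's servers generally require, and fgets() can return the response line with trailing "\r\n", so strcmp() never matches. A sketch of both adjustments:

        // Sketch: add the Host header and compare trimmed lines.
        $header  = "POST /cgi-bin/webscr HTTP/1.0\r\n";
        $header .= "Host: www.sandbox.paypal.com\r\n";
        $header .= "Content-Type: application/x-www-form-urlencoded\r\n";
        $header .= "Content-Length: " . strlen($req) . "\r\n\r\n";

        fputs($fp, $header . $req);
        while (!feof($fp)) {
            $res = trim(fgets($fp, 1024)); // strip "\r\n" before comparing
            if ($res === "VERIFIED") {
                // handle the verified payment
            } elseif ($res === "INVALID") {
                // log for manual investigation
            }
        }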

    Read the article

  • SQL Server: preventing dirty reads in a stored procedure

    - by pcampbell
    Consider a SQL Server database and its two stored procs. 1. A proc that performs three important things in a transaction: create a customer, call a sproc to perform another insert, and conditionally insert a third record with the new identity:

        BEGIN TRAN
            INSERT INTO Customer(CustName) VALUES (@CustomerName)
            SELECT @NewID = SCOPE_IDENTITY()
            EXEC CreateNewCustomerAccount @NewID, @CustomerPhoneNumber
            IF @InvoiceTotal > 100000
                INSERT INTO PreferredCust(InvoiceTotal, CustID)
                VALUES (@InvoiceTotal, @NewID)
        COMMIT TRAN

    2. A stored proc which polls the Customer table for new entries that don't have a related PreferredCust entry. The client app performs the polling by calling this stored proc every 500 ms. A problem has arisen where the polling stored procedure found an entry in the Customer table and returned it as part of its results. The problem was that it picked up that record, I am assuming, as part of a dirty read. The record ended up having an entry in PreferredCust later, which created a problem downstream. Question: how can you explicitly prevent dirty reads by that second stored proc? The environment is SQL Server 2005 with the default out-of-the-box configuration. No other locking hints are given in either of these stored procedures.
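    A hedged sketch of how the polling proc could pin its isolation level explicitly. The proc's name and column names are assumptions; note that under the default READ COMMITTED level dirty reads should already be impossible, so stating it explicitly mainly guards against NOLOCK/READ UNCOMMITTED creeping in via hints or session settings:

        CREATE PROCEDURE dbo.PollNewCustomers
        AS
        BEGIN
            -- Be explicit: never read uncommitted rows, regardless of what
            -- the caller's session was set to.
            SET TRANSACTION ISOLATION LEVEL READ COMMITTED;

            SELECT c.CustID
            FROM Customer AS c WITH (READCOMMITTED)  -- table-level belt and braces
            WHERE NOT EXISTS (SELECT 1 FROM PreferredCust AS p
                              WHERE p.CustID = c.CustID);
        END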

    Read the article

  • How to get resultset with stored procedure calls over two linked servers?

    - by räph
    I have problems filling a temporary table with the result set from a procedure call on a linked server, in which a procedure on yet another server is called. I have a stored procedure sproc1 with the following code, which calls another procedure sproc2 on a linked server:

        SET @sqlCommand = 'INSERT INTO #tblTemp ( ModuleID, ParamID ) '
            + '( SELECT * FROM OPENQUERY(' + @targetServer + ', '
            + '''SET FMTONLY OFF; EXEC ' + @targetDB + '.usr.sproc2 ' + @param + ''' ) )'
        EXEC ( @sqlCommand )

    Now, in the called sproc2 I again call a third procedure sproc3 on another linked server, which returns my result set:

        SET @sqlCommand = 'EXEC ' + @targetServer + '.database.usr.sproc3 ' + @param
        EXEC ( @sqlCommand )

    The whole thing doesn't work, as I get SQL error 7391: "The operation could not be performed because OLE DB provider "%ls" for linked server "%ls" was unable to begin a distributed transaction." I have already checked the hints in this Microsoft article, but without success. But maybe I can change the code in sproc1. Would there be an alternative to the temp table and the OPENQUERY? Just calling stored procedures from server A to B to C and returning the result set works (I do this often in the application). But this special case with the temp table and OPENQUERY doesn't work! Or is what I am trying to do just not possible? The Microsoft article states: "Check the object you refer on the destination server. If it is a view or a stored procedure, or causes an execution of a trigger, check whether it implicitly references another server. If so, the third server is the source of the problem. Run the query directly on the third server. If you cannot run the query directly on the third server, the problem is not actually with the linked server query. Resolve the underlying problem first." Is this my case? PS: I can't avoid the architecture with the three servers.

    Read the article

  • Modelling deterministic and nondeterministic data separately

    - by Superstringcheese
    I'm working with the Microsoft ADO.NET Entity Framework for a game project. Following the advice of other posters on SO, I'm considering modelling deterministic and nondeterministic data separately. The idea for this came from a discussion on multiplayer games, but it seemed to make sense in a single-player scenario as well. Deterministic (things that aren't going to change during gameplay):

        - Attributes (Strength, Agility, etc.) and their descriptions
        - Skills and their descriptions and requirements
        - Races, Factions, Equipment, etc.
        - Base Attribute/Skill/Equipment loadouts for monsters

    Nondeterministic (things that will change a lot during gameplay):

        - Beings' current AttributeModifiers (Potion of Might = +10 Strength), current health and mana, etc.
        - Player inventory, cash, experience, level
        - Player quest states
        - Player FactionRelationships
        - ...and so on

    My deterministic model would serve as a set of constants. My nondeterministic model would provide my on-the-fly operable data and would be serialized to a savegame file to maintain game state between play sessions. The data store will be an embedded SQL Compact database. So I might want to create relations between my Attributes table (deterministic model) and my BeingAttributeModifiers table (nondeterministic model), but how do I set that up across models?

        Det model/db         Nondet model/db
         ____________         ________________________
        |Attributes  |       |PlayerAttributeModifiers|
        |------------|       |------------------------|
        |Id          |       |Id                      |
        |Name        |       |AttributeId             |
        |Description |       |SourceId                |
         ------------        |Value                   |
                              ------------------------

    Should I use two separate models (edmx) that transact with a single database containing both deterministic-type and nondeterministic-type tables? Or should/can I use two separate databases in one model? Or two models, each with its own database? With distinct models/dbs it seems like this will get really complicated, and I'll end up fighting EF a lot, rolling my own transaction code, and generally losing out on a lot of the advantages of the framework. I know these are vague questions; I'm just looking for a sanity check before I forge ahead any further.

    Read the article

  • mysql and trigger usage question

    - by dhruvbird
    I have a situation in which I don't want inserts to take place (the transaction should roll back) if a certain condition is met. I could write this logic in the application code, but say that for some reason it has to be written in MySQL itself (say clients written in different languages will be inserting into this MySQL InnoDB table) [that's a separate discussion]. Table definition:

        CREATE TABLE table1 (x INT NOT NULL);

    The trigger looks something like this:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                SET NEW.x = NULL;
            END IF;
        END;

    I am guessing it could also be written as (untested):

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            IF (condition) THEN
                ROLLBACK;
            END IF;
        END;

    But this doesn't work:

        CREATE TRIGGER t1 BEFORE INSERT ON table1
        ROLLBACK;

    You are guaranteed that: your DB will always be MySQL; the table type will always be InnoDB; and that NOT NULL column will always stay the way it is. Question: do you see anything objectionable in the first method?
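    For context, MySQL triggers cannot issue ROLLBACK (statements that commit or roll back the transaction are disallowed inside triggers), which is why the NOT NULL trick works: it forces the INSERT to fail with an error instead. On MySQL 5.5+ the same intent can be written explicitly with SIGNAL - a sketch, with the condition left as a hypothetical placeholder:

        DELIMITER //
        CREATE TRIGGER t1 BEFORE INSERT ON table1
        FOR EACH ROW
        BEGIN
            -- Raising a generic SQLSTATE '45000' error aborts the INSERT and,
            -- inside a transaction, lets the client roll back cleanly.
            IF (NEW.x < 0) THEN  -- hypothetical condition
                SIGNAL SQLSTATE '45000'
                    SET MESSAGE_TEXT = 'insert rejected by trigger t1';
            END IF;
        END//
        DELIMITER ;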

    Read the article
