Search Results

Search found 43474 results on 1739 pages for 'configuration database'.

Page 1603/1739

  • Can I perform some processing on the POST data before ASP.NET MVC UpdateModel happens?

    - by Domenic
    I would like to strip out non-numeric elements from the POST data before using UpdateModel to update the copy in the database. Is there a way to do this?

        // TODO: it appears I don't even use the parameter given at all, and all the magic
        // happens via UpdateModel and the "controller's current value provider"?
        [HttpPost]
        public ActionResult Index([Bind(Include = "X1, X2")] Team model) // TODO: stupid magic strings
        {
            if (this.ModelState.IsValid)
            {
                TeamContainer context = new TeamContainer();
                Team thisTeam = context.Teams.Single(t => t.TeamId == this.CurrentTeamId);

                // TODO HERE: apply StripWhitespace() to the data before using UpdateModel.
                // The data is currently somewhere in the "current value provider"?
                this.UpdateModel(thisTeam);
                context.SaveChanges();
                this.RedirectToAction(c => c.Index());
            }
            else
            {
                this.ModelState.AddModelError("", "Please enter two valid Xs.");
            }

            // If we got this far, something failed; redisplay the form.
            return this.View(model);
        }

    Sorry for the terseness -- I've been up all night working on this; hopefully my question is clear enough. Also sorry that this is the kind of newbie question I might be able to answer with a few hours of documentation-trawling, but I'm time-pressured.

  • modifying ajax returned data before showing

    - by Nick
    I'm processing a subscription form with jQuery/AJAX and need to display the results in the success function (they are different depending on whether the email already exists in the database). But the trick is that I do not need the h2 and the first p tag. How can I show only div#alertmsg and the second p tag? I've tried removing the unnecessary elements with the method described here, but it didn't work. Thanks in advance. Here is my code:

        var dataString = $('#subscribe').serialize();
        $.ajax({
            type: "POST",
            url: "/templates/willrock/pommo/user/process.php",
            data: dataString,
            success: function(data) {
                var result = $(data);
                $('#success').append(result);
            }
        });

    Here is the data returned:

        <h2>Subscription Review</h2>
        <p><a href="url" onClick="history.back(); return false;"><img src="/templates/willrock/pommo/themes/shared/images/icons/back.png" alt="back icon" class="navimage" /> Back to Subscription Form</a></p>
        <div id="alertmsg" class="error">
          <ul>
            <li>Email address already exists. Duplicates are not allowed.</li>
          </ul>
        </div>
        <p><a href="login.php">Update your records</a></p>

  • Multiple WCF calls for a single ASP.NET page load

    - by Rodney Burton
    I have an existing ASP.NET web application I am redesigning to use a service architecture. I have the beginnings of a WCF service which I am able to call and perform functions with, no problems. As far as updating data goes, it all makes sense: for example, I have a button that says Submit Order; it sends the data to the service, which does the processing.

    Here's my concern: I have an ASP.NET page that shows me a list of orders (a View Orders page), and at the top I have a bunch of drop-down lists for order types and other search criteria, which are populated by querying different tables from the database (lookup tables, etc.). I am hoping to eventually completely decouple the web application from the DB and use data contracts to pass information between the BLL, the SOA, and the web app. With that said, how can I reduce the number of WCF calls needed to load my View Orders page? I would need to make one call to get the list of orders, and one call for each drop-down list, etc., because those are populated by individual functions in my BLL.

    Is it good architecture to create a web service method that returns a specialized data contract consisting of everything you would need to display a View Orders page, in one shot? Something like this pseudocode:

        public class ViewOrderPageDTO
        {
            public OrderDTO[] Orders { get; set; }
            public OrderTypesDTO[] OrderTypes { get; set; }
            public OrderStatusesDTO[] OrderStatuses { get; set; }
            public CustomerListDTO[] CustomerList { get; set; }
        }

    Or is it better practice, in the page_load event, to make 5, 6, or even 15 individual calls to the SOA to get the data needed to load the page, thereby bypassing the need for specialized WCF methods or DTOs that conglomerate other DTOs? Thanks for your input and suggestions.

  • MySQL BinLog Statement Retrieval

    - by Jonathon
    I have seven 1 GB MySQL binlog files that I have to use to retrieve some "lost" information. I only need to get certain INSERT statements from the log (e.g., where the statement starts with "INSERT INTO table SET field1="). If I just run mysqlbinlog (even per database and using --short-form), I get a text file that is several hundred megabytes, which makes it almost impossible to then parse with any other program.

    Is there a way to retrieve only certain SQL statements from the log? I don't need any of the ancillary information (timestamps, auto-increment values, etc.). I just need a list of SQL statements that match a certain string. Ideally, I would like a text file that just lists those SQL statements, such as:

        INSERT INTO table SET field1='a';
        INSERT INTO table SET field1='tommy';
        INSERT INTO table SET field1='2';

    I could get that by running mysqlbinlog to a text file and then parsing the results based upon a string, but the text file is way too big. It just times out any script I run and even makes it impossible to open in a text editor. Thanks for your help in advance.
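
    One way to avoid the giant intermediate file is to stream the mysqlbinlog output through a filter instead of writing it to disk first. A minimal sketch in Python (the binlog filename and match pattern here are placeholders):

        import re
        import subprocess

        # Keep only lines that look like the INSERT statements we care about.
        pattern = re.compile(r"^INSERT INTO table SET field1=")

        # Decode the binlog and read it line by line, never holding it all in memory.
        proc = subprocess.Popen(
            ["mysqlbinlog", "--short-form", "mysql-bin.000001"],
            stdout=subprocess.PIPE, text=True)

        with open("filtered_inserts.sql", "w") as out:
            for line in proc.stdout:
                if pattern.match(line):
                    out.write(line)
        proc.wait()

    The same idea works as a shell one-liner (mysqlbinlog --short-form mysql-bin.000001 | grep "^INSERT INTO table SET field1=" > filtered_inserts.sql); the point is to filter the stream rather than materialize it.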

  • Lucene and Special Characters

    - by Brandon
    I am using Lucene.Net 2.0 to index some fields from a database table. One of the fields is a 'Name' field which allows special characters. When I perform a search, it does not find my document that contains a term with special characters.

    I index my field as such:

        Directory DALDirectory = FSDirectory.GetDirectory(@"C:\Indexes\Name", false);
        Analyzer analyzer = new StandardAnalyzer();
        IndexWriter indexWriter = new IndexWriter(DALDirectory, analyzer, true, IndexWriter.MaxFieldLength.UNLIMITED);

        Document doc = new Document();
        doc.Add(new Field("Name", "Test (Test)", Field.Store.YES, Field.Index.TOKENIZED));
        indexWriter.AddDocument(doc);

        indexWriter.Optimize();
        indexWriter.Close();

    And I search by doing the following:

        value = value.Trim().ToLower();
        value = QueryParser.Escape(value);

        Query searchQuery = new TermQuery(new Term(field, value));
        Searcher searcher = new IndexSearcher(DALDirectory);

        TopDocCollector collector = new TopDocCollector(searcher.MaxDoc());
        searcher.Search(searchQuery, collector);
        ScoreDoc[] hits = collector.TopDocs().scoreDocs;

    If I perform a search for field as 'Name' and value as 'Test', it finds the document. If I perform the same search with 'Name' and value as 'Test (Test)', it does not find the document. Even more strangely, if I remove the QueryParser.Escape line and do a search for a GUID (which, of course, contains hyphens), it finds documents where the GUID value matches, but performing the same search with the value as 'Test (Test)' still yields no results.

    I am unsure what I am doing wrong. I am using the QueryParser.Escape method to escape the special characters, and I am storing the field and searching following Lucene.Net's examples. Any thoughts?

  • How can I add a "Loading...Please wait..." feature?

    - by Rob
    I have a very simple table on my website that displays different URLs. I have an input field where I can type in a URL and click 'Submit' to add additional URLs. However, I want to add an MD5-checking feature to this, using @md5_file(); to grab the MD5 of the URL and make sure it's the MD5 it should be before adding it to the database.

    It may take a few seconds to grab the MD5 and compare it, so I would like to show a little bit of text, like "Processing... Please wait...", while it does the comparing, and then once it's compared I want that text to go away. I've never done this before, or even thought about doing it, so I have no idea where to start. I'll go ahead and put javascript as a tag for this, since I'm guessing it would be done with JavaScript, but I really have no idea. I don't think it's possible with PHP, but again, I have no idea. Any suggestions?

  • T-SQL - Left Outer Joins - Filters in the where clause versus the on clause

    - by Greg Potter
    I am trying to compare two tables to find rows in each table that are not in the other. Table 1 has a groupby column to create two sets of data within table one:

        groupby     number
        ----------- -----------
        1           1
        1           2
        2           1
        2           2
        2           4

    Table 2 has only one column:

        number
        -----------
        1
        3
        4

    So Table 1 has the values 1, 2, 4 in group 2, and Table 2 has the values 1, 3, 4. I expect the following results when joining for group 2:

        Table 1 LEFT OUTER Join Table 2

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        2           2           NULL

        Table 2 LEFT OUTER Join Table 1

        T1_Groupby  T1_Number   T2_Number
        ----------- ----------- -----------
        NULL        NULL        3

    The only way I can get this to work is if I put a WHERE clause for the first join:

        PRINT 'Table 1 LEFT OUTER Join Table 2, with WHERE clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table1
        LEFT OUTER join table2
        --******************************
        on table1.number = table2.number
        --******************************
        WHERE table1.groupby = 2
        AND table2.number IS NULL

    and a filter in the ON clause for the second:

        PRINT 'Table 2 LEFT OUTER Join Table 1, with ON clause'
        select table1.groupby as [T1_Groupby],
               table1.number as [T1_Number],
               table2.number as [T2_Number]
        from table2
        LEFT OUTER join table1
        --******************************
        on table2.number = table1.number AND table1.groupby = 2
        --******************************
        WHERE table1.number IS NULL

    Can anyone come up with a way of putting the filter in the WHERE clause rather than the ON clause in both cases?

    The context of this is that I have a staging area in a database and I want to identify new records and records that have been deleted. The groupby field is the equivalent of a batch id for an extract, and I am comparing the latest extract in a temp table to the batch from yesterday stored in a partitioned table, which also holds all the previously extracted batches.

    Code to create tables 1 and 2:

        create table table1 (number int, groupby int)
        create table table2 (number int)

        insert into table1 (number, groupby) values (1, 1)
        insert into table1 (number, groupby) values (2, 1)
        insert into table1 (number, groupby) values (1, 2)
        insert into table2 (number) values (1)
        insert into table1 (number, groupby) values (2, 2)
        insert into table2 (number) values (3)
        insert into table1 (number, groupby) values (4, 2)
        insert into table2 (number) values (4)

  • SSIS: "Failure inserting into the read-only column <ColumnName>"

    - by Cory
    I have an Excel source going into an OLE DB destination. I'm inserting data into a view that has an INSTEAD OF trigger that handles all inserts. When I try to execute the package I receive this error:

        Failure inserting into the read-only column <ColumnName>

    What can I do to let SSIS know that this view is safe to insert into because there is an INSTEAD OF trigger that will handle the insert?

    EDIT (additional info): I have a flat file that is being inserted into a normalized database. My initial problem was how to take a flat file and insert that data into multiple tables while keeping track of all the primary/foreign key relationships. My solution was to create a VIEW that mimicked the structure of the flat file and then create an INSTEAD OF trigger on that view. In the INSTEAD OF trigger I handle the logic of maintaining all the relationships between tables.

    My view looks something like this:

        CREATE VIEW ImportView
        AS
        SELECT CONVERT(varchar(100), NULL) AS CustomerName,
               CONVERT(varchar(100), NULL) AS Address1,
               CONVERT(varchar(100), NULL) AS Address2,
               CONVERT(varchar(100), NULL) AS City,
               CONVERT(char(2), NULL) AS State,
               CONVERT(varchar(250), NULL) AS ItemOrdered,
               CONVERT(int, NULL) AS QuantityOrdered
        ...

    I will never need to select from this view; I only use it to insert data from the flat file I receive. I need some way to tell SQL Server that the fields aren't really read-only, because there is an INSTEAD OF trigger on this view.

  • How does this code break the Law of Demeter?

    - by Dave Jarvis
    The following code breaks the Law of Demeter:

        public class Student extends Person {
            private Grades grades;

            public Student() {
            }

            /** Must never return null; throw an appropriately named exception, instead. */
            private synchronized Grades getGrades() throws GradesException {
                if (this.grades == null) {
                    this.grades = createGrades();
                }
                return this.grades;
            }

            /** Create a new instance of grades for this student. */
            protected Grades createGrades() throws GradesException {
                // Reads the grades from the database, if needed.
                //
                return new Grades();
            }

            /** Answers if this student was graded by a teacher with the given name. */
            public boolean isTeacher(int year, String name) throws GradesException, TeacherException {
                // The method only knows about Teacher instances.
                //
                return getTeacher(year).nameEquals(name);
            }

            private Grades getGradesForYear(int year) throws GradesException {
                // The method only knows about Grades instances.
                //
                return getGrades().getForYear(year);
            }

            private Teacher getTeacher(int year) throws GradesException, TeacherException {
                // This method knows about Grades and Teacher instances. A mistake?
                //
                return getGradesForYear(year).getTeacher();
            }
        }

        public class Teacher extends Person {
            public Teacher() {
            }

            /**
             * This method will take into consideration first name,
             * last name, middle initial, case sensitivity, and
             * eventually it could answer true to wild cards and
             * regular expressions.
             */
            public boolean nameEquals(String name) {
                return getName().equalsIgnoreCase(name);
            }

            /** Never returns null. */
            private synchronized String getName() {
                if (this.name == null) {
                    this.name = "";
                }
                return this.name;
            }
        }

    Questions:

    1. How is the LoD broken?
    2. Where is the code breaking the LoD?
    3. How should the code be written to uphold the LoD?
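
    For contrast, a rough sketch (in Python, for brevity) of the usual LoD-friendly shape: each class talks only to its direct collaborator, so Student never handles a Teacher dug out of Grades. The class and method names below are illustrative, not from the original code.

        class Teacher:
            def __init__(self, name: str):
                self._name = name

            def name_equals(self, name: str) -> bool:
                return self._name.lower() == name.lower()

        class Grades:
            def __init__(self, teacher: Teacher):
                self._teacher = teacher

            def graded_by(self, name: str) -> bool:
                # Grades asks its own Teacher, instead of handing the Teacher out.
                return self._teacher.name_equals(name)

        class Student:
            def __init__(self, grades_by_year: dict):
                self._grades_by_year = grades_by_year

            def is_teacher(self, year: int, name: str) -> bool:
                # Student talks only to Grades, its immediate neighbour.
                return self._grades_by_year[year].graded_by(name)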

  • Shouldn't prepared statements be much faster?

    - by silversky
    $s = explode (" ", microtime()); $s = $s[0]+$s[1]; $con = mysqli_connect ('localhost', 'test', 'pass', 'db') or die('Err'); for ($i=0; $i<1000; $i++) { $stmt = $con -> prepare( " SELECT MAX(id) AS max_id , MIN(id) AS min_id FROM tb "); $stmt -> execute(); $stmt->bind_result($M,$m); $stmt->free_result(); $rand = mt_rand( $m , $M ).'<br/>'; $res = $con -> prepare( " SELECT * FROM tb WHERE id >= ? LIMIT 0,1 "); $res -> bind_param("s", $rand); $res -> execute(); $res->free_result(); } $e = explode (" ", microtime()); $e = $e[0]+$e[1]; echo number_format($e-$s, 4, '.', ''); // and: $link = mysql_connect ("localhost", "test", "pass") or die (); mysql_select_db ("db") or die ("Unable to select database".mysql_error()); for ($i=0; $i<1000; $i++) { $range_result = mysql_query( " SELECT MAX(`id`) AS max_id , MIN(`id`) AS min_id FROM tb "); $range_row = mysql_fetch_object( $range_result ); $random = mt_rand( $range_row->min_id , $range_row->max_id ); $result = mysql_query( " SELECT * FROM tb WHERE id >= $random LIMIT 0,1 "); } defenitly prepared statements are much more safer but also every where it says that they are much faster BUT in my test on the above code I have: - 2.45 sec for prepared statements - 5.05 sec for the secon example What do you think I'm doing wrong? Should I use the second solution or I should try to optimize the prep stmt?

  • Application log aggregation, management and notifications...

    - by Matthew Savage
    I'm wondering what everyone is using for logging, log management, and log aggregation on their systems. I am working in a company which uses .NET for all its applications, and all systems are Windows-based. Currently each application looks after its own logging and notification of failures (e.g. if app A fails, it will send out its own 'call for help' to an admin). While this current practice works, it's a bit hacky and hard to manage. I've been trying to find some options for making this work better, and I've come up with the following:

    1. log4net & Chainsaw (ah, if it works).
    2. Logging via log4net or another framework into a central database, and rolling our own management tool.
    3. Logging to the Windows event log and using MOM or System Center Operations Manager to aggregate and manage each of these servers and their apps.
    4. A hand-rolled solution to suck all the log files into one point and work some magic across them.

    Essentially what we are after is something which can pull log entries together and allow some analytics to be run across them, plus use a kind of event-based system to, for example, send out a warning email when there have been 30+ warning-level logs for an application in the last x minutes. So is there anything I've missed, or something someone else can suggest?

  • Most efficient way to store list structure in XML

    - by Mike
    Starting a new project, I was planning on storing all of my web content in XML. I do not have access to a database, so this seemed like the next best thing. One thing I'm struggling with is how to structure the XML for links (which will later be transformed using XSLT). It needs to be fairly flexible as well. Below is what I started with, but I'm starting to question it:

        <links>
          <link>
            <url>http://google.com</url>
            <description>Google</description>
          </link>
          <link>
            <url>http://yahoo.com</url>
            <description>Yahoo</description>
            <link>
              <url>http://yahoo.com/search</url>
              <description>Search</description>
            </link>
          </link>
        </links>

    That should get transformed into a nested list of links: Google and Yahoo at the top level, with Search nested under Yahoo. Perhaps something like this might work better:

        <links>
          <link href="http://google.com">Google</link>
          <link href="http://yahoo.com">Yahoo
            <link href="http://yahoo.com/search">Search</link>
          </link>
        </links>

    Does anyone perhaps have a link that talks about structuring web content properly in XML? Thank you. :)
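
    To sanity-check the attribute-style variant, here is a small sketch in Python (not from the original post) that builds that structure and walks it into the nested ul/li markup an XSLT would produce:

        import xml.etree.ElementTree as ET

        # Build the attribute-style structure: nesting <link> elements keeps the
        # hierarchy in the XML itself rather than in separate url/description pairs.
        links = ET.Element("links")
        google = ET.SubElement(links, "link", href="http://google.com")
        google.text = "Google"
        yahoo = ET.SubElement(links, "link", href="http://yahoo.com")
        yahoo.text = "Yahoo"
        search = ET.SubElement(yahoo, "link", href="http://yahoo.com/search")
        search.text = "Search"

        def to_li(el):
            # Render one link, recursing into any nested links.
            children = "".join(to_li(child) for child in el if child.tag == "link")
            nested = f"<ul>{children}</ul>" if children else ""
            return f'<li><a href="{el.get("href")}">{el.text}</a>{nested}</li>'

        print("<ul>" + "".join(to_li(link) for link in links) + "</ul>")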

  • Update a PDF to include an encrypted, hidden, unique identifier?

    - by Dave Jarvis
    Background

    The idea is this:

    1. A person provides contact information for an online book purchase.
    2. The book, as a PDF, is marked with a unique hash.
    3. The person downloads the book.

    PDF passwords are annoying and extremely easy to circumvent. The ideal process would be something like:

    1. Generate a hash based on the contact information.
    2. Store the contact information and hash in a database.
    3. Acquire the book lock.
    4. Update an "include" file with the hash text.
    5. Generate the book as a PDF (using pdflatex).
    6. Apply the hash to the book.
    7. Release the book lock.
    8. Send an email with the book download link.

    Technologies

    The following technologies can be used (other programming languages are possible, but libraries will likely be limited to those supplied by the host):

    - C, Java, PHP
    - LaTeX files
    - PDF files
    - Linux

    Question

    What programming techniques (or open source software) should I investigate to:

    - Embed a unique hash (or other mark) in a PDF?
    - Create a collusion-attack-resistant mark?
    - Develop a non-fragile solution (e.g., PDF -> EPS -> PDF still contains the mark)?

    Research

    I have looked at the following possibilities:

    - Steganography
    - Natural Language Processing (NLP)
    - Converting blank pages in the PDF to images, marking those images, and reassembling the PDF
    - The LaTeX watermark package
    - ImageMagick

    Steganography requires keeping a master copy of the images, and I'm not sure if the watermark would survive PDF -> EPS -> PDF, or other types of conversion. LaTeX creates an image cache, so any steganographic process would have to intercept that process somehow. NLP introduces grammatical errors. Inserting blank pages as images is immediately suspect; it is easy to replace suspicious blank pages. The LaTeX watermark package draws visible marks. ImageMagick draws visible marks.

    What other solutions are possible?

    Related links:

    - http://www.tcpdf.org/
    - invisible watermarks in images

    Thank you!
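
    For step 1, the hash itself is straightforward; one detail worth considering is keying it, so a buyer cannot recompute or forge another buyer's mark. A minimal sketch in Python (the field names and secret are placeholders; the post's host languages, C, Java, and PHP, all have HMAC-SHA256 libraries):

        import hashlib
        import hmac

        # Server-side secret; with a plain unkeyed hash, anyone who knows the
        # scheme could recompute the mark for someone else's contact details.
        SECRET_KEY = b"replace-with-a-real-secret"

        def purchase_mark(name: str, email: str, order_id: str) -> str:
            message = f"{name}|{email}|{order_id}".encode("utf-8")
            return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

        # The hex digest is what would be stored in the database and written
        # into the LaTeX "include" file before running pdflatex.
        print(purchase_mark("Jane Doe", "jane@example.com", "2010-0001"))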

  • In PHP... best way to turn string representation of a folder structure into nested array

    - by Greg Frommer
    Hi everyone. I looked through the related questions for a similar question, but I wasn't seeing quite what I need; pardon me if this has already been answered.

    In my database I have a list of records that I want represented to the user as files inside of a folder structure. For each record I have a VARCHAR column called "FolderStructure" that identifies that record's place in the folder structure. The series of those flat FolderStructure string columns will create my tree structure, with the folders separated by slashes (naturally). I didn't want to add another table just to represent a folder structure. The file name is stored in a separate column, so that if the FolderStructure column is empty, the file is assumed to be at the root folder.

    What is the best way to turn a collection of these records into a series of HTML UL/LI tags, where each LI represents a file and each folder structure is a UL embedded inside its parent? So for example:

        file - folderStructure
        foo  -
        bar  - firstDir
        blue - firstDir/subdir

    would produce the following HTML:

        <ul>
          <li>foo</li>
          <ul>
            <li>bar</li>
            <ul>
              <li>blue</li>
            </ul>
          </ul>
        </ul>

    Thanks
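
    The usual shape of the answer is two passes: fold the flat paths into a nested structure, then walk it recursively to emit the markup. A sketch in Python rather than PHP (the record values are taken from the example above):

        # (file, folderStructure) pairs as described in the post.
        records = [("foo", ""), ("bar", "firstDir"), ("blue", "firstDir/subdir")]

        def empty_node():
            return {"files": [], "dirs": {}}

        # Pass 1: fold each path into a nested dict, one level per path segment.
        tree = empty_node()
        for name, folder in records:
            node = tree
            for part in filter(None, folder.split("/")):
                node = node["dirs"].setdefault(part, empty_node())
            node["files"].append(name)

        # Pass 2: walk the tree, emitting an <li> per file and a nested <ul> per folder.
        def render(node):
            items = "".join(f"<li>{name}</li>" for name in node["files"])
            subs = "".join(render(child) for child in node["dirs"].values())
            return f"<ul>{items}{subs}</ul>"

        print(render(tree))

    In PHP the same fold works with references into a nested array (&$node) and a recursive render function.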

  • Can per-user randomized salts be replaced with iterative hashing?

    - by Chas Emerick
    In the process of building what I'd like to hope is a properly architected authentication mechanism, I've come across a lot of materials that specify that:

    - user passwords must be salted;
    - the salt used should be sufficiently random and generated per-user;
    - therefore, the salt must be stored with the user record in order to support verification of the user password.

    I wholeheartedly agree with the first and second points, but it seems like there's an easy workaround for the latter. Instead of doing the equivalent of (pseudocode here):

        salt = random();
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword, salt);

    why not use the hash of the username as the salt? This yields a domain of salts that is well-distributed, (roughly) random, and each individual salt is as complex as your salt function provides for. Even better, you don't have to store the salt in the database -- just regenerate it at authentication time. More pseudocode:

        salt = hash(username);
        hashedPassword = hash(salt . password);
        storeUserRecord(username, hashedPassword);

    (Of course, hash in the examples above should be something reasonable, like SHA-512 or some other strong hash.) This seems reasonable to me given what (little) I know of crypto, but the fact that it's a simplification over widely recommended practice makes me wonder whether there's some obvious reason I've gone astray that I'm not aware of.
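
    The two schemes side by side, as a concrete sketch in Python (hashlib stands in for the pseudocode's hash function; this only restates the post's proposal, it doesn't vouch for it):

        import hashlib
        import os

        def store_standard(username: str, password: str, db: dict) -> None:
            # Widely recommended practice: random per-user salt, stored alongside the hash.
            salt = os.urandom(16)
            hashed = hashlib.sha512(salt + password.encode()).hexdigest()
            db[username] = {"hash": hashed, "salt": salt}

        def store_proposed(username: str, password: str, db: dict) -> None:
            # The post's variant: the salt is recomputed from the username,
            # so nothing but the password hash needs to be stored.
            salt = hashlib.sha512(username.encode()).digest()
            hashed = hashlib.sha512(salt + password.encode()).hexdigest()
            db[username] = {"hash": hashed}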

  • getting proxies of the correct type in nhibernate

    - by Nir
    I have a problem with uninitialized proxies in NHibernate.

    The domain model: let's say I have two parallel class hierarchies, Animal, Dog, Cat and AnimalOwner, DogOwner, CatOwner, where Dog and Cat both inherit from Animal, and DogOwner and CatOwner both inherit from AnimalOwner. AnimalOwner has a reference of type Animal called OwnedAnimal. Here are the classes in the example:

        public abstract class Animal
        {
            // some properties
        }

        public class Dog : Animal
        {
            // some more properties
        }

        public class Cat : Animal
        {
            // some more properties
        }

        public class AnimalOwner
        {
            public virtual Animal OwnedAnimal { get; set; }
            // more properties...
        }

        public class DogOwner : AnimalOwner
        {
            // even more properties
        }

        public class CatOwner : AnimalOwner
        {
            // even more properties
        }

    The classes have proper NHibernate mappings; all properties are persistent, and everything that can be lazy loaded is lazy loaded. The application business logic only lets you set a Dog in a DogOwner and a Cat in a CatOwner.

    The problem: I have code like this:

        public void ProcessDogOwner(DogOwner owner)
        {
            Dog dog = (Dog)owner.OwnedAnimal;
            ....
        }

    This method can be called by many different methods. In most cases the dog is already in memory and everything is OK, but rarely the dog isn't already in memory -- in this case I get an NHibernate "uninitialized proxy", and the cast throws an exception because NHibernate generates a proxy for Animal and not for Dog. I understand that this is how NHibernate works, but I need to know the type without loading the object -- or, more correctly, I need the uninitialized proxy to be a proxy of Cat or Dog and not a proxy of Animal.

    Constraints:

    - I can't change the domain model; the model is handed to me by another department. I tried to get them to change the model and failed.
    - The actual model is much more complicated than the example, and the classes have many references between them, so using eager loading or adding joins to the queries is out of the question for performance reasons.
    - I have full control of the source code, the hbm mapping, and the database schema, and I can change them any way I want (as long as I don't change the relationships between the model classes).
    - I have many methods like the one in the example, and I don't want to modify all of them.

    Thanks,
    Nir

  • Architectural conundrum

    - by Dejan
    The worst thing when working on a one-man project is the lack of input that you usually get from your coworkers, and because of that lack you tend to make obvious mistakes. After going down that road for some time, I need some help from the community.

    I started a little home-brew project that should turn into a portal of some sort. The main thing that is bothering me is the persistence layer I have concocted. It should be completely separated from the presentation layer for starters, and an OR mapper is also in there somewhere. This is because I have multiple data stores that have to be used. So the base idea was that the individual "repositories" each operate on their individual database, and that the business layer then aggregates the business objects, which are then transformed in the presentation layer into view objects.

    The main problem I face is the following: multiple classes for the same concept -- there is a DAL representation of a user, a BL representation of a user, and a view representation of a user. I can handle the transformation with a tool, but is this really the right way? I mean, they are all nicely separated, but the overhead is quite something. What do you think? Am I going too deep into the separation-of-concerns rabbit hole, or is this still normal?
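
    For what "multiple classes for the same concept" looks like in practice, a small illustrative sketch (Python dataclasses; the field names are invented for the example):

        from dataclasses import dataclass

        @dataclass
        class UserRecord:          # DAL: mirrors the table row
            user_id: int
            display_name: str

        @dataclass
        class User:                # BL: the business object
            user_id: int
            name: str

        @dataclass
        class UserView:            # presentation: only what the page needs
            name: str

        # The mapping boilerplate the post is weighing against the clean separation.
        def to_business(rec: UserRecord) -> User:
            return User(user_id=rec.user_id, name=rec.display_name)

        def to_view(user: User) -> UserView:
            return UserView(name=user.name)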

  • Problem installing mysql gem on Snow Leopard: uninitialized constant MysqlCompat::MysqlRes

    - by emson
    Hi all. I've got a problem trying to install the Ruby mysql gem driver. I recently upgraded to Snow Leopard and did the Hivelogic manual install of MySQL. This all seems to work fine, as I can access MySQL from the command line and make changes to the database. My problem is that if I now run rake db:migrate I get:

        rake aborted!
        uninitialized constant MysqlCompat::MysqlRes

        (See full trace by running task with --trace)

    It appears that my mysql gem isn't working correctly, as I can access MySQL fine from Python using the Python driver (which I also compiled). I therefore tried to rebuild the gem using the following command from this site: http://techliberty.blogspot.com/ (incidentally, I am using a recent Intel MacBook Pro):

        sudo env ARCHFLAGS="-arch x86_64" gem install mysql -- --with-mysql-config=/usr/local/mysql/bin/mysql_config

    This compiles, although I get "No definition for ..." messages for the documentation:

        Building native extensions. This could take a while...
        Successfully installed mysql-2.8.1
        1 gem installed
        Installing ri documentation for mysql-2.8.1...
        No definition for next_result
        No definition for field_name
        ...

    I'm a little stumped, as my mysql_config is located in the correct place (/usr/local/mysql/bin/mysql_config), and I have removed all other instances of the mysql gem from my system. Any suggestions would be greatly appreciated. Many thanks.

    PS: I saw this previous post (http://stackoverflow.com/questions/1332207/uninitialized-constant-mysqlcompatmysqlres-using-mms2r-gem), but it doesn't seem applicable to my version.

  • Best pattern to load enumerated values from DAL using WCF RIA Services

    - by Dale Halliwell
    I would like to be able to load several RIA entity sets in a single call, without chaining/nesting several small LoadOperations together so that they load sequentially.

    I have several pages that have a number of comboboxes on them. These comboboxes are populated with static values from a database (for example, status values). Right now I preload these values in my VM with one method that strings together a series of LoadOperations for each type that I want to load. For example:

        public void LoadEnums()
        {
            context.Load(context.GetMyStatusValues1Query()).Completed += (s, e) =>
            {
                this.StatusValues1 = context.StatusValues1;
                context.Load(context.GetMyStatusValues2()).Completed += (s1, e1) =>
                {
                    this.StatusValues2 = context.StatusValues2;
                    context.Load(context.GetMyStatusValues3Query()).Completed += (s2, e2) =>
                    {
                        this.StatusValues3 = context.StatusValues3;
                        // ... and so on
                    };
                };
            };
        }

    While this works fine, it seems a bit nasty. Also, I would like to know when the last LoadOperation completes, so that I can then load whatever entity I want to work on, and so that these enumerated values resolve properly in form elements like comboboxes and listboxes. (I think) I can't do this easily above without creating a delegate and calling it on the completion of the last LoadOperation.

    So my question is: does anyone out there know a better pattern to use, ideally where I can load all my static entity sets in a single LoadOperation?

  • Which SCM/VCS cope well with moving text between files?

    - by pfctdayelise
    We are having problems with our project at work because our VCS does some awful merging when we move information across files. The scenario is this: you have lots of files that, say, contain information about terms from a dictionary, so you have a file for each letter of the alphabet. Users entering terms blindly follow the dictionary order, so they will put an entry like "kick the bucket" under B if that is where the dictionary happened to list it (or it might have been listed under both B, bucket, and K, kick). Later, other users move the terms to their correct files. Lots of work is being done on the dictionary terms all the time.

    For example, user A may have taken the B file and elaborated on the "kick the bucket" entry, while user B took the B and K files and moved the "kick the bucket" entry to the K file. Whichever order they end up getting committed in, the VCS will probably lose entries and not "figure out" that an entry has been moved.

    (These entries are later automatically converted to an SQL database. But they are kept in a "human friendly" form for working on them, with lots of comments, examples, etc. So it is not acceptable to say "make your users enter SQL directly".)

    It is so bad that we have taken to almost manually merging these kinds of files now, because we can't trust our VCS. :( So what is the solution? I would love to hear that there is a VCS that can cope with this. Or a better merge algorithm? Or, failing that, maybe someone can suggest a better workflow or file arrangement to try and avoid this problem?

  • SqlCe odd results why? -- Same SQL, different results in different apps

    - by NitroxDM
    When I run this SQL in my mobile app I get zero rows:

        select * from inventory WHERE [ITEMNUM] LIKE 'PUMP%' AND [LOCATION] = 'GARAGE'

    When I run the same SQL in Query Analyzer 3.5, using the same database, I get my expected one row. Why the difference? Here is the code I'm using in the mobile app:

        SqlCeCommand cmd = new SqlCeCommand(Query);
        cmd.Connection = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        cmd.Connection.Open();
        SqlCeDataReader tmpRdr = cmd.ExecuteReader();
        if (tmpRdr.Read())
            tmpTable.Load(tmpRdr);
        tmpRdr.Close();
        cmd.Connection.Close();
        return tmpTable;

    UPDATE: For the sake of trying, I used the code found in one of the answers here, and it works as expected. So my code now looks like this:

        SqlCeConnection conn = new SqlCeConnection("Data Source=" + filePath + ";Persist Security Info=False;");
        DataTable tmpTable = new DataTable();
        SqlCeDataAdapter AD = new SqlCeDataAdapter(Query, conn);
        AD.Fill(tmpTable);

    The issue appears to be with the SqlCeDataReader. Hope this helps someone else out!

  • How does PHP PDO work internally ?

    - by Rachel
    I want to use PDO in my application, but before that I want to understand how PDOStatement->fetch and PDOStatement->fetchAll work internally.

    For my application, I want to do something like "SELECT * FROM myTable" and write the rows to a CSV file; the table has around 90,000 rows of data. My question concerns using PDOStatement->fetch as I am here:

        // First, prepare the statement, using placeholders
        $query = "SELECT * FROM tableName";
        $stmt = $this->connection->prepare($query);

        // Execute the statement
        $stmt->execute();
        var_dump($stmt->fetch(PDO::FETCH_ASSOC));

        while ($row = $stmt->fetch(PDO::FETCH_ASSOC)) {
            echo "Hi";
            // Export every row to a file
            fputcsv($data, $row);
        }

    After every fetch from the database, will the result of that fetch be stored in memory? Meaning, when I do the second fetch, will memory hold the data of the first fetch as well as the second? And so, if I have 90,000 rows of data and fetch one row at a time, does memory keep accumulating each new fetch result without releasing the previous ones, so that by the last fetch memory already holds 89,999 rows of data?

    Is this how PDOStatement::fetch works? Performance-wise, how does this stack up against PDOStatement::fetchAll?
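
    For comparison, here is the same row-at-a-time CSV export written against Python's DB-API (sqlite3 as a stand-in for the MySQL source; not the original PHP): iterating the cursor pulls one row per step, while fetchall() would materialize all 90,000 rows in a single list up front.

        import csv
        import sqlite3

        conn = sqlite3.connect("app.db")        # placeholder database file
        cur = conn.execute("SELECT * FROM myTable")

        with open("export.csv", "w", newline="") as f:
            writer = csv.writer(f)
            for row in cur:                     # analogous to PDOStatement::fetch in a loop
                writer.writerow(row)

        # cur.fetchall() would instead be the analogue of PDOStatement::fetchAll.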

  • Hibernate Auto-Increment Setup

    - by dharga
    How do I define an entity for the following table? I've got something that isn't working, and I just want to see what I'm supposed to do.

        USE [BAMPI_TP_dev]
        GO
        SET ANSI_NULLS ON
        GO
        SET QUOTED_IDENTIFIER ON
        GO
        SET ANSI_PADDING ON
        GO
        CREATE TABLE [dbo].[MemberSelectedOptions](
            [OptionId] [int] NOT NULL,
            [SeqNo] [smallint] IDENTITY(1,1) NOT NULL,
            [OptionStatusCd] [char](1) NULL
        ) ON [PRIMARY]
        GO
        SET ANSI_PADDING OFF

    This is what I have already that isn't working:

        @Entity
        @Table(schema="dbo", name="MemberSelectedOptions")
        public class MemberSelectedOption extends BampiEntity implements Serializable {

            @Embeddable
            public static class MSOPK implements Serializable {
                private static final long serialVersionUID = 1L;

                @Column(name="OptionId")
                int optionId;

                @GeneratedValue(strategy=GenerationType.IDENTITY)
                @Column(name="SeqNo", unique=true, nullable=false)
                BigDecimal seqNo;

                // Getters and setters here...
            }

            private static final long serialVersionUID = 1L;

            @EmbeddedId
            MSOPK pk = new MSOPK();

            @Column(name="OptionStatusCd")
            String optionStatusCd;

            // More getters and setters here...
        }

    I get the following stack trace:

        [5/25/10 15:49:40:221 EDT] 0000003d JDBCException E org.slf4j.impl.JCLLoggerAdapter error Cannot insert explicit value for identity column in table 'MemberSelectedOptions' when IDENTITY_INSERT is set to OFF.
        [5/25/10 15:49:40:221 EDT] 0000003d AbstractFlush E org.slf4j.impl.JCLLoggerAdapter error Could not synchronize database state with session
        org.hibernate.exception.SQLGrammarException: could not insert: [com.bob.proj.ws.model.MemberSelectedOption]
            at org.hibernate.exception.SQLStateConverter.convert(SQLStateConverter.java:90)
            at org.hibernate.exception.JDBCExceptionHelper.convert(JDBCExceptionHelper.java:66)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2285)
            at org.hibernate.persister.entity.AbstractEntityPersister.insert(AbstractEntityPersister.java:2678)
            at org.hibernate.action.EntityInsertAction.execute(EntityInsertAction.java:79)
            at org.hibernate.engine.ActionQueue.execute(ActionQueue.java:279)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:263)
            at org.hibernate.engine.ActionQueue.executeActions(ActionQueue.java:167)
            at org.hibernate.event.def.AbstractFlushingEventListener.performExecutions(AbstractFlushingEventListener.java:321)
            at org.hibernate.event.def.DefaultFlushEventListener.onFlush(DefaultFlushEventListener.java:50)
            at org.hibernate.impl.SessionImpl.flush(SessionImpl.java:1028)
            at org.hibernate.impl.SessionImpl.managedFlush(SessionImpl.java:366)
            at org.hibernate.transaction.JDBCTransaction.commit(JDBCTransaction.java:137)
            at com.bcbst.bamp.ws.dao.MemberSelectedOptionDAOImpl.saveMemberSelectedOption(MemberSelectedOptionDAOImpl.java:143)
            at com.bcbst.bamp.ws.common.AlertReminder.saveMemberSelectedOptions(AlertReminder.java:76)
            at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
            at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)

  • web development using google sites

    - by CHID
    Hi. I have hosted a website, but now my client asks to move it to http://sites.google.com. They have registered their domain with Google. I logged in to the site and saw the procedure for creating a website from scratch, but is there any way to directly transfer the existing site into Google Sites?

    For instance, on my site I have included CSS files in a folder called stylesheets/css and access them through the link tag, and there are several folders like that (images, scripts, etc.).

    1. If I have to transfer the site to Google Sites, where do I create the folders and related files? I have the privileges to log in to the admin part of the site.
    2. Is there any way to create a database and access it in Google Sites?
    3. I also see that only HTML pages can be created. Is there a way to add PHP pages or other scripting languages?
    4. Going forward, will Google Sites be useful for professional web design?

    Please give ideas on whether or not Google Sites is a good idea to go with.

  • Generic unit test scheduling

    - by Raphink
    Hello, I'm (re)writing a program that does generic unit test scheduling. The current program is a mono-threaded Perl program, but I'm willing to modularize it and parallelize the tests. I'm also considering rewriting it in Python.

    Here is what I need to do. I have a list of tests, with the following attributes:

    - uri: a URI to test (could be HTTP/HTTPS/SSH/local);
    - depends: an associative array of tests/values that this test depends on;
    - join: a list of DB joins to be added when selecting items to process in this test;
    - depends_db: additional conditions to add to the DB request when selecting items to process in this test.

    The program builds a dependency tree, beginning with the tests that have no dependencies. For each test:

    1. a list of items is selected from the database using the conditions (results of depending tests, joins, and depends_db);
    2. the list of items is sent to the URI (using POST or stdin);
    3. the result is retrieved as a YAML file listing the state and comments for the test for each tested item;
    4. the results are stored in the DB;
    5. the test returns, allowing depending tests to be performed.

    Finally, the program generates reports (CSV, DB, graphviz) of the performed tests.

    The primary use of this program currently is to test a fleet of machines against services such as backup, DNS, etc. The tests can then be:

    - backup: hosted on the backup machine(s), called through HTTP, checks whether the machines' backups went well;
    - DNS: hosted on the local machine, called via stdin, checks whether the machines' FQDNs have valid DNS entries.

    Does such a tool/module already exist? What would be the best implementation to achieve this (using Perl or Python)?
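
    The dependency-ordering part, at least, is standard: in Python the stdlib can do it. A rough sketch (Python 3.9+; the test names are invented, and the run step is just a stub for "select items, POST them to the test's URI, store the YAML result"):

        from graphlib import TopologicalSorter

        # Each test maps to the set of tests it depends on.
        tests = {
            "backup": set(),
            "dns": set(),
            "report": {"backup", "dns"},
        }

        def run_test(name: str) -> None:
            # Stub: select items from the DB, send them to the test's URI,
            # parse the YAML result, and store the states back in the DB.
            print("running", name)

        sorter = TopologicalSorter(tests)
        for name in sorter.static_order():   # dependencies always come first
            run_test(name)

    For the parallel version, TopologicalSorter's prepare()/get_ready()/done() protocol hands out every test whose dependencies have completed, which maps naturally onto a worker pool.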
