Search Results

Search found 31968 results on 1279 pages for 'database training'.


  • Keeping cached browser data inside ASP update panel textboxes/dropdowns for browser back click

    - by pmlevere
    I'm new to VB.NET/ASP and am running a VB web application in a visual database program called IronSpeed Designer. I'm primarily using IronSpeed in this case for its login/role security features. I have a basic two-page setup for this app. The user logs in and is taken to AccountEntry.aspx, where they enter data into textboxes and select some dropdown values that are linked to a SQL database, then click "Submit" to move to Results.aspx. On Results.aspx, the user can change data and then generate several types of reports (PDF, Excel, etc.).

    I'm used to setting up ASP controls inside ASP content areas, where, if a user clicks the browser back button, the previously entered data is still on the page for potential modification. In this web app, however, IronSpeed sets up the page and ASP controls inside an ASP update panel, and it appears that inside an update panel, cached values can't be seen on a browser back click. In this case, it's important for the user experience that the originally entered values still be there if the user advances to Results.aspx and then clicks the browser back button to modify a value on AccountEntry.aspx. If I have to, I'll set up session variables and disable browser back navigation, but that is a last resort. Is there any way to save cached data inside an ASP update panel and have it there for a browser back click?
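    If session state does turn out to be the only option, a minimal sketch of it in C# (control names txtAccount/ddlCategory and the session key are hypothetical): stash the entered values on submit and restore them when the page is revisited.

        using System;
        using System.Collections.Generic;
        using System.Web.UI;

        public partial class AccountEntry : Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // On a fresh GET (including a browser back to a non-cached page),
                // restore anything stashed by a previous submit.
                var saved = Session["AccountEntryValues"] as Dictionary<string, string>;
                if (!IsPostBack && saved != null)
                {
                    txtAccount.Text = saved["account"];
                    ddlCategory.SelectedValue = saved["category"];
                }
            }

            protected void btnSubmit_Click(object sender, EventArgs e)
            {
                // Stash the values before moving on to Results.aspx.
                Session["AccountEntryValues"] = new Dictionary<string, string>
                {
                    { "account", txtAccount.Text },
                    { "category", ddlCategory.SelectedValue }
                };
                Response.Redirect("Results.aspx");
            }
        }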

    Read the article

  • Interesting LinqToSql behaviour

    - by Ben Robinson
    We have a database table that stores the location of some wave files plus related metadata. There is a foreign key (employeeid) on the table that links to an employee table. However, not all wav files relate to an employee; for those records employeeid is null. We are using LINQ to SQL to access the database, and the query to pull out all non-employee-related wav file records is as follows:

        var results = from Wavs in db.WaveFiles
                      where Wavs.employeeid == null
                      select Wavs;

    Except this returns no records, despite the fact that there are records where employeeid is null. On profiling SQL Server I discovered the reason no records are returned: LINQ to SQL is turning it into SQL that looks very much like

        SELECT Field1, Field2 -- etc.
        FROM WaveFiles
        WHERE 1=0

    Obviously this returns no rows. However, if I go into the DBML designer, remove the association and save, all of a sudden the exact same LINQ query turns into

        SELECT Field1, Field2 -- etc.
        FROM WaveFiles
        WHERE EmployeeID IS NULL

    I.e. if there is an association then LINQ to SQL assumes that all records have a value for the foreign key (even though it is nullable and the property appears as a nullable int on the WaveFile entity) and as such deliberately constructs a WHERE clause that will return no records. Does anyone know if there is a way to keep the association in LINQ to SQL but stop this behaviour? A workaround I can think of quickly is to have a calculated field called IsSystemFile and set it to 1 if employeeid is null and 0 otherwise. However, this seems like a bit of a hack to work around strange behaviour of LINQ to SQL, and I would rather do something in the DBML file or define something on the foreign key constraint that will prevent this behaviour.
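    One workaround sometimes suggested for this (a sketch, assuming the DBML exposes the association as an Employee property on WaveFile): filter on the association reference rather than the raw foreign-key column, which LINQ to SQL translates into an IS NULL test on the underlying key.

        // Compare the association property, not the FK column.
        // "Employee" is the designer-generated association name assumed here;
        // the actual name depends on the DBML.
        var results = from w in db.WaveFiles
                      where w.Employee == null
                      select w;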

    Read the article

  • How should I pass the translated text to my object in my multilingual application?

    - by boatingcow
    Up until now, I have maintained a 'dictionary' table in my database, for example:

        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | phrase    | en                                    | fr                                            | etc... |
        +-----------+---------------------------------------+-----------------------------------------------+--------+
        | generated | Generated in %1$01.2f seconds at %2$s | Créée en %1$01.2f secondes à %2$s aujourd'hui | ...    |
        | submit    | Submit...                             | Envoyer...                                    | ...    |
        +-----------+---------------------------------------+-----------------------------------------------+--------+

    I'll then select all rows from the database for the column that matches the locale we're interested in (or read the cache from a file to speed db lookup) and dump the dictionary into an array called $lng. Then I'll have HTML helper objects like this in my view:

        $html->input(array('type' => 'submit', 'value' => $lng['submit'], etc...));
        ...
        $html->div(array('value' => sprintf($lng['generated'], $generated, date('H:i')), etc...));

    The translations can appear in PDF, XLS and AJAX responses too. The problem with my approach so far is that I now have loads of global $lng; in every class where there is a function that spits out UI code. How do other people get the translation into the object? Is this one scenario where globals aren't actually that bad? Would it be madness to create a class with accessors when the dictionary terms are all static?

    Read the article

  • Basic security, PHP MySQL

    - by yuudachi
    So I am making a basic log-in page. I have a good idea of what to do, but I'm still unsure of some things. I have a database full of students and, of course, a password column. I know I'm going to use md5 encryption in that column. The student enters their e-mail and student ID, and they get e-mailed a password if those are correct. But where do I create the password? Do I have to manually add the password (which is just a randomly generated string) in MySQL for all the students? And I am supposed to send the password to the student; how will I know what to send the student if the password is encrypted? I was thinking about generating the password when the student first enters their e-mail and student ID. They get an e-mail of the random string, and at the same time, I add the same random string to the database, encrypted. Is that how it's supposed to work, though? And it feels unsafe doing that all on the same page. Sorry for the long-winded, newbish question. I find this all fascinating at the same time as well (AES and RSA encryption :O)
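    The flow described above amounts to: generate a random string, store only its hash, and e-mail the plaintext exactly once. A sketch in C# for illustration (the actual stack is PHP/MySQL; names here are hypothetical, and a salted, slower hash is generally considered safer than plain md5):

        using System;
        using System.Security.Cryptography;
        using System.Text;

        static class PasswordHelper
        {
            const string Alphabet = "ABCDEFGHJKLMNPQRSTUVWXYZabcdefghjkmnpqrstuvwxyz23456789";

            // Generate a random temporary password to e-mail to the student.
            public static string GeneratePassword(int length)
            {
                var bytes = new byte[length];
                using (var rng = RandomNumberGenerator.Create())
                    rng.GetBytes(bytes);
                var chars = new char[length];
                for (int i = 0; i < length; i++)
                    chars[i] = Alphabet[bytes[i] % Alphabet.Length]; // slight modulo bias; fine for a sketch
                return new string(chars);
            }

            // Hash for storage; the database keeps only this, never the plaintext.
            public static string Md5Hash(string password)
            {
                using (var md5 = MD5.Create())
                    return BitConverter.ToString(md5.ComputeHash(Encoding.UTF8.GetBytes(password)))
                                       .Replace("-", "").ToLowerInvariant();
            }
        }

        // Usage: store Md5Hash(pw) in the students table, e-mail pw to the student.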

    Read the article

  • Trouble with building up a string in Clojure

    - by Aki Iskandar
    Hi gang - [this may seem like my problem is with Compojure, but it isn't - it's with Clojure]

    I've been pulling my hair out on this seemingly simple issue but am getting nowhere. I am playing with Compojure (a light web framework for Clojure) and I would just like to generate a web page showing my list of todos that are in a PostgreSQL database. The code snippets are below (I left out the database connection, query, etc., but that part isn't needed, because the specific issue is that the resulting HTML shows nothing between the <body> and </body> tags). As a test, I tried hard-coding the string in the call to main-layout, like this:

        (html (main-layout "Aki's Todos"
                           "Haircut<br>Study Clojure<br>Answer a question on Stackoverflow"))

    - and it works fine. So the real issue is that I do not believe I know how to build up a string in Clojure. Not the idiomatic way, and not by calling out to Java's StringBuilder either - as I have attempted to do in the code below. A virtual beer and a big upvote to whoever can solve it! Many thanks!

    =============================================================

        ;The master template (a very simple POC for now, but can expand on it later)
        (defn main-layout
          "This is one of the html layouts for the pages assets - just like a master page"
          [title body]
          (html
            [:html
             [:head
              [:title title]
              (include-js "todos.js")
              (include-css "todos.css")]
             [:body body]]))

        (defn show-all-todos
          "This function will generate the todos HTML table and call the layout function"
          []
          (let [rs (select-all-todos)
                sbHTML (new StringBuilder)]
            (for [rec rs]
              (.append sbHTML (str rec "<br><br>")))
            (html (main-layout "Aki's Todos" (.toString sbHTML)))))

    =============================================================

    Again, the result is a web page but with nothing between the body tags. If I replace the code in the for loop with println statements and direct the output to the REPL - forgetting about the web page stuff (i.e. the call to main-layout) - the resultset gets printed, BUT the issue is with building up the string. Thanks again. ~Aki

    Read the article

  • Export the datagrid data to text in asp.net+c#.net

    - by SRIRAM
    Problem: it reports that there is no assembly reference/namespace for Database.

        Database db = DatabaseFactory.CreateDatabase();
        DBCommandWrapper selectCommandWrapper = db.GetStoredProcCommandWrapper("sp_GetLatestArticles");
        DataSet ds = db.ExecuteDataSet(selectCommandWrapper);
        StringBuilder str = new StringBuilder();
        for (int i = 0; i <= ds.Tables[0].Rows.Count - 1; i++)
        {
            for (int j = 0; j <= ds.Tables[0].Columns.Count - 1; j++)
            {
                str.Append(ds.Tables[0].Rows[i][j].ToString());
            }
            str.Append("<BR>");
        }
        Response.Clear();
        Response.AddHeader("content-disposition", "attachment;filename=FileName.txt");
        Response.Charset = "";
        Response.Cache.SetCacheability(HttpCacheability.NoCache);
        Response.ContentType = "application/vnd.text";
        System.IO.StringWriter stringWrite = new System.IO.StringWriter();
        System.Web.UI.HtmlTextWriter htmlWrite = new HtmlTextWriter(stringWrite);
        Response.Write(str.ToString());
        Response.End();
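    The Database and DatabaseFactory types in that snippet come from the Enterprise Library Data Access Application Block, so the error suggests the assembly reference and using directives are missing. A sketch of what the code appears to assume (exact assembly names depend on the EntLib version; DBCommandWrapper is from the 1.x block):

        // Assumed project reference: Microsoft.Practices.EnterpriseLibrary.Data.dll
        // (plus its Common/Configuration dependencies).
        using System.Data;
        using System.Text;
        using Microsoft.Practices.EnterpriseLibrary.Data;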

    Read the article

  • Extract data from PostgreSQL DB without using pg_dump

    - by John Horton
    There is a PostgreSQL database on which I only have limited access (e.g., I can't use pg_dump). I am trying to create a local "mirror" by exporting certain tables from the database. I do not have the permissions needed to just dump a table as SQL from within psql. Right now, I just have a Python script that iterates through my table_names, selects all fields and then exports them as a CSV:

        for table_name, file_name in zip(table_names, file_names):
            cmd = """echo "\\\copy (select * from %s)" to stdout WITH CSV HEADER | psql -d remote_db | gzip > ./%s/%s.gz""" % (table_name, dir_name, file_name)
            os.system(cmd)

    I would like to not use CSV if possible, as I lose the field types and the encoding can get messed up. First best would probably be some way of getting the generating SQL code for the table, then using \copy. Next best would be XML, ideally with some way of preserving the field types. If that doesn't work, I think the final option might be two queries - one to get the field data types, the other to get the actual data. Any thoughts or advice would be greatly appreciated - thanks!

    Read the article

  • Association Mapping Details confusion?

    - by AaronLS
    I have never understood why the associations in Entity Framework look the way they do in the Mapping Details window. When I select the line between two tables for an association, for example FK_ApplicationSectionNodes_FormItems, it shows this:

        Association
          Maps to ApplicationSectionNodes
          FormItems
            (key symbol) FormItemId : Int32  <-->  FormItemId : int
          ApplicationSectionNodes
            (key symbol) NodeId : Int32  <-->  (key symbol) NodeId : int

    Fortunately this one was created automatically for me based on the foreign key constraints in my database, but whenever no constraints exist, I have a hard time creating associations manually (when the database doesn't have a diagram set up), because I don't understand the mapping details for associations. The FormItems table has a primary key identity column FormItemId, and ApplicationSectionNodes contains a FormItemId column that is the foreign key and has NodeId as a primary key identity column. What really makes no sense to me is why the association lists the NodeId at all, when NodeId doesn't have anything to do with the foreign key relationship. (It's even more confusing with self-referencing relationships, but maybe if I could understand the above case I'd have a better handle on those.)

        CREATE TABLE [dbo].[ApplicationSectionNodes](
            [NodeID] [int] IDENTITY(1,1) NOT NULL,
            [OutlineText] [varchar](5000) NULL,
            [ParentNodeID] [int] NULL,
            [FormItemId] [int] NULL,
            CONSTRAINT [PK_ApplicationSectionNodes] PRIMARY KEY CLUSTERED
            (
                [NodeID] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY],
            CONSTRAINT [UQ_ApplicationSectionNodesFormItemId] UNIQUE NONCLUSTERED
            (
                [FormItemId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        ALTER TABLE [dbo].[ApplicationSectionNodes] WITH NOCHECK ADD CONSTRAINT [FK_ApplicationSectionNodes_ApplicationSectionNodes] FOREIGN KEY([ParentNodeID])
        REFERENCES [dbo].[ApplicationSectionNodes] ([NodeID])
        GO
        ALTER TABLE [dbo].[ApplicationSectionNodes] NOCHECK CONSTRAINT [FK_ApplicationSectionNodes_ApplicationSectionNodes]
        GO
        ALTER TABLE [dbo].[ApplicationSectionNodes] WITH NOCHECK ADD CONSTRAINT [FK_ApplicationSectionNodes_FormItems] FOREIGN KEY([FormItemId])
        REFERENCES [dbo].[FormItems] ([FormItemId])
        GO
        ALTER TABLE [dbo].[ApplicationSectionNodes] NOCHECK CONSTRAINT [FK_ApplicationSectionNodes_FormItems]
        GO

    FormItems table:

        CREATE TABLE [dbo].[FormItems](
            [FormItemId] [int] IDENTITY(1,1) NOT NULL,
            [FormItemType] [int] NULL,
            CONSTRAINT [PK_FormItems] PRIMARY KEY CLUSTERED
            (
                [FormItemId] ASC
            ) WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ON [PRIMARY]
        ) ON [PRIMARY]
        GO

        ALTER TABLE [dbo].[FormItems] WITH NOCHECK ADD CONSTRAINT [FK_FormItems_FormItemTypes] FOREIGN KEY([FormItemType])
        REFERENCES [dbo].[FormItemTypes] ([FormItemTypeId])
        GO
        ALTER TABLE [dbo].[FormItems] NOCHECK CONSTRAINT [FK_FormItems_FormItemTypes]
        GO

    Read the article

  • Validations for a has_many/belongs_to relationship

    - by Craig Walker
    I have a Recipe model which has_many Ingredients (which in turn belongs_to Recipe). I want Ingredient to be existence-dependent on Recipe; an Ingredient should never exist without a Recipe. I'm trying to enforce the presence of a valid Recipe ID in the Ingredient. I've been doing this with a validates :recipe, :presence => true (Rails 3) statement in Ingredient. This works fine if I save the Recipe before adding an Ingredient to its ingredients collection. However, if I don't have explicit control over the saving (such as when I'm creating a Recipe and its Ingredients from a nested form) then I get an error:

        Ingredients recipe can't be blank

    I can get around this simply by dropping the presence validation on Ingredient.recipe. However, I don't particularly like this, as it means I'm working without a safety net. What is the best way to enforce existence-dependence in Rails? Things I'm considering (please comment on the wisdom of each):

    - Adding a not-null constraint on the ingredients.recipe_id database column, and letting the database do the checking for me.
    - A custom validation that somehow checks whether the Ingredient is in an unsaved recipe's ingredient collection (and thus can't have a recipe_id but is still considered valid).

    Read the article

  • How to create view model without sorting collections in memory.

    - by Chevex
    I have a view model (below).

        public class TopicsViewModel
        {
            public Topic Topic { get; set; }
            public Reply LastReply { get; set; }
        }

    I want to populate an IQueryable<TopicsViewModel> with values from my IQueryable<Topic> collection and IQueryable<Reply> collection. I do not want to use the attached entity collection (i.e. Topic.Replies) because I only want the last reply for that topic, and doing Topic.Replies.Last() loads the entire entity collection in memory and then grabs the last one in the list. I am trying to stay in IQueryable so that the query is executed in the database. I also don't want to foreach through topics and query replyRepository.Replies, because looping through IQueryable<Topic> will start the lazy loading. I'd prefer to build one expression and have all the leg work done in the lower layers. I have the following:

        IQueryable<TopicsViewModel> topicsViewModel =
            from x in topicRepository.Topics
            from y in replyRepository.Replies
            where y.TopicID == x.TopicID
            orderby y.PostedDate ascending
            select new TopicsViewModel { Topic = x, LastReply = y };

    But this isn't working. Any ideas how I can populate an IQueryable or IEnumerable of TopicsViewModel so that it queries the database and grabs topics and that topic's last reply? I am trying really hard to avoid grabbing all replies related to that topic. I only want to grab the last reply. Thank you for any insight you have to offer.
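    A sketch of one way to keep this in a single expression (assuming the LINQ provider can translate FirstOrDefault inside a projection, which Entity Framework generally can): correlate a subquery per topic that orders replies newest-first and takes only the first one.

        // For each topic, fetch only its most recent reply in the same database query.
        IQueryable<TopicsViewModel> topicsViewModel =
            from t in topicRepository.Topics
            select new TopicsViewModel
            {
                Topic = t,
                LastReply = (from r in replyRepository.Replies
                             where r.TopicID == t.TopicID
                             orderby r.PostedDate descending
                             select r).FirstOrDefault()
            };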

    Read the article

  • PHP Class Question

    - by Jerry
    Hi all, I am trying to build a PHP class to check the username and password in MySQL. I am getting a "mysql_query(): supplied argument is not a valid MySQL-Link resource in D:\2010Portfolio\football\main.php on line 38" / "database has errors:" message. When I move my userQuery code out of my class, it works fine; it only fails inside the class. I am not sure if the problem is that I don't build the connection to MySQL inside my class or not. Thanks for any help! Here is my code:

        <?php
        $userName = $_POST['userName'];
        $userPw = $_POST['password'];

        class checkUsers
        {
            public $userName;
            public $userPw;

            function checkUsers($userName = '', $userPw = '')
            {
                $this->userName = $userName;
                $this->userPw = $userPw;
            }

            function check1()
            {
                if (empty($this->userName) || empty($this->userPw)) {
                    $message = "<strong>Please Enter Username and Password</strong>";
                } else {
                    // line 38 is next line
                    $userQuery = mysql_query("SELECT userName, userPw FROM user WHERE userName='$this->userName' and userPw='$this->userPw'", $connection);
                    if (!$userQuery) {
                        die("database has errors: " . mysql_error());
                    }
                    if (mysql_num_rows($userQuery) == 0) {
                        $message = "Please enter valid username and password";
                    }
                } // end empty check
                return $message;
            } // end check1 method
        } // end class

        $checkUser = new checkUsers($userName, $userPw);
        echo $checkUser->check1();
        ?>

    I appreciate any help!

    Read the article

  • .htaccess Rewrite Rule to Keep .php File Extensions

    - by user2672112
    I'm upgrading my static website, which had .php extensions on the content pages. I've created my own simple CMS, which will retrieve data from a MySQL database from now on, keeping the URL structure the same as the old one. The CMS has a get function to retrieve the URL structure from the database. Overall, it started working fine with .html when I tested. But when I change the .html extension to .php in my .htaccess code, the content pages start showing "Internal Server Error. The server encountered an internal error or misconfiguration and was unable to complete your request." Here is my .htaccess code:

        RewriteBase /
        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^([^?]*).php$ content.php?pid=$1

    Perhaps there is a conflict; here is the code with the .html extension that actually works fine:

        RewriteBase /
        Options +FollowSymLinks
        RewriteEngine On
        RewriteRule ^([^?]*).html$ content.php?pid=$1

    So basically, content pages with .html are working and .php are not. But I need my content pages to be .php. Please help. Thanks in advance... :)
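    A likely culprit (an educated guess, not verified against this setup): the pattern ^([^?]*).php$ also matches content.php itself, so each rewrite re-enters the rule until Apache gives up with a 500 - which would explain why the .html variant works, since content.php never matches .html. A common guard is to exclude the target script:

        RewriteBase /
        Options +FollowSymLinks
        RewriteEngine On
        # Don't rewrite the handler script itself, or the rule loops forever.
        RewriteCond %{REQUEST_URI} !^/content\.php$
        RewriteRule ^([^?]*)\.php$ content.php?pid=$1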

    Read the article

  • [CODE GENERATION] How to generate DELETE statements in PL/SQL, based on the tables' FK relations?

    - by The chicken in the kitchen
    Is it possible, via script/tool, to automatically generate many DELETE statements based on the tables' FK relations, using Oracle PL/SQL? For example: I have the table CHICKEN (CHICKEN_CODE NUMBER), and there are 30 tables with FK references to its CHICKEN_CODE that I need to delete from; there are also another 150 tables foreign-key-linked to those 30 tables that I need to delete from first. Is there some PL/SQL tool/script that I can run in order to generate all the necessary DELETE statements based on the FK relations for me? (By the way, I know about cascade delete on the relations, but please pay attention: I CAN'T USE IT IN MY PRODUCTION DATABASE, because it's dangerous!) I'm using Oracle Database 10g R2. This is a view I have previously written, but of course it is not recursive:

        CREATE OR REPLACE FORCE VIEW RUN
        (
            OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME, VINCOLO
        )
        AS
        SELECT OWNER_1,
               CONSTRAINT_NAME_1,
               TABLE_NAME_1,
               TABLE_NAME,
               '(' || LTRIM(EXTRACT(XMLAGG(XMLELEMENT("x", ',' || COLUMN_NAME)), '/x/text()'), ',') || ')' VINCOLO
          FROM (SELECT CON1.OWNER OWNER_1,
                       CON1.TABLE_NAME TABLE_NAME_1,
                       CON1.CONSTRAINT_NAME CONSTRAINT_NAME_1,
                       CON1.DELETE_RULE,
                       CON1.STATUS,
                       CON.TABLE_NAME,
                       CON.CONSTRAINT_NAME,
                       COL.POSITION,
                       COL.COLUMN_NAME
                  FROM DBA_CONSTRAINTS CON, DBA_CONS_COLUMNS COL, DBA_CONSTRAINTS CON1
                 WHERE CON.OWNER = 'TABLE_OWNER'
                   AND CON.TABLE_NAME = 'TABLE_OWNED'
                   AND ((CON.CONSTRAINT_TYPE = 'P') OR (CON.CONSTRAINT_TYPE = 'U'))
                   AND COL.TABLE_NAME = CON1.TABLE_NAME
                   AND COL.CONSTRAINT_NAME = CON1.CONSTRAINT_NAME
                   --AND CON1.OWNER = CON.OWNER
                   AND CON1.R_CONSTRAINT_NAME = CON.CONSTRAINT_NAME
                   AND CON1.CONSTRAINT_TYPE = 'R'
                 GROUP BY CON1.OWNER, CON1.TABLE_NAME, CON1.CONSTRAINT_NAME, CON1.DELETE_RULE, CON1.STATUS,
                          CON.TABLE_NAME, CON.CONSTRAINT_NAME, COL.POSITION, COL.COLUMN_NAME)
        GROUP BY OWNER_1, CONSTRAINT_NAME_1, TABLE_NAME_1, TABLE_NAME;

    ... and it contains the error of using DBA_CONSTRAINTS instead of ALL_CONSTRAINTS...

    Read the article

  • Export large amount of data from Oracle 10G to SQL Server 2005

    - by uniball
    Dear all, I need to export 100 million data rows (avg row length ~100 bytes) from an Oracle 10g database table into SQL Server 2005 (over WAN/VLAN with 6 Mbit/s capacity) on a regular basis. So far, these are the options that I have tried, with a quick summary of each. Has anyone tried this before? Are there other, better options? Which option would be the best in terms of performance and reliability? The time taken has been calculated using tests on smaller amounts of data and then extrapolating to estimate the time required.

    1. Using the data import wizard on the SQL Server, or SSIS packages, to import the data. It will take around 150 hours to complete the task.
    2. Using an Oracle batch job to spool data into a comma-delimited flat file, then using an SSIS package to FTP this file to the SQL Server and load directly from the flat file. The issue here is the size of the flat file, which is expected to run into GBs.
    3. Although this option is drastically different, I am even considering using a Linked Server to query the Oracle data directly at run-time, to avoid bringing in data. Performance is a big problem, and I have limited control over the Oracle database in terms of creating table indexes.

    Regards, Uniball

    Read the article

  • What should I do for accommodating large-scale data storage and retrieval?

    - by kailashbuki
    There are two columns in a table inside a MySQL database. The first column contains a fingerprint, while the second one contains the list of documents which have that fingerprint. It's much like an inverted index built by search engines. An instance of a record inside the table is shown below:

        34  "doc1, doc2, doc45"

    The number of fingerprints is very large (it can range up to trillions). There are basically the following operations on the database: inserting/updating a record, and retrieving a record according to a match on the fingerprint. The table definition Python snippet is:

        self.cursor.execute("CREATE TABLE IF NOT EXISTS `fingerprint` (fp BIGINT, documents TEXT)")

    And the snippet for the insert/update operation is:

        if self.cursor.execute("UPDATE `fingerprint` SET documents=CONCAT(documents,%s) WHERE fp=%s",
                               ("," + newDocId, thisFP)) == 0L:
            self.cursor.execute("INSERT INTO `fingerprint` VALUES (%s, %s)", (thisFP, newDocId))

    The only bottleneck I have observed so far is the query time in MySQL. My whole application is web based, so time is a critical factor. I have also thought of using Cassandra but have little knowledge of it. Please suggest a better way to tackle this problem.

    Read the article

  • Auto increment with a Unit Of Work

    - by Derick
    Context: I'm building a persistence layer to abstract the different types of databases that I'll be needing. On the relational side I have MySQL, Oracle and PostgreSQL. Let's take the following simplified MySQL tables:

        CREATE TABLE Contact (
            ID varchar(15),
            NAME varchar(30)
        );

        CREATE TABLE Address (
            ID varchar(15),
            CONTACT_ID varchar(15),
            NAME varchar(50)
        );

    I use code to generate system-specific alphanumeric unique IDs, fitting 15 chars in this case. Thus, if I insert a Contact record with its Addresses, I have my generated Contact.ID and Address.CONTACT_IDs before committing. I've created a Unit of Work (amongst others) as per Martin Fowler's patterns to add transaction support. I'm using a key-based Identity Map in the UoW to track the changed records in memory. It works like a charm for the scenario above - all pretty standard stuff so far.

    The question scenario comes in when I have a database that is not under my control and the ID fields are auto-increment (or, in Oracle, sequences). In this case I do not have the db-generated Contact.ID beforehand, so when I create my Address I do not have a value for Address.CONTACT_ID. The transaction has not been started on the DB session, since everything is kept in the Identity Map in memory.

    Question: what is a good approach to address this (avoiding unnecessary db round trips)? One idea: retrieve the last ID. I can do a call to the database to retrieve the last ID, like:

        SELECT Auto_increment
        FROM information_schema.tables
        WHERE table_name='Contact';

    But this is MySQL-specific, and probably something similar can be done for the other databases. If I do this, then I would need to do the first insert, get the ID and then update the children (Address.CONTACT_IDs) - all in the current transaction context.
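    A sketch of an alternative that avoids the information_schema query (which reflects the table-wide counter and is race-prone under concurrent inserts): MySQL's LAST_INSERT_ID() is scoped to the current connection, so the parent can be inserted inside the transaction and its key read back safely. ADO.NET-style C#, assuming the MySQL Connector/NET provider; connection, tx and contact come from the surrounding UoW and are hypothetical here.

        // Inside the Unit of Work commit, with an open connection and transaction:
        using (var cmd = connection.CreateCommand())
        {
            cmd.Transaction = tx;
            // Multi-statement batch: insert the parent, then read the
            // connection-scoped generated key in the same round trip.
            cmd.CommandText =
                "INSERT INTO Contact (NAME) VALUES (@name); SELECT LAST_INSERT_ID();";
            cmd.Parameters.AddWithValue("@name", contact.Name);
            long contactId = Convert.ToInt64(cmd.ExecuteScalar());

            // Patch the in-memory children before flushing them.
            foreach (var address in contact.Addresses)
                address.ContactId = contactId;
        }

    For Oracle, the analogous connection-safe options are INSERT ... RETURNING ID INTO a bound output parameter, or selecting seq.NEXTVAL up front and inserting the known key.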

    Read the article

  • DBTransactions between stateless calls using GUIDs

    - by Marty Trenouth
    I'm looking to add transactional support to my DB engine, abstracting transaction handling down to passing in GUIDs with the DB action command. The DB engine would run similar to:

        private static Database DB;
        public static Dictionary<Guid, DBTransaction> Transactions = new ...();

        public static void DoDBAction(string cmdstring, List<Parameter> parameters, Guid TransactionGuid)
        {
            DBCommand cmd = BuildCommand(cmdstring, parameters);
            if (Transactions.ContainsKey(TransactionGuid))
                cmd.Transaction = Transactions[TransactionGuid];
            DB.ExecuteScalar(cmd);
        }

        public static DBCommand BuildCommand(string cmd, List<Parameter> parameters)
        {
            // Create DB command from EntLib Database and assign parameters
        }

        public static Guid BeginTransaction()
        {
            // creates a new Transaction, adds it to "Transactions" and opens a new connection
        }

        public static Guid Commit(Guid g)
        {
            // commits the Transaction, removes it from "Transactions" and closes the connection
        }

        public static Guid Rollback(Guid g)
        {
            // rolls back the Transaction, removes it from "Transactions" and closes the connection
        }

    The calling system would run similar to:

        Guid g;
        try
        {
            g = DBEngine.BeginTransaction();
            DBEngine.DoDBAction(cmdstring1, parameters, g);
            // do some other stuff
            DBEngine.DoDBAction(cmdstring2, parameters2, g);
            // sit here and wait for a response from other item
            DBEngine.DoDBAction(cmdstring3, parameters3, g);
            DBEngine.Commit(g);
        }
        catch (Exception)
        {
            DBEngine.Rollback(g);
        }

    Does this interfere with .NET connection pooling (other than a connection accidentally being left open)? Will EntLib keep the connection open until the commit or rollback?

    Read the article

  • What's the best way to transfer a large dataset over a .NET web service?

    - by Malvineous
    I've inherited a C# .NET application which talks to a web service, and the web service talks to an Oracle database. I need to add an export function to the UI, to produce an Excel spreadsheet of some of the data. I have created a web service function to run a database query, load the data into a DataTable and then return it, which works fine for a small number of rows. However there is enough data in the full run that the client application locks up for a few minutes and then returns a timeout error. Obviously this isn't the best way to retrieve such a large dataset. Before I go ahead and come up with some dodgy way of splitting the call, I'm wondering if there is already something in place that can handle this. At the moment I'm thinking of a startExport function then repeatedly calling a next50Rows function until there is no data left, but because web services are stateless this means I'm going to have to keep some sort of ID number around and deal with the associated permissions. It would mean that I don't have to load the entire data set into the web server's memory though, which is one good thing. So if anyone knows a better way to retrieve a large amount of data (in a table format) over a .NET web service, please let me know!
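    A sketch of a stateless variant of the startExport/next50Rows idea above (no handle or permissions to track; method, table and column names are hypothetical): the client asks for page N of a deterministic ordering and stops when a page comes back empty.

        using System.Data;
        using System.Web.Services;

        public class ExportService : WebService
        {
            [WebMethod]
            public DataTable GetExportPage(int pageNumber, int pageSize)
            {
                // A deterministic ORDER BY keeps the pages stable between calls.
                const string sql =
                    "SELECT * FROM (SELECT t.*, ROW_NUMBER() OVER (ORDER BY t.id) rn " +
                    "FROM export_data t) WHERE rn > :lo AND rn <= :hi";

                using (IDbConnection conn = OpenOracleConnection()) // hypothetical existing helper
                using (IDbCommand cmd = conn.CreateCommand())
                {
                    cmd.CommandText = sql;
                    AddParam(cmd, "lo", (long)pageNumber * pageSize);
                    AddParam(cmd, "hi", (long)(pageNumber + 1) * pageSize);
                    var page = new DataTable("ExportPage");
                    page.Load(cmd.ExecuteReader());
                    return page; // an empty table tells the client it is done
                }
            }

            static void AddParam(IDbCommand cmd, string name, object value)
            {
                IDbDataParameter p = cmd.CreateParameter();
                p.ParameterName = name;
                p.Value = value;
                cmd.Parameters.Add(p);
            }

            static IDbConnection OpenOracleConnection()
            {
                // placeholder for however the service already opens its Oracle connection
                throw new System.NotImplementedException();
            }
        }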

    Read the article

  • How does SQLite on Android handle long strings?

    - by Levara
    I'm wondering how Android's implementation of SQLite handles long strings. Reading the online documentation on SQLite, it says that strings in SQLite are limited to one million characters; my strings are definitely smaller. I'm creating a simple RSS application, and after parsing an HTML document and extracting text, I'm having a problem saving it to a database. I have two tables in the database, feeds and articles. RSS feeds are correctly saved to and retrieved from the feeds table, but when saving to the articles table, logcat says that it cannot save the extracted text to its column. I don't know if other columns are making problems too; there is no mention of them in logcat. I'm wondering, since the text is from an article on the web, are signs like (", ', ;) creating problems? Is Android automatically escaping them, or do I have to do that? I'm using a technique for inserting similar to the one in the Notepad tutorial:

        public long insertArticle(long feedid, String title, String link, String description,
                                  String h1, String h2, String h3, String p, String image, long date) {
            ContentValues initialValues = new ContentValues();
            initialValues.put(KEY_FEEDID, feedid);
            initialValues.put(KEY_TITLE, title);
            initialValues.put(KEY_LINK, link);
            initialValues.put(KEY_DESCRIPTION, description);
            initialValues.put(KEY_H1, h1);
            initialValues.put(KEY_H2, h2);
            initialValues.put(KEY_H3, h3);
            initialValues.put(KEY_P, p);
            initialValues.put(KEY_IMAGE, image);
            initialValues.put(KEY_DATE, date);
            return mDb.insert(DATABASE_TABLE_ARTICLES, null, initialValues);
        }

    Column p is for the extracted text; h1, h2 and h3 are for headers from a page. Logcat reports only column p to be the problem. The table is created with the following statement:

        private static final String DATABASE_CREATE_ARTICLES =
            "create table articles( _id integer primary key autoincrement, feedid integer, " +
            "title text, link text not null, description text," +
            "h1 text, h2 text, h3 text, p text, image text, date integer);";

    Read the article

  • ZF2 - How to use the Hydrator/exchangeArray() to populate a nested object

    - by Dominic Watson
    I've got an object whose values are stored in my database. My object also contains another object, which is stored in the database using just its ID (a foreign key).

    http://framework.zend.com/manual/2.0/en/modules/zend.stdlib.hydrator.html

    Before the Hydrator/exchangeArray() functionality in ZF2, you would use a mapper to grab everything you need to create the object. Now I'm trying to eliminate this extra layer by just using hydration/exchangeArray() to populate my objects, but am a bit stuck on creating the nested object. Should my entity have the inner object's table injected into it, so I can create the inner object if its ID is passed to my exchangeArray() function? Here are example entities:

        // Village
        id, name, position, square_id

        // Map Square
        id, name, type

    Upon sending square_id to my Village's exchangeArray() function, it would get the map table and use the hydrator to pull in the square using the ID I have. It doesn't seem right to have mapper instances inside my entity, as I thought entities should be disconnected from anything but their own entity-specific parameters and functionality.

    Read the article

  • ASP.NET Web Application: use 1 or multiple virtual directories

    - by tster
    I am working on a (largish) internal web application which has multiple modules (security, execution, features, reports, etc.). All the pages in the app share navigation, CSS, JS, controls, etc. I want to make a single "Web Application" project which includes all the pages for the app and references various projects containing the database and business logic. However, some of the people on the project want to have separate projects for the pages of each module. To make this clearer, this is what I'm advocating:

        /WebInterface*
        /SecurityLib
        /ExecutionLib
        etc...

    And here is what they are advocating:

        /SecurityInterface*
        /SecurityLib
        /ExecutionInterface*
        /ExecutionLib
        etc...

    * project will be published to a virtual directory of IIS

    Basically, what I'm looking for is the advantages of both approaches. Here is what I can think of so far.

    Single virtual directory pros:

    - Modules can share a single MasterPage
    - Modules can share UserControls (this will be common)
    - Links to other modules are within the same virtual directory, and thus don't need to be fully qualified
    - Less chance of having incompatible module versions deployed together

    Multiple virtual directories pros:

    - Can publish a new version of a single module without disrupting other modules
    - Modules are more compartmentalized; less likely that changes will break other modules

    I don't buy those arguments, though. First, using load-balanced servers (which we will have), we should be able to publish new versions of the project with zero downtime, assuming there are no breaking database changes. Second, if something "breaks" another module, then there is either an improper dependency, or the break will show up eventually in the other module when the developers copy over the latest version of the UserControl, MasterPage or DLL. As a point of reference, there are about 10 developers on the project for about 50% of their time. The initial development will be about 9 months.

    Read the article

  • How can I share an entity framework model across website users

    - by richardmoss
    Hello. Currently my website is based on MVC and the Entity Framework, running against a SQL Server 2005 database. So far it has all been running very smoothly, and I really enjoy MVC and its slimmer, more concise code (and no huge viewstates or soul-destroying postbacks ;)). Recently I was working on upgrading the site to use a simple forum system, and this is where I started running into problems. When I was testing the site using two different browsers, if I created or replied to a post in one browser, the other browser couldn't see the post.

    At the moment, each visitor to the site gets their own copy of the entity model, which I store in their session data. Obviously this is the problem, as updates to one model aren't getting carried to the other. As a test, I tried storing a single copy of the model which all visitors would access, by assigning the model to a static variable. This worked, and both browsers could see each other's modifications. However, it had its side effects. For example, if I fired up both browsers at the same time while the model was being initialized, one browser would crash and the other would work fine, despite me using a locking object, so in theory one of them should have been delayed until the model was ready (of course, I could have implemented this wrong ;)). Also, this site originally did use one model for all visitors, and when it was live it frequently shut down, killing the IIS application pool as it did. Now, I'm not sure if this was related, but I don't really want to reintroduce whatever bug caused that shutdown.

    So, my question is a simple one really: what is the best way of either using the same model for all website users so they all see updates, or, if they do have separate copies (which I imagine will have a performance impact in time), how can the models detect changes in the database and update themselves accordingly? Thanks in advance for any advice!

    Regards, Richard Moss
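    A common third option (a sketch; the context type name is hypothetical) is to share nothing at all: create a short-lived context per request or per unit of work, so every page load reads fresh data and no context instance is ever touched by two users or threads. It is caching the context in Session or a static that tends to produce both of the symptoms described above.

        // One context per request/action; never stored in Session or a static field.
        public ActionResult Thread(int id)
        {
            using (var db = new ForumEntities()) // hypothetical generated context
            {
                var topic = db.Topics.FirstOrDefault(t => t.TopicId == id);
                return View(topic);
            }
        }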

    Read the article

  • How to write to a text file in pipe delimited format from SQL Server / ASP.Net?

    - by NJTechGuy
    I have a text file which needs to be constantly updated (at regular intervals). All I want is the syntax, and possibly some code, that outputs data from a SQL Server database using ASP.NET. The code I have so far is:

        <%@ Import Namespace="System.IO" %>
        <script language="vb" runat="server">
        sub Page_Load(sender as Object, e as EventArgs)
            Dim FILENAME as String = Server.MapPath("Output.txt")
            Dim objStreamWriter as StreamWriter
            ' If Len(Dir$(FILENAME)) > 0 Then Kill(FILENAME)
            objStreamWriter = File.AppendText(FILENAME)
            objStreamWriter.WriteLine("A user viewed this demo at: " & DateTime.Now.ToString())
            objStreamWriter.Close()
            Dim objStreamReader as StreamReader
            objStreamReader = File.OpenText(FILENAME)
            Dim contents as String = objStreamReader.ReadToEnd()
            lblNicerOutput.Text = contents.Replace(vbCrLf, "<br>")
            objStreamReader.Close()
        end sub
        </script>
        <asp:label runat="server" id="lblNicerOutput" Font-Name="Verdana" />

    With PHP it is a breeze, but with .NET I have no clue. If you could help me with the database connectivity and how to write the data in pipe-delimited format to an Output.txt file, that would be awesome. Thanks, guys!
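    A sketch of the database half in C# (connection string, query and output path are placeholders): read rows with a SqlDataReader and join each row's fields with "|" before writing the line out.

        using System;
        using System.Data.SqlClient;
        using System.IO;

        static void ExportPipeDelimited(string connectionString, string outputPath)
        {
            using (var conn = new SqlConnection(connectionString))
            using (var cmd = new SqlCommand("SELECT Col1, Col2, Col3 FROM dbo.SomeTable", conn))
            {
                conn.Open();
                using (SqlDataReader reader = cmd.ExecuteReader())
                using (var writer = new StreamWriter(outputPath, false)) // overwrite on each run
                {
                    while (reader.Read())
                    {
                        var fields = new string[reader.FieldCount];
                        for (int i = 0; i < reader.FieldCount; i++)
                            fields[i] = Convert.ToString(reader.GetValue(i));
                        writer.WriteLine(string.Join("|", fields)); // pipe-delimited row
                    }
                }
            }
        }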

    Read the article

  • NHibernate correct way to reattach cached entity to different session

    - by Chris Marisic
    I'm using NHibernate to query a list of objects from my database. After I get the list of objects, I iterate over them and apply a distance-approximation algorithm to find the nearest object. I consider this function of getting the list of objects and applying the algorithm over them to be a heavy operation, so I cache the object the algorithm finds in HttpRuntime.Cache. From that point on, whenever I'm given the same input again, I can pull the object directly from the cache instead of having to hit the database and traverse the list.

    My object is a complex object that has collections attached to it. In the query where I return the full list of objects, I don't bring back any of the sub-collections eagerly, so when I read my cached object I need lazy loading to work correctly to be able to display the object fully. Originally I tried using this to re-associate my cached object with a new session:

        _session.Lock(obj, LockMode.None);

    However, when accessing the page concurrently from another instance, I get the error:

        Illegal attempt to associate a collection with two open sessions

    I then tried something different:

        _session.Merge(obj);

    However, watching the output of this in NHProf shows that it is deleting and re-associating my object's contained collections with my object, which is not what I want, although it seems to work fine. What is the correct way to do this? Neither of these seems right.
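    One way to sidestep reattachment entirely (a sketch; the entity type and helper are hypothetical): cache only the identifier of the nearest object, then load it through the current session with Get(), which always returns an instance attached to that session, so lazy loading works, and it can be served from NHibernate's second-level cache if one is configured.

        // Cache the id, not the entity graph: ids are immutable and session-free.
        var cachedId = HttpRuntime.Cache["nearest-object-id"] as int?;
        if (cachedId == null)
        {
            var nearest = FindNearest();          // hypothetical: the heavy query + distance pass
            HttpRuntime.Cache["nearest-object-id"] = (int?)nearest.Id;
            return nearest;
        }
        return _session.Get<Place>(cachedId.Value); // fresh, attached to this session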

    Read the article

  • How can I share controller logic in ASP.NET MVC for 2 controller actions, where one is overridden

    - by AbeP
    Hello, I am trying to implement user-friendly URLs while keeping the existing routes, and was able to do so using the ActionName attribute on top of my action method (http://stackoverflow.com/questions/436866/can-you-overload-controller-methods-in-asp-net-mvc). I have two actions:

        [ActionName("UserFriendlyProjectIndex")]
        public ActionResult Index(string projectName) { ... }

        public ActionResult Index(long id) { ... }

    Basically, what I am trying to do is store the user-friendly URL in the database for each project. If the user enters the URL /Project/TopSecretProject/, the UserFriendlyProjectIndex action gets called. I do a database lookup and, if everything checks out, I want to apply the exact same logic that is used in the Index action. I am basically trying to avoid writing duplicate code. I know I can separate the common logic into another method, but I wanted to see if there is a built-in way of doing this in ASP.NET MVC. Any suggestions? I tried the following and got the "view could not be found" error message:

        [ActionName("UserFriendlyProjectIndex")]
        public ActionResult Index(string projectName)
        {
            var filteredProjectName = projectName.EscapeString().Trim();
            if (string.IsNullOrEmpty(filteredProjectName))
                return RedirectToAction("PageNotFound", "Error");

            using (var db = new PIMPEntities())
            {
                var project = db.Project.Where(p => p.UserFriendlyUrl == filteredProjectName).FirstOrDefault();
                if (project == null)
                    return RedirectToAction("PageNotFound", "Error");
                return View(Index(project.ProjectId));
            }
        }

    Here's the error message:

        The view 'UserFriendlyProjectIndex' or its master could not be found. The following locations were searched:
        ~/Views/Project/UserFriendlyProjectIndex.aspx
        ~/Views/Project/UserFriendlyProjectIndex.ascx
        ~/Views/Shared/UserFriendlyProjectIndex.aspx
        ~/Views/Shared/UserFriendlyProjectIndex.ascx
        Project\UserFriendlyProjectIndex.spark
        Shared\UserFriendlyProjectIndex.spark

    I am using the Spark view engine and LINQ to Entities, if that helps. Thank you!
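    A sketch of the delegation approach (LoadProject is a hypothetical shared helper): return the other action's result directly instead of wrapping it in View(), and have the shared action name its view explicitly. A ViewResult with no explicit name resolves against the current action name from the route (here, UserFriendlyProjectIndex), which is why the lookup above fails.

        [ActionName("UserFriendlyProjectIndex")]
        public ActionResult Index(string projectName)
        {
            // ... lookup as above ...
            return Index(project.ProjectId); // delegate; don't wrap the result in View()
        }

        public ActionResult Index(long id)
        {
            var model = LoadProject(id);      // hypothetical shared data access
            return View("Index", model);      // explicit view name, so it works from both routes
        }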

    Read the article
