Search Results

Search found 28024 results on 1121 pages for 'sql 2014'.

Page 1015 of 1121

  • How do I delete a row in a table in Joomla?

    - by Sara
    I have a table:

        id  h_id  t_id
        1   3     1
        2   3     2
        3   3     3
        4   4     2
        5   4     3

    id is the primary key. I have not created a JTable for this table. Now I want to delete rows by h_id. Is there a method I can use without writing a SQL DELETE query, something like:

        $db = JFactory::getDBO();
        $row =& $this->getTable('tablename');
        $row->delete($pk);

    Any better solution will be greatly appreciated.

    Read the article

  • Build Systems for PHP Web Apps

    - by macinjosh
    I want to start automating more of my web development process, so I'm looking for a build system. I write mostly PHP apps on Mac OS X and deploy to Linux servers over FTP. A lot of my clients have basic hosting providers, so shell access to their servers is typically not available; however, remote MySQL access is usually present. Here is what I want a build system to do:

    When building:
      - Lint JavaScript files
      - Validate CSS files
      - Validate HTML files
      - Minify and concatenate JS and CSS files
      - Verify PHP syntax
      - Set Debug/Production flags

    When deploying:
      - Check out the latest version from SVN
      - Run the build process
      - Upload files to the server via FTP
      - Run SQL scripts on the remote DB

    I realize this is a lot of work to automate, but I think it would be worth it. So what is the best way to start down this path? Is there a system that can handle both builds and deploys, or should I look for separate solutions? What systems would you recommend?

    Read the article

  • Using * in SELECT Query

    - by libregeek
    I am currently porting an application written in MySQL 3 and PHP 4 to MySQL 5 and PHP 5. On analysis I found several SQL queries which use "select * from tablename" even if only one column (field) is processed in PHP. The table has almost 60 columns and it has a primary key. In most cases, the only column used is id, which is the primary key. Will there be any performance boost if I use queries in which the column names are explicitly mentioned instead of *? (In this application there is only one method in which we need all the columns; all other methods return only a subset of the columns.)
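
    A minimal sketch of the change being considered (the table and column names below are illustrative, not from the original schema):

        -- Before: reads and transfers all ~60 columns even though only `id` is used in PHP.
        SELECT * FROM members WHERE email = 'user@example.com';

        -- After: only the needed column is fetched; if `email` is indexed, this can often
        -- be satisfied from the index alone, without touching the full row.
        SELECT id FROM members WHERE email = 'user@example.com';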

    Read the article

  • NoSql Crash Course/Tutorial

    - by Chris Thompson
    Hi all, I've seen NoSQL pop up quite a bit on SO and I have a solid understanding of why you would use it (from here, Wikipedia, etc). This could be due to the lack of a concrete and uniform definition of what it is (more of a paradigm than a concrete implementation), but I'm struggling to wrap my head around how I would go about designing a system that would use it or how I would implement it in my system. I'm really stuck in a relational-db mindset, thinking of things in terms of tables and joins... At any rate, does anybody know of a crash course/tutorial on a system that would use it (kind of a "hello world" for a NoSQL-based system), or a tutorial that takes an existing "Hello World" app based on SQL and converts it to NoSQL (not necessarily in code, but just a high-level explanation)? I see this having one solid answer, but if you guys feel like it should be community wiki, I'll be happy to change it. Thanks! Chris

    Read the article

  • How to select names from a column in MS Excel 2007

    - by Alks
    Hey guys, I have an Excel document which has a list of students and their group names. I have another sheet within the same Excel document which is called comments. In this sheet, I would like to have a list of the individual team names. There are 65 students and 14 defined groups. Is there a way to select the 14 group names, without repetition? Cells B3-B67 have the student names. Cells C3-C67 have the team names. The team names are entered against each student. I know in SQL I could use something like select distinct(team_name), but how can I replicate this in Excel? Cheers, Alks.

    Read the article

  • NHibernate Lazy="Extra"

    - by Adam Rackis
    Is there a good explanation out there on what exactly lazy="extra" is capable of? All the posts I've seen all just repeat the fact that it turns references to MyObject.ItsCollection.Count into select count(*) queries (assuming they're not loaded already). I'd like to know if it's capable of more robust things, like turning MyObject.ItsCollection.Any(o => o.Whatever == 5) into a SELECT ...EXISTS query. Section 18.1 of the docs only touches on it. I'm not an NH developer, so I can't really experiment with it and watch SQL Profiler without doing a bit of work getting everything set up; I'm just looking for some sort of reference describing what this feature is capable of. Thank you!

    Read the article

  • Using a table-alias in Kohana queries?

    - by Aristotle
    I'm trying to run a simple query with $this->db in Kohana, but am running into some syntax issues when I try to use an alias for a table within my query:

        $result = $this->db
            ->select("ci.chapter_id, ci.book_id, ci.chapter_heading, ci.chapter_number")
            ->from("chapter_info ci")
            ->where(array("ci.chapter_number" => $chapter, "ci.book_id" => $book))
            ->get();

    It seems to me that this should work just fine. I'm stating that "chapter_info" ought to be known as "ci," yet this isn't taking for some reason. The error is pretty straight-forward:

        There was an SQL error: Table 'gb_data.chapter_info ci' doesn't exist -
        SELECT `ci`.`chapter_id`, `ci`.`book_id`, `ci`.`chapter_heading`, `ci`.`chapter_number`
        FROM (`chapter_info ci`)
        WHERE `ci`.`chapter_number` = 1 AND `ci`.`book_id` = 1

    If I use the full table name, rather than an alias, I get the expected results without error. This requires me to write much more verbose queries, which isn't ideal. Is there some way to use shorter names for tables within Kohana's query-builder?

    Read the article

  • ASP.NET cached aspx page & IIS logs

    - by Vishal Seth
    Hi guys, Is there any way to find out whether the ASP.NET runtime served a cached copy of an ASPX page or actually went through the page life cycle? Here is my problem: I'm seeing many entries in my IIS log files that were served successfully (200 OK). I have corresponding logging code (Log4Net API) in the Session_Start and Application_BeginRequest() events that logs every request to my DB with more details. I'm not seeing the corresponding entries in my SQL DB for some requests that should have been logged by the Log4Net code. Are there any logs available to find out whether a cached copy was served by the .NET worker process? Moreover, if my logging code threw an exception, wouldn't that show up as a 500 in the IIS logs? The code is on Windows Server 2008, IIS 7.

    Read the article

  • Fluent NHibernate SchemaExport to SQLite not pluralizing Table Names

    - by weenet
    I am using SQLite as my db during development, and I want to postpone actually creating a final database until my domains are fully mapped. So I have this in my Global.asax.cs file:

        private void InitializeNHibernateSession()
        {
            Configuration cfg = NHibernateSession.Init(
                webSessionStorage,
                new[] { Server.MapPath("~/bin/MyNamespace.Data.dll") },
                new AutoPersistenceModelGenerator().Generate(),
                Server.MapPath("~/NHibernate.config"));

            if (ConfigurationManager.AppSettings["DbGen"] == "true")
            {
                var export = new SchemaExport(cfg);
                export.Execute(true, true, false, NHibernateSession.Current.Connection, File.CreateText(@"DDL.sql"));
            }
        }

    The AutoPersistenceModelGenerator hooks up the various conventions, including a TableNameConvention like so:

        public void Apply(FluentNHibernate.Conventions.Instances.IClassInstance instance)
        {
            instance.Table(Inflector.Net.Inflector.Pluralize(instance.EntityType.Name));
        }

    This is working nicely except that the SQLite db generated does not have pluralized table names. Any idea what I'm missing? Thanks.

    Read the article

  • Error in MySQL Workbench Forward Engineer Stored Procedures

    - by colithium
    I am using MySQL Workbench (5.1.18 OSS rev 4456) to forward engineer a SQL CREATE script. For every stored procedure, the automatic process outputs something like:

        DELIMITER //
        USE DB_Name//
        DB_Name//
        DROP procedure IF EXISTS `DB_Name`.`SP_Name` //
        USE DB_Name//
        DB_Name//
        CREATE PROCEDURE `DB_Name`.`SP_Name` (id INT)
        BEGIN
            SELECT * FROM Table_Name WHERE Id = id;
        END//

    The two lines that are simply the database name followed by the delimiter are errors and are reported as such when running the script. As long as they are ignored, it looks like everything gets created just fine. But why would it add those lines? I am creating the database in the WAMP environment, which uses MySQL 5.1.36.

    Read the article

  • What risks are there in using extracted PHP superglobals?

    - by Zephiro
    Using these functions, what security risk am I running? Is it necessary to use extract(), or is there a better way to convert the superglobal arrays such as $_POST and $_GET into individual variables? Specifically, is there a risk of SQL injection with the code below, or is there an alternative to extract()? I work in the following way:

        if ( get_magic_quotes_gpc() ) {
            $_GET  = stripslashes( $_GET );
            $_POST = stripslashes( $_POST );
        }

        function vars_globals($value = '')
        {
            if (is_array( $value ))
                $r = &$value;
            else
                parse_str( $value, $r );
            return $r;
        }

        $r = vars_globals( $_GET );
        extract($r, EXTR_SKIP);
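
    A minimal sketch of a safer pattern, assuming PDO is available and a hypothetical users table; it avoids extract() by whitelisting the expected request keys and letting a prepared statement handle escaping:

        <?php
        // Hypothetical sketch: read only the keys you expect instead of extract()-ing everything.
        $allowed = array('id', 'page');                        // whitelist of expected keys
        $input   = array_intersect_key($_GET, array_flip($allowed));

        // Cast/validate each value explicitly instead of trusting raw request data.
        $id   = isset($input['id'])   ? (int) $input['id']      : 0;
        $page = isset($input['page']) ? (string) $input['page'] : '';

        // Prepared statement: the value is never concatenated into the SQL text,
        // so SQL injection through $_GET is not possible on this query.
        $pdo  = new PDO('mysql:host=localhost;dbname=test', 'user', 'pass');
        $stmt = $pdo->prepare('SELECT name FROM users WHERE id = :id');
        $stmt->execute(array(':id' => $id));
        $row  = $stmt->fetch(PDO::FETCH_ASSOC);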

    Read the article

  • What database works well with 200+GB of data?

    - by taw
    I've been using MySQL (with InnoDB, on Amazon RDS) because it's sort of the universal default, but it's been ridiculously under-performing, and tweaking it only delays the inevitable. The data is mostly relatively short blobs (<1 kB each) of information about 100M+ URLs. There is (or should be; MySQL cannot seem to handle it) a very high volume of inserts / updates / retrievals but few complex queries - not that complex queries wouldn't be useful, but MySQL is so slow that it's far faster to get the data out, process it locally, and cache the results somewhere. I can keep tweaking MySQL and throwing more hardware at it, but it seems increasingly futile. So what are the options? SQL/relational model/etc. optional - anything will do as long as it's fast, networked, and language-independent.

    Read the article

  • Of transactions and Mongo

    - by Nuri Halperin
    Originally posted on: http://geekswithblogs.net/nuri/archive/2014/05/20/of-transactions-and-mongo-again.aspx

    What's the first thing you hear about NoSQL databases? That they lose your data? That there are no transactions? No joins? No hope for "real" applications? Well, you *should* be wondering whether a certain kind of database is the right one for your job. But if you do so, you should be wondering that about "traditional" databases as well!

    In the spirit of exploration let's take a look at a common challenge: You are a bank. You have customers with accounts. Customer A wants to pay B. You want to allow that only if A can cover the amount being transferred. Let's look at the problem without any particular database engine in mind. What would you do? How would you ensure that the amount transfer is done "properly"? Would you prevent a "transaction" from taking place unless A can cover the amount? There are several options:

      1. Prevent any change to A's account while the transfer is taking place. That boils down to locking.
      2. Apply the change, and allow A's balance to go below zero. Charge person A some interest on the negative balance. Not friendly, but certainly a choice.
      3. Don't do either.

    Options 1 and 2 are difficult to attain in the NoSQL world. Mongo won't save you headaches here either. Option 3 looks a bit harsh. But here's where this can go: a ledger. See, an account doesn't need to be represented by a single row in a table of all accounts with only the current balance on it. More often than not, accounting systems use ledgers. And entries in ledgers - as it turns out - don't actually get updated. Once a ledger entry is written, it is not removed or altered. A transaction is represented by an entry in the ledger stating an amount withdrawn from A's account and an entry in the ledger stating an addition of said amount to B's account. For the sake of space, both sides can be recorded as one entry. Think {Timestamp, FromAccountId, ToAccountId, Amount}. The implication of the original question - "how do you enforce the non-negative balance rule" - then boils down to:

      1. Insert an entry in the ledger.
      2. Run validation of recent entries.
      3. Insert a reverse entry to roll back the transaction if validation failed.

    What is validation? Sum up the transactions that A's account has (all deposits and debits), and ensure the balance is positive. For the sake of efficiency, one can roll up transactions and "close the book" with a pseudo entry stating the balance as of midnight or something. This lets you avoid doing math on the fly over too many transactions. You simply run from the latest "approved balance" marker to date. But that's an optimization, and premature optimizations are the root of (some? most?) evil.

    Back to some nagging questions though: "But Mongo is only eventually consistent!" Well, yes, kind of. It's not actually true that Mongo has no transactions. It would be more descriptive to say that Mongo's transaction scope is a single document in a single collection. A write to a Mongo document happens completely or not at all. So although it is true that you can't update more than one document "at the same time" under a "transaction" umbrella as an atomic update, it is NOT true that there is no isolation. So a competition between two concurrent updates is completely coherent and the writes will be serialized. They will not scribble on the same document at the same time.
    In our case - in choosing a ledger approach - we're not even trying to "update" a document; we're simply adding a document to a collection. So there goes the "no transactions" issue.

    Now let's turn our attention to consistency. What you should know about Mongo is that at any given moment, only one member of a replica set is writable. This means that the writable instance in a set of replicated instances always has "the truth". There could be replication lag such that a reader going to one of the replicas still sees the "old" state of a collection or document. But in our ledger case, things fall nicely into place: run your validation against the writable instance. It is guaranteed to have the ledger either with (after) or without (before) the ledger entry got written. No funky states. Again, writing to the ledger *adds* a document, so there's no inconsistent document state to be had either way.

    Next, we might worry about data loss. Here, Mongo offers several write concerns. A write concern in Mongo is a mode that marshals how uptight you want the db engine to be about actually persisting a document write to disk before it reports to the application that it is "done". The most volatile is to say you don't care. In that case, Mongo would just accept your write command and say back "thanks" with no guarantee of persistence. If the server loses power at the wrong moment, it may have said "ok" but actually not written the data to disk. That's kind of bad. Don't do that with data you care about. It may be good for votes in a poll regarding how cute a furry animal is, but not so good for business. There are several other write concerns, varying from flushing the write to the disk of the writable instance, to flushing to disk on several members of the replica set, a majority of the replica set, or all of the members of a replica set. The first choice is the quickest, as no network coordination is required besides the main writable instance. The others impose extra network and time cost. Depending on your tolerance for latency and read lag, you will face a choice of what works for you. It's really important to understand that no data loss occurs once a document is flushed to an instance. The record is on disk at that point. From that point on, backup strategies and disaster recovery are your worry, not loss of power to the writable machine. This scenario is no different from a relational database at that point.

    Where does this leave us? Oh, yes. Eventual consistency. By now, we have ensured that the "source of truth" instance has the correct data, persisted and coherent. But because of lag, the app may have gone to the writable instance, performed the update, and then gone to a replica and looked at the ledger there before the transaction replicated. Here are two options to deal with this. Similar to write concerns, Mongo supports read preferences. An app may choose to read only from the writable instance. This is not an awesome choice to make for every read, because it just burdens the one instance and doesn't make use of the other read-only servers. But this choice can be made on a query-by-query basis. So for the app that our person A is using, we can have person A issue the transfer command to B, and then if that same app is going to immediately ask "are we there yet?" we'll query that same writable instance. But B and anyone else in the world can just chill and read from a read-only instance. They have no basis to expect that the ledger has just been written to.
    So as far as they know, the transaction hasn't happened until they see it appear later. We can further relax the demand by creating an application UI that reacts to a write command with "thank you, we will post it shortly" instead of "thank you, we just did everything and here's the new balance". This is a very powerful thing. UI design for highly scalable systems can't insist that all databases be locked just to paint an "all done" on screen. People understand. They have already been trained by many online businesses that placing an order does not mean the product is already outside your door waiting (yes, I know, large retailers are working on it... but we're not there yet).

    The second thing we can do is add some artificial delay to a transaction's visibility on the ledger. The way that works is simply adding some logic such that the query against the ledger never returns a transaction newer than, say, 15 minutes whose validation flag is not set. This buys us time in two ways: replication can catch up to all instances by then, and validation rules can run and determine whether this transaction should be "negated" with a compensating transaction. In case we do need to "roll back" the transaction, the backend system can place the timestamp of the compensating transaction at the exact same time or 1 ms after the original one. Effectively, once A or B visits their ledger, both transactions would be visible and the overall balance "as of now" would reflect no change. The two transactions (attempted/reverted) would be visible, since we do actually account for the attempt.

    Hold on a second. There's a hole in the story: what if several transfers from A to some accounts are registered, and two independent validators attempt to compute the balance concurrently? Is there a chance that both would conclude non-sufficient funds even though rolling back transaction 100 would free up enough for transaction 117 (some random later transaction)? Yes, there is that chance. But the integrity of the business rule is not compromised, since the prime rule is: don't dispense money you don't have. To minimize or eliminate this scenario, we can also assign a single validation process per origin account. This may seem non-scalable, but it can easily be done as a "sharded" distribution. Say we have 11 validation threads (or processing nodes, etc.). We divide the account number space such that each validator is exclusively responsible for a certain range of account numbers. Sounds cunningly similar to Mongo's sharding strategy, doesn't it? Each validator then works in isolation. More capacity needed? Chop the account space into more chunks.

    So where are we now with the nagging questions? "No joins": Huh? What are those for? "No transactions": You mean no cross-collection and no cross-document transactions? Granted - but you don't always need them either. "No hope for real applications": well... There are more issues and edge cases to slog through, I'm sure. But hopefully this gives you some ideas of how to solve common problems without distributed locking and relational databases. But then again, you can choose relational databases if they suit your problem.
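
    As a rough illustration of the ledger approach described above (mongo shell syntax; the collection and field names are assumptions for this sketch, not taken from any real system):

        // 1. Record the transfer as a single, immutable ledger entry.
        db.ledger.insert({
            ts: new Date(),
            fromAccountId: "A",
            toAccountId: "B",
            amount: 100,
            validated: false
        });

        // 2. Validate: sum everything entering and leaving A's account.
        var credits = db.ledger.aggregate([
            { $match: { toAccountId: "A" } },
            { $group: { _id: null, total: { $sum: "$amount" } } }
        ]).toArray();
        var debits = db.ledger.aggregate([
            { $match: { fromAccountId: "A" } },
            { $group: { _id: null, total: { $sum: "$amount" } } }
        ]).toArray();
        var balance = (credits.length ? credits[0].total : 0) -
                      (debits.length  ? debits[0].total  : 0);

        // 3. If A's balance went negative, nothing is updated in place; a compensating
        //    entry in the opposite direction is inserted instead.
        if (balance < 0) {
            db.ledger.insert({
                ts: new Date(),
                fromAccountId: "B",
                toAccountId: "A",
                amount: 100,
                compensates: true
            });
        }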

    Read the article

  • How to set up a testing LAMP environment to work with outsourcing companies?

    - by Kelvin
    Hello Guys, I need to set up a testing LAMP environment in my office to work with outsourcing companies. This is what I think should be done on my side:

      - Set up a testing web server with the same configuration as on production
      - Set up a testing SQL server with "fake data"?
      - Outsourcers should have access only to some part of the original code
      - Outsourcers should use CVS to update their code
      - Once testing is finished, someone releases the update
      - ............

    How would you separate the original code and database from the testing environment, but keep it as close as possible to production? What is the general practice for setting up a testing environment, and how do other companies deal with outsourcers? I will appreciate any of your thoughts and ideas from your personal experience. Maybe someone can suggest some articles on this topic. Thank you a lot!

    Read the article

  • Back Orders for ERP: data model references ?

    - by Patrick Honorez
    I have built an ERP using SQL Server as a back-end. These are the different types of Client documents (there are also Supplier Docs):

      - Order -- impact: BO
      - Delivery Note (also used for returns, with negative quantity) -- impact: BO, Stock
      - Invoice -- impact: accounting only
      - Credit Note -- impact: accounting, BO

    I use a complex system of self-joins (at detail level) to find out the quantities in each OrderDetail that still have a backorder (BO). I'd like to simplify this using a [group] field that could be used across all detail lines related to an original order. There are many difficult things to trace: a Return of a product may be due to a defect and thus increase the BO, or it can be just a return, joined with a Credit Note, and then has no impact on BO. My question is: do you know of any really good reference (book, web) for this matter?

    Read the article

  • Keeping a database structure up to date in a project where code is on subversion?

    - by Bruno De Barros
    I have been working with Subversion for a while now, and it's been incredible for the management of my projects, and even to help managing the deployment to several different servers, but there is just the one thing that still annoys me. Whenever I make any changes to the database structure, I need to update every server manually, I have to keep track of any changes I made, and because some of my servers run branches of the project (modifications that are still being worked on, or were made for different purposes), it's a bit awkward. Until now, I've been using a "database.sql" file, which is a dump of the database structure for a specific revision. But it just seems like such a bad way to manage this. And I was wondering, how does everyone else manage their MySQL databases when they're working on a project and using Subversion?
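
    One common pattern (a sketch under assumed file and table names, not a prescription): keep numbered migration scripts under version control next to the code, plus a small bookkeeping table in each database that records which migrations have already been applied:

        -- Hypothetical migration layout, kept in the repository alongside the code:
        --   db/migrations/001_create_users.sql
        --   db/migrations/002_add_last_login_to_users.sql
        --   db/migrations/003_create_orders.sql

        -- Each target database carries a small bookkeeping table:
        CREATE TABLE schema_version (
            version     INT          NOT NULL PRIMARY KEY,
            applied_at  DATETIME     NOT NULL,
            description VARCHAR(255) NOT NULL
        );

        -- A deploy script (or a migration tool such as Liquibase) applies, in order,
        -- every script whose number is greater than:
        SELECT MAX(version) FROM schema_version;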

    Read the article

  • Virtual Machine Performance - More RAM or More Processor?

    - by webworm
    When looking to improve Virtual Machine performance, what would be better: increasing the available RAM or increasing the processor power? Here is my choice:

      - Core 2 Duo @ 2.4 GHz with 8 GB RAM and integrated graphics (MacBook Pro 13")
      - Core i7 @ 2.6 GHz with 4 GB RAM and 512 MB dedicated graphics (MacBook Pro 15")

    I plan to run Windows x64 in the VM with SQL Server 2008, Visual Studio 2010, and SharePoint 2010. I am planning to run VMware Fusion v3. I also don't know whether a dedicated graphics card makes a difference when using a Virtual Machine. Thank you.

    Read the article

  • I can't create a view in oracle database using sqlplus (insufficient privileges)

    - by Nubkadiya
    I'm running this SQL:

        CREATE VIEW showMembersInfo(
            MemberID, Fname, Lname, Address, DOB, Telephone, NIC, Email,
            WorkplaceID, WorkName, WorkAddress, WorkTelephone,
            StartingDate, ExpiryDate, Amount,
            WitnessID, WitName, WitAddress, WitNIC, WitEmail, WitTelephone)
        AS
        SELECT mem.MemberID, mem.FirstName, mem.LastName, mem.Address, mem.DOB, mem.Telephone, mem.NIC, mem.Email,
               wrk.WorkPlaceID, wrk.Name, wrk.Address, wrk.Telephone,
               anl.StartingDate, anl.ExpiryDate, anl.Amount,
               wit.WitnessID, wit.Name, wit.Address, wit.NIC, wit.Email, wit.Telephone
        FROM Member mem, WorkPlace wrk, AnnualFees anl, Witness wit
        WHERE mem.MemberID = anl.MemberID
          AND mem.WorkPlaceID = work.WorkPlaceID
          AND mem.WitnessID = wit.WitnessID

    When I try to create the view I get this error:

        ERROR at line 1:
        ORA-01031: insufficient privileges

    Why is that? I'm logged in to sqlplus using sysman
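
    For reference, ORA-01031 on CREATE VIEW usually means the connected schema lacks the CREATE VIEW system privilege; a sketch of the grant a DBA could issue (the schema name below is a placeholder):

        -- Run as a privileged user (e.g. SYSTEM), not as the schema owner:
        GRANT CREATE VIEW TO your_schema;

        -- If the view is to be created in a different schema, CREATE ANY VIEW is needed instead:
        GRANT CREATE ANY VIEW TO your_schema;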

    Read the article

  • Is there a way to formally define a time interval for configuring a process?

    - by gshauger
    Horribly worded question... I know. I'm working on an application that processes data for the previous day. The problem is that I know the customer is eventually going to ask to run it for every hour or some other arbitrary time interval. I know that languages such as Java or SQL have masks for defining dates. Well, what about a way to define a time interval? Let me ask it this way: if someone asked you to create a configurable piece of software, how would you allow the user to specify the time intervals?

    Read the article

  • MySQL LEFT JOIN on multiple tables

    - by Jarrod
    Hi, I'm really struggling with this query. I have 4 tables (http://oberto.co.nz/db-sql.png): Invoice_Payments, Invoice, Client and Calendar. I'm trying to create a report by summing up the 'paid_amount' column in Invoice_Payments, by month/year. The query needs to include all months, even those with no data. The query also needs the condition (Invoice table): registered_id = [id]. I have tried the query below, which works, but falls short when 'date_paid' does not have any records for a month; the outcome is that the month does not show in the results. I added a Calendar table to resolve this, but I'm not sure how to left join to it.

        SELECT MONTHNAME(Invoice_Payments.date_paid) AS month,
               SUM(Invoice_Payments.paid_amount) AS total
        FROM Invoice, Client, Invoice_Payments
        WHERE Client.registered_id = 1
          AND Client.id = Invoice.client_id
          AND Invoice.id = Invoice_Payments.invoice_id
          AND date_paid IS NOT NULL
        GROUP BY YEAR(Invoice_Payments.date_paid), MONTH(Invoice_Payments.date_paid)

    Please see the above link for a basic ERD diagram of my scenario. Thanks for reading. I've posted this Q before but I think I worded it badly.
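
    A sketch of the kind of LEFT JOIN being asked about, assuming the Calendar table has one row per month with a month_start DATE column (that column name is an assumption; the real Calendar structure isn't shown):

        SELECT MONTHNAME(cal.month_start)       AS month,
               COALESCE(SUM(ip.paid_amount), 0) AS total
        FROM Calendar cal
        LEFT JOIN ( Invoice_Payments ip
                    JOIN Invoice inv ON inv.id = ip.invoice_id
                    JOIN Client  cli ON cli.id = inv.client_id
                                    AND cli.registered_id = 1 )
               ON ip.date_paid >= cal.month_start
              AND ip.date_paid <  cal.month_start + INTERVAL 1 MONTH
        GROUP BY cal.month_start
        ORDER BY cal.month_start

    Driving the query from Calendar rather than from Invoice_Payments is what keeps empty months in the result, and putting the registered_id condition inside the join (rather than in a WHERE clause) keeps it from discarding those empty months.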

    Read the article

  • How do I consume a COM+ local server from C#?

    - by Mystere Man
    I have a web application from a company that has gone out of business. We're looking to extend the web app a bit with some ASP.NET functionality. The web app was written as an ISAPI application in Delphi, and uses COM+ to talk to SQL Server and handle things like session management and authentication. So, in order to get the current user and other details, I have to use the undocumented COM+ components. I was able to dig out the type library and auto-generated IDL, but at this point I'm lost in creating a .NET proxy class for this. Is there a way to autogenerate the .NET COM+ proxy, either from the .dll itself (extracting the typelib info) or from the IDL? Note: these seem to be simple COM-style objects hosted in COM+ servers, no subscriptions or transaction monitoring.

    Read the article

  • Best toolkit for new Web application (odd requirements)

    - by ireadit
    I need to write a new application, and have no experience with any new technology, framework, or language. Here are the requirements:

      - HTML front end (best if it's cross-browser friendly)
      - Web deployable, but ideally also installable as a standalone app on a desktop
      - SQL Server database
      - Ideally, would like to use a good (and easy) AJAX toolkit with widgets
      - Ideally, would like to be able to write in ASP.NET but later (or concurrently) also write in Java. This is a big concern, as I would like to not have to rewrite the whole thing. Is there a toolkit I can use that makes this cross-platform requirement easier?

    All suggestions and comments are much appreciated. Thank you.

    Read the article

  • Image gallery control for ASP.Net with filtering capabilities

    - by ks78
    I'm writing an ASP.Net application which needs to display a large number of thumbnails, preferably in a paginated format. These thumbnails will be stored on the server's hard disk, but will have their filenames listed in a SQL Server database. What I want to do is to be able to filter the images being displayed based on criteria within the database. I've looked at the NotesForGallery control, which I really like, but it doesn't seem to have a way to do that. --if I'm wrong, please correct me. Are there any other image gallery type controls, preferably free, that can do what I need? I'm hoping someone can recommend a control or solution that will point me in the right direction. Thanks in advance.

    Read the article

  • How to create a custom ADO Multidimensional Catalog with no database

    - by Alan Clark
    Does anyone know of an example of how to dynamically define and build ADO MD (ActiveX Data Objects Multidimensional) catalogs and cube definitions with a set of data other than a database? Background: we have a huge amount of data in our application that we export to a database and then query using the usual SQL joins, groups, sums etc to produce reports. The data in the application is originally in objects and arrays. The problem is the amount of data is so large the export can take 2 hours. So I am trying to figure out a good way of querying the objects in memory, either by a custom OLAP algorithm or library, or ADO MD. But I haven't been able to find an example of using ADO MD without a database behind it. We are using Delphi 2010 so would use ADO ActiveX but I imagine the ADO.NET MD is similar. I realize that if the application data was already stored in a database the problem would solve itself. Also if Delphi had LINQ capability I could query the objects and arrays that way.

    Read the article

  • How to Manage a Big LINQ DataContext?

    - by Rev
    Hi. The major problem in .NET programs is "how to manage memory for best performance". Microsoft uses a garbage collector in .NET, and with that we don't need to do much to manage memory (or, better said, we can use the GC easily). But when you develop a big project (a business app), you end up with many tables and databases for your project. So if you use LINQ to SQL, you must build a DataContext that includes a hundred or more tables. That creates a problem: when you create an object from that DataContext, the object takes a large amount of memory. Also, we can't divide the DataContext into several DataContexts (because of the relations between the tables). So how do we manage the DataContext and memory?

    Read the article
