Search Results

Search found 34699 results on 1388 pages for 'database backup'.

Page 511/1388

  • One big call vs. multiple smaller TSQL calls

    - by BrokeMyLegBiking
    I have an ADO.NET/TSQL performance question. We have two options in our application: 1) One big database call with multiple result sets, then in code step through each result set and populate my objects; this results in one round trip to the database. 2) Multiple small database calls. There is much more code reuse with Option 2, which is an advantage of that option, but I would like some input on what the performance cost is. Are two small round trips twice as slow as one big round trip to the database, or is it just a small, say 10%, performance loss? We are using C# 3.5 and SQL Server 2008 with stored procedures and ADO.NET.
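
    For reference, a minimal C# (ADO.NET) sketch of option 1: one command returns several result sets and SqlDataReader.NextResult advances between them, so a single round trip can populate more than one list. The stored procedure name GetOrderData and the column layout are placeholders, not the poster's actual procedures.

        using System.Collections.Generic;
        using System.Data;
        using System.Data.SqlClient;

        static class MultiResultSetExample
        {
            // Sketch only: "dbo.GetOrderData" is a hypothetical procedure returning two result sets.
            public static void LoadInOneRoundTrip(string connectionString,
                                                  List<string> customers, List<string> orders)
            {
                using (var conn = new SqlConnection(connectionString))
                using (var cmd = new SqlCommand("dbo.GetOrderData", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    conn.Open();
                    using (SqlDataReader reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())                // first result set
                            customers.Add(reader.GetString(0));

                        if (reader.NextResult())             // advance to the second result set
                            while (reader.Read())
                                orders.Add(reader.GetString(0));
                    }
                }
            }
        }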

    Read the article

  • Output problem in MySQL query in MFC program

    - by D.Gaughan
    I'm currently working on a small MFC program that outputs data from a MySQL database. I can get output when I'm using an SQL statement that does not contain any variable, e.g. select album from Artists; but when I try to use a variable the program compiles but I get no output, e.g. mysql_perform_query(conn, "select album from Artists where artists = '" + m_search_edit + "'"). Here is the function for mysql_perform_query: MYSQL_RES* mysql_perform_query(MYSQL *conn, const char* query) { // send the query to the database if (mysql_query(conn, query)) { // printf("MySQL query error : %s\n", mysql_error(conn)); // exit(1); } return mysql_use_result(conn); } And here is the code block for outputting the data: struct connection_details mysqlD; mysqlD.server = "www.freesqldatabase.com"; // where the mysql database is mysqlD.user = "**********"; // the root user of mysql mysqlD.password = "***********"; // the password of the root user in mysql mysqlD.database = "***************"; // the database to pick // connect to the mysql database conn = mysql_connection_setup(mysqlD); CStringA query; query.Format("select album from Artists where artist = '%s'", CT2CA(m_search_edit)); res = mysql_perform_query(conn, query); //res = mysql_perform_query (conn, "select distinct artist from Artists"); while((row = mysql_fetch_row(res)) != NULL){ CString str; UpdateData(); str = ("%s\n", row[0]); UpdateData(FALSE); m_list_control.AddString(str); } The m_search_edit variable is the variable for an edit box. I am using Visual Studio 2008 with one copy of this program Unicode and one non-Unicode; I also have a version built with VC++ 6. Any tips on how I can get output from the database using the m_search_edit variable?

    Read the article

  • Steps to install solely Ubuntu 13.04 on a Dell Inspiron 14z ultrabook with SSD+HDD

    - by rishy
    I have tried a few things like disabling Intel Smart Response and choosing AHCI in the BIOS, but there are certain problems I am still facing. I can't see my SSD during the installation of Ubuntu (I am planning to install Ubuntu on my SSD and keep other files on the HDD). When I run Ubuntu my laptop gets overheated and battery backup drops to 90 minutes (I guess it's related to my graphics driver, an ATI Radeon HD 7570). The cooling fan seems to run at its fullest; it was working much better in Windows. So, overall, I wanted to know: what are the exact steps I need to follow to install Ubuntu on my SSD and then use my HDD to keep other files, and how can I get rid of the overheating and battery backup problems?

    Read the article

  • rsync to ONLY keep files in destination that have been removed from source

    - by David Corley
    We use rsync to copy filesystem contents from one machine to another as a backup. We first run a MACHINE-X to MACHINE-Y rsync for a straight backup with the --delete and --delete-excluded switches. We also run an internal rsync between the MACHINE-Y destination and another folder on MACHINE-Y without either of the delete flags. This maintains a non-destructive copy in the event someone inadvertently deletes a file on MACHINE-X. However, it also has the overhead of being a complete copy of what has already been synchronized. Ideally I want to be able to run the non-destructive rsync in such a way that the destination ONLY receives the deleted files and so avoids unnecessary duplication. Is there any way to do this?

    Read the article

  • How to show all the tables from multiple databases

    - by saorabh
    How do I select all the tables from multiple databases in MySQL? I am doing the following steps but am not able to achieve the goal. <?php $a = "SHOW DATABASES"; $da = $wpdb->get_results($a); foreach($da as $k){ echo '<pre>'; print_r ($k->Database);//prints all the available databases echo '</pre>'; $nq = "USE $k->Database";//trying to select the individual database $newda = $wpdb->get_results($nq); $alld = "SELECT * FROM $k->Database"; $td = $wpdb->get_results($alld); var_dump($td);//returns empty array } ?> Please help me.

    Read the article

  • vs2010 Cache SQL data incorrect fields

    - by mickartz
    OK, I found a walkthrough on MSDN for what I was after (an offline database cache). However, when I let the wizard create a local database from my online SQL Server, the TimeSpan fields are converted to strings. Now, I know the suggestion was to create my own local database and then use the MS Sync Framework... however... this proclaims to do it "out of the box". Yet now I have a dataset which I have no idea how to use, and a newly formed database (for the synced cache) that I will have to use LINQ to Entities with(?), and meanwhile I have this weird TimeSpan-to-string conversion. Should I give up now or push on? Can I overwrite the .designer.cs, typeof(string) to typeof(TimeSpan)? Damn wizards!!
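
    If it helps frame the decision, here is a minimal C# sketch of living with the wizard's conversion instead of editing the generated .designer.cs: read the value out of the cached row as a string and parse it back into a TimeSpan in code. The column name "ShiftLength" is hypothetical, standing in for whichever TimeSpan column came through as a string.

        using System;
        using System.Data;

        static class ShiftRowHelper
        {
            // Sketch only: converts the string the local cache stores (e.g. "08:30:00")
            // back into a TimeSpan without touching the wizard-generated designer code.
            // Field<T> requires a reference to System.Data.DataSetExtensions.
            public static TimeSpan GetShiftLength(DataRow row)
            {
                string text = row.Field<string>("ShiftLength");
                return TimeSpan.Parse(text);
            }
        }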

    Read the article

  • Repair BAD Sectors or Buy a new HDD?

    - by Nehal J. Wani
    I have a Seagate internal hard disk drive. I recently opened up my laptop [Dell Inspiron N5010] [warranty has expired], cleaned it, and it worked normally after waking up from hibernation. However, when I restarted it, it got stuck on the Windows loading screen, then tried to boot from the Dell recovery partition but failed. It gave the error: Windows has encountered a problem communicating with a device connected to your computer. This error can be caused by unplugging a removable storage device such as an external USB drive while the device is in use, or by faulty hardware such as a hard drive or CD-ROM drive that is failing. Make sure any removable storage is properly connected and then restart your computer. If you continue to receive this error message, contact the hardware manufacturer. Status: 0xc00000e9 Info: An unexpected I/O error has occurred. While cleaning, I had mistakenly touched the round silvery thing at the bottom of the HDD. I don't know whether this has caused the problem or not. Since I also have Fedora installed on the same HDD, I can boot from it, but it shows weird read errors when I ask it to mount the Windows partitions. The disk utility also says that the hard disk has many bad sectors and needs to be replaced. I downloaded SeaTools from the Seagate website and used it. In the long test, I gave it permission to repair the first 100 errors, which it did successfully. Now I am confused about what I should do. Costs: a. Internal HDD 500GB: Rs3518; b.1 External HDD 500GB: Rs3472; b.2 External HDD 1TB: Rs5500; c. Internal to External Converter: Rs650. I have the following options: (i) Buy an external HDD and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise: (a) my internal HDD gets repaired [almost]; (b) my internal HDD doesn't get repaired, and then I need to buy another internal HDD and replace the damaged one, OR break the seal of the external one and put it inside my laptop as internal (breaking the case involves risks). (ii) Buy an internal HDD and an internal-to-external converter case [not very reliable], and back up my data. Try to repair the bad sectors of the HDD. Then two cases arise: (a) my internal HDD gets repaired [almost]; (b) my internal HDD doesn't get repaired, and then I just need to put in the new internal HDD I bought. Experts, please guide me as to which will be the most VFM option. Also, if an HDD is failing, should I avoid reading from it too, in case other sectors fail? What I mean is: is it wrong to read from the HDD without taking a backup first?

    Read the article

  • About Hard Disk Drive Docks

    - by Crossbrowser
    I'm thinking of buying a drive dock to put my unused large HDD to use. I will also probably use the dock to back up files and swap the drives regularly. I have a few questions though: Are they noisy? I plan to use them via USB (because I don't think I have eSATA connectors); am I gonna want to kill myself every time I back up? (I know it's supposed to be 480 Mbps, but how realistic is this?) Do you recommend a particular model? (I was thinking about this StarTech HDD dock.) Thank you

    Read the article

  • Id property not populated

    - by fingers
    I have an identity mapping like so: Id(x => x.GuidId).Column("GuidId") .GeneratedBy.GuidComb().UnsavedValue(Guid.Empty); When I retrieve an object from the database, the GuidId property of my object is Guid.Empty, not the actual Guid (the property in the class is of type System.Guid). However, all of the other properties in the object are populated just fine. The database field's data type (SQL Server 2005) is uniqueidentifier, and marked as RowGuid. The application that is connecting to the database is a VB.NET Web Site project (not a "Web Application" or "MVC Web Application" - just a regular "Web Site" project). I open the NHibernate session through a custom HttpModule. Here is the HttpModule: public class NHibernateModule : System.Web.IHttpModule { public static ISessionFactory SessionFactory; public static ISession Session; private static FluentConfiguration Configuration; static NHibernateModule() { if (Configuration == null) { string connectionString = cfg.ConfigurationManager.ConnectionStrings["myDatabase"].ConnectionString; Configuration = Fluently.Configure() .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString))) .ExposeConfiguration(c => c.Properties.Add("current_session_context_class", "web")) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadMap>().ExportTo("C:\\Mappings")); } SessionFactory = Configuration.BuildSessionFactory(); } public void Init(HttpApplication context) { context.BeginRequest += delegate { Session = SessionFactory.OpenSession(); CurrentSessionContext.Bind(Session); }; context.EndRequest += delegate { CurrentSessionContext.Unbind(SessionFactory); }; } public void Dispose() { Session.Dispose(); } } The strangest part of all is that, from my unit test project, the GuidId property is returned as I would expect. I even rigged it to go for the exact row in the exact database as the web site was hitting. The only differences I can think of between the two projects are: (1) the unit test project is in C#; (2) something with the way the session is managed between the HttpModule and my unit tests. The configuration for the unit tests is as follows: Fluently.Configure() .Database(MsSqlConfiguration.MsSql2005.ConnectionString(cs => cs.Is(connectionString))) .Mappings(x => x.FluentMappings.AddFromAssemblyOf<LeadDetailMap>()); I am fresh out of ideas. Any help would be greatly appreciated. Thanks

    Read the article

  • Emacs: Changing the location of auto-save files

    - by Dominic Rodger
    I've currently got: (setq backup-directory-alist `((".*" . ,temporary-file-directory))) (setq auto-save-file-name-transforms `((".*" ,temporary-file-directory t))) in my .emacs, but that doesn't seem to have changed where auto-save files get saved (it has changed where backup files get saved). M-x describe-variable shows that temporary-file-directory is set to /tmp/, but when I edit a file called testing.md and have unsaved changes, I get a file called .#testing.md in the same directory. How can I make that file go somewhere else (e.g. /tmp/)? I've had no luck with these suggestions, so any suggestions are welcome! If it helps, I'm on GNU Emacs 23.3.1, running Ubuntu.

    Read the article

  • IT merger - self-sufficient site with domain controller VS thin clients outpost with access to termi

    - by imagodei
    SITUATION: A larger company acquires a smaller one. IT infrastructure has to be merged. There are no immediate plans to change the current size or role of the smaller company - the offices and production remain. It has a Win 2003 SBS domain server, a Win 2000 file server, a Linux server for SVN and an internal wiki, 2 or 3 production machines, and an LTO backup solution. The servers are approx. 5 years old. Cisco network equipment (switches, wireless, ASA). The mail solution is a hosted Exchange. There are approx. 35 desktops and laptops in the company. IT infrastructure unification: There are 2 IT merging proposals. 1.) Replacing old servers, installing a Win Server 2008 domain controller, and setting up either a subdomain or a domain trust to the larger company. The file server and other servers remain local, and synchronization should be set up to a centralized location in the larger company. Similarly with the backup - it remains local and, if needed, should be replicated to a centralized location. Licensing is managed by the smaller company. 2.) All servers are moved to a centralized location in the larger company. As many desktop machines as possible are replaced by thin clients. The actual machines are virtualized and hosted by a terminal server at the same central location. Citrix solutions will be used. Only a router and a site-to-site VPN connection remain at the smaller company. A backup internet line to ensure near-100% availability is needed. Licensing is mainly managed by the larger company. Only specialized software for PCs that will not be virtualized is managed by the smaller company. I'd like to ask you to discuss both solutions a bit. In your opinion, which is better from the operational point of view? Which is more reliable and cheaper in the long run? Easier to manage from the system administrator's point of view? Easier on the budget and easier to maintain from the IT department's point of view? Does anybody have any experience with the second option, and how does it perform in a production environment? Pros and cons of both? Your input will be of great significance to me. Thank you very much!

    Read the article

  • How to create a non-persistent Elixir/SQLAlchemy object?

    - by siebert
    Hi, because of legacy data which is not available in the database but only in some external files, I want to create a SQLAlchemy object which contains data read from the external files but isn't written to the database if I execute session.flush(). My code looks like this: try: return session.query(Phone).populate_existing().filter(Phone.mac == ident).one() except: return self.createMockPhoneFromLicenseFile(ident) def createMockPhoneFromLicenseFile(self, ident): # Some code to read necessary data from file deleted.... phone = Phone() phone.mac = foo phone.data = bar phone.state = "Read from legacy file" phone.purchaseOrderPosition = self.getLegacyOrder(ident) # SQLAlchemy magic doesn't seem to work here, probably because we don't insert the created # phone object into the database. So we set the id fields manually. phone.order_id = phone.purchaseOrderPosition.order_id phone.order_position_id = phone.purchaseOrderPosition.order_position_id return phone Everything works fine except that on a session.flush() executed later in the application, SQLAlchemy tries to write the created Phone object to the database (which fortunately doesn't succeed, because phone.state is longer than the data type allows), which breaks the function that issues the flush. Is there any way to prevent SQLAlchemy from trying to write such an object? Ciao, Steffen

    Read the article

  • How to use SQL - INSERT...ON DUPLICATE KEY UPDATE?

    - by Probocop
    Hi, I have a script which captures tweets and puts them into a database. I will be running the script on a cron job and then displaying the tweets on my site from the database, to prevent hitting the limit on the Twitter API. I don't want to have duplicate tweets in my database; I understand I can use 'INSERT...ON DUPLICATE KEY UPDATE' to achieve this, but I don't quite understand how to use it. My database structure is as follows. Table - Hash id (auto_increment) tweet user user_url And currently my SQL to insert is as follows: $tweet = $clean_content[0]; $user_url = $clean_uri[0]; $user = $clean_name[0]; $query='INSERT INTO hash (tweet, user, user_url) VALUES ("'.$tweet.'", "'.$user.'", "'.$user_url.'")'; mysql_query($query); How would I correctly use 'INSERT...ON DUPLICATE KEY UPDATE' to insert only if it doesn't exist, and update if it does? Thanks
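
    For reference, a minimal sketch of the statement itself, written here as parameterized C# with the MySQL Connector/Net client rather than the PHP above. It assumes the hash.tweet column carries a UNIQUE index, since ON DUPLICATE KEY UPDATE only fires when an insert would violate a UNIQUE or PRIMARY KEY; the table and column names follow the question.

        using MySql.Data.MySqlClient;

        static class TweetStore
        {
            // Sketch only: inserts a new row, or updates the existing one when the tweet already exists.
            public static void SaveTweet(string connectionString, string tweet, string user, string userUrl)
            {
                const string sql =
                    "INSERT INTO hash (tweet, user, user_url) VALUES (@tweet, @user, @user_url) " +
                    "ON DUPLICATE KEY UPDATE user = VALUES(user), user_url = VALUES(user_url)";

                using (var conn = new MySqlConnection(connectionString))
                using (var cmd = new MySqlCommand(sql, conn))
                {
                    cmd.Parameters.AddWithValue("@tweet", tweet);
                    cmd.Parameters.AddWithValue("@user", user);
                    cmd.Parameters.AddWithValue("@user_url", userUrl);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }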

    Read the article

  • Problems restoring old backups in NetBackup 6.5

    - by gharper
    I had a server that was decommissioned & replaced last year, and since the server was no longer in use, I deleted its client & backup policy from the NetBackup Admin Console shortly afterwards. I recently got a request to restore a file from the old server; however, when I specify the source client for the restore, I get an error message saying: WARNING: server (backupserver) does not contain any backups for client (oldserver) using the specified policy type (Standard) as requested by client (backupserver). [Ok] In addition to that error, I can't seem to run a Client Backup report on the old client any more to determine what tapes I need to recall in order to re-index and restore the files... My questions: Does deleting the client somehow remove NetBackup's ability to ever restore files from the old system, even if the backups have a retention period of infinity? Is there a way to restore the file from the tape, assuming I can figure out which tape I need?

    Read the article

  • Mapping relationships from multiple databases in NHibernate

    - by mannish
    I have a multi-database application configured with NHibernate. The entities that correspond to tables from each database are in their own separate assemblies (an assembly per database if you will). I have a need/desire to relate an entity from one database to an entity of another database. Everything up to this point works as I want it to (the application handles multiple session factories, etc.). The relationship I want is many-to-one, but in reality my application only cares about one side of the relationship (for reasons that aren't relevant). The relevant entities are Project and PMProject, where a Project HAS A PMProject. When I map the many-to-one, I get the following error: NHibernate.MappingException: An association from the table PROJECTS refers to an unmapped class: SDMS.PPRM.PMProject The Project mapping itself reads (ignore the funky column naming; it's an Oracle db): <many-to-one name="PMProject" class="SDMS.PPRM.PMProject" column="PM_PROJECT_ID" cascade="none" /> In the class attribute, I'm referencing the appropriate assembly, but I get that error which seems to tell me it simply can't find the mapping file for PMProject. But that file exists (it's set as embedded resource), the session factory instantiation works without fail; so I'm at a loss on how to tell the Project mapping how/where to look for the appropriate mapping. Is there something I'm missing? A better way to go about this? Thanks in advance.

    Read the article

  • How to have Xcode find the newest version of a file

    - by Arian
    Currently I have a SQLite database that is statically set to use database01.sqlite... but what I need is a way to have the file path find the newest version of the database file that exists. For example, if a database file named database04.sqlite is available, it should use that one instead. Below is my current code: NSString *databaseDirectory = [NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES) objectAtIndex:0]; NSString *filePath = [databaseDirectory stringByAppendingPathComponent:@"database01.sqlite"];

    Read the article

  • How can I restrict the backuppc client user as much as possible? (rsync)

    - by jxn
    I have BackupPC making full backups of servers, but I'd like to be sure that my setup is as paranoid as possible. BackupPC is set up to back up via rsync, and it is set up to use a specific user on each client to be backed up. Because the BackupPC client user has to have access to every file on the client machine and the ability to SSH into the machine without an interactive password, I'm a little nervous about securing the clients, and I'd like to know I haven't overlooked any options. Here's what I have in place: in the client user's authorized_keys file, I've included from="IPTOSERVER",command="/usr/bin/rsync" before the user's public key, so that the user can only log in coming from the BackupPC server. Next, in the sudoers file, I've added this line: backuppc ALL=NOPASSWD: /usr/bin/rsync to allow root-level permissions only for the rsync command for that user. Are there other user, policy, or SSH restrictions that I can add while still allowing the BackupPC client user to rsync all files?

    Read the article

  • Applescript create event in calendar, how do I remove the default alert?

    - by zero0cool
    Running 10.8 Mountain Lion, I'm trying to create a new event with Applescript like this: set theDate to (current date) tell application "Calendar" tell calendar "Calendar" set timeString to time string of theDate set newEvent to make new event at end with properties {description:"Last Backup", summary:"Last Backup " & timeString, location:"To a local unix system", start date:theDate, end date:theDate + 15 * minutes, allday event:false, status:confirmed} tell newEvent delete every display alarm delete every sound alarm delete every mail alarm delete every open file alarm end tell end tell end tell However, this does not remove the default Calendar alert which one can set through Calendar preferences (30 minutes prior in my case). How do I create an event with no alarms at all through Applescript?

    Read the article

  • How can I get the DocId when adding a document to a Lucene index?

    - by Rohit
    I am indexing a row of data from a database in Lucene.Net. A row is the equivalent of a Document. I want to update my database with the DocId, so that I can use the DocId in the results to retrieve rows quickly. I currently retrieve the PK from the result docs first, which I think should be slower than retrieving directly from the database using the DocId. How can I find the DocId when adding a document to Lucene?

    Read the article

  • Excel import to SQL returning NULL for decimals when in VARCHAR data type

    - by Daniel
    Hi, I am working on a piece of software which has grown exponentially over the last few years, and the database needs to be regularly updated. Customers now provide us with data on large spreadsheets which we format and will start importing into the database. I am using the Import and Export Data (32-bit) Wizard. One column in the database contains values like '1.1.1.2' etc., and I am importing them as a VARCHAR as that is the data type in the database. However, for values like '8.5', NULL is getting imported instead. It only occurs when there is one decimal point. Is this a formatting error with Excel or is it the wrong data type?

    Read the article

  • Ruby on Rails: db:migrate does not run

    - by user332219
    Hi, when I run this command I get an error: rake db:migrate --trace rake aborted! NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8' See the log file here: http://patxi.mayol.free.fr/public/trace.txt My database.yml file is here: # MySQL. Versions 4.1 and 5.0 are recommended. development: adapter: mysql encoding: utf8 reconnect: false database: annuaire_development pool: 5 username: root password: host: 127.0.0.1 test: adapter: mysql encoding: utf8 reconnect: false database: annuaire_test pool: 5 username: root password: host: 127.0.0.1 production: adapter: mysql encoding: utf8 reconnect: false database: annuaire_production pool: 5 username: root password: host: 127.0.0.1 Thanks. My config: WampServer 2.0 with MySQL 5.0.51a, Aptana 2.0.4, Ruby 1.8.5. Gems: actionmailer (2.3.4, 2.3.2) actionpack (2.3.4, 2.3.2) activerecord (2.3.4, 2.3.2) activeresource (2.3.4, 2.3.2) activesupport (2.3.4, 2.3.2) cgi_multipart_eof_fix (2.5.0) fastthread (1.0.1) gem_plugin (0.2.3) linecache (0.43) mongrel (1.1.5) mysql (2.8.1, 2.7.3) rack (1.0.0) rails (2.3.4, 2.3.2) rake (0.8.7) ruby-debug-base (0.10.3) ruby-debug-ide (0.4.5) sqlite3-ruby (1.2.1)

    Read the article

  • Error when creating an image from a UIView

    - by Raphael Pinto
    Hi, I'm creating an image from a view whith the folowing code : UIGraphicsBeginImageContext(myView.bounds.size); [myView.layer renderInContext:UIGraphicsGetCurrentContext()]; UIImage *viewImage = UIGraphicsGetImageFromCurrentImageContext(); UIGraphicsEndImageContext(); UIImageWriteToSavedPhotosAlbum(viewImage, nil, nil, nil); The image is created in the user album but in the consol I get this : 2010-05-11 10:17:17.974 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.014 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.077 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.087 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.091 myApp[3875:1807] sqlite error 8 [attempt to write a readonly database] 2010-05-11 10:17:18.095 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b22b 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.111 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database] 2010-05-11 10:17:18.115 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3d87f 0x34e3b029 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.120 myApp[3875:1807] sqlite error 1 [SQL logic error or missing database] 2010-05-11 10:17:18.124 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b053 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 0x3034a12c) 2010-05-11 10:17:18.129 myApp[3875:1807] sqlite error 1 [cannot commit - no transaction is active] 2010-05-11 10:17:18.133 myApp[3875:1807] Backtrace for sqlite error: (0x34e3a909 0x34e3b085 0x34e3b4dd 0x34e2f48b 0x34e2dccb 0x34e2d2f7 0x34e2d3bb 0x34e4dcd3 0x34e50515 0x34e51351 0x34e51681 0x34e513f5 0x34e511e3 0x303af57c 0x34e51163 0x303b06e8 0x303ac8e0 0x303ac6d8 0x303ac8c8 0x303aca80 0x30350d55 I don't know where the problem come from. Can you help me?

    Read the article

  • In Entity Framework we can use Model First, DB First, or Code First, but how can we create tables programmatically?

    - by AukI
    In Entity Framework we can use 3 approaches: Model First, Code First, and Database First, but each one of them needs a manual touch (creating the database, creating the model, or writing the POCO or entity classes) before proceeding to the next step (using EF in context). What if I want to create the database, tables, and table relationships programmatically and still want to have the features of Entity Framework 4.3? To be more specific: from this example http://support.microsoft.com/kb/307283 we can create a database, tables, and everything using SQL commands, but we can't have the advantages of Entity Framework. So if we want to have that, what should we do?
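
    For context, a minimal Code First sketch of the closest built-in facility in the DbContext API (EF 4.1+): define the model in code and let EF create the database, tables, and the relationship without hand-written SQL. The Shop/Product model here is hypothetical, purely for illustration.

        using System.Collections.Generic;
        using System.Data.Entity;

        // Hypothetical model: a Shop with many Products.
        public class Shop
        {
            public int ShopId { get; set; }
            public string Name { get; set; }
            public virtual ICollection<Product> Products { get; set; }
        }

        public class Product
        {
            public int ProductId { get; set; }
            public string Name { get; set; }
            public int ShopId { get; set; }
            public virtual Shop Shop { get; set; }
        }

        public class ShopContext : DbContext
        {
            public DbSet<Shop> Shops { get; set; }
            public DbSet<Product> Products { get; set; }
        }

        class Program
        {
            static void Main()
            {
                using (var context = new ShopContext())
                {
                    // Builds the database, both tables, and the foreign key from the model above.
                    context.Database.CreateIfNotExists();
                }
            }
        }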

    Read the article

  • How do I cache query results using LINQ?

    - by Vince
    Hi, is there any way to cache LINQ to SQL queries by looking at the parameters that were previously passed and bypass the database altogether? I know L2S caches some database calls, but I'm looking for a permanent solution - as in, even if the application restarts, the cache reloads and never asks the database again. Are there any frameworks for C#?
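
    As a point of reference, a minimal C# sketch of the in-process half of this: cache the materialized results keyed on the parameter values. Persisting the cache across application restarts would additionally require serializing the entries to disk or an external store, which LINQ to SQL does not do for you; the helper and usage below are illustrative only.

        using System;
        using System.Collections.Generic;

        static class QueryCache
        {
            // Not thread-safe; a production cache would lock or use ConcurrentDictionary,
            // and would also handle expiry and persistence.
            static readonly Dictionary<string, object> Cache = new Dictionary<string, object>();

            public static T GetOrAdd<T>(string key, Func<T> runQuery)
            {
                object cached;
                if (Cache.TryGetValue(key, out cached))
                    return (T)cached;              // hit: bypass the database entirely

                T result = runQuery();             // miss: run the query and materialize it
                Cache[key] = result;
                return result;
            }
        }

        // Hypothetical usage: materialize with ToList() so the cache holds data, not an open query.
        // var customers = QueryCache.GetOrAdd("CustomersByCity:" + city,
        //     () => db.Customers.Where(c => c.City == city).ToList());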

    Read the article
