Search Results

Search found 31356 results on 1255 pages for 'database backups'.


  • Can you update/add records in SQL using a datagridview and LINQ to SQL

    - by Jordan S
    Is it possible to bind a DataGridView to a LINQ to SQL class so that when I make changes to the records in the DataGridView it automatically updates the SQL database? I have tried binding the data like this, but the changes I make in the DataGridView do not actually affect the data in the database:

        BOMClassesDataContext DB = new BOMClassesDataContext();
        var mfrs = from m in DB.Manufacturers select m;
        BindingSource bs = new BindingSource();
        bs.DataSource = mfrs;
        dataGridView1.DataSource = bs;
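
    A LINQ to SQL query bound through a BindingSource does not save on its own; the DataContext only writes edits back when SubmitChanges() is called. Below is a minimal C# sketch of one common approach (it reuses BOMClassesDataContext and Manufacturer from the question, while the form, grid, and save-button names are illustrative assumptions):

        using System;
        using System.ComponentModel;
        using System.Linq;
        using System.Windows.Forms;

        public partial class ManufacturerForm : Form
        {
            // Keep one DataContext alive for the lifetime of the form so it can track edits.
            private BOMClassesDataContext db = new BOMClassesDataContext();
            private BindingSource bs = new BindingSource();

            private void ManufacturerForm_Load(object sender, EventArgs e)
            {
                // Materialize the query so the grid edits concrete, change-tracked entities.
                bs.DataSource = new BindingList<Manufacturer>(db.Manufacturers.ToList());
                dataGridView1.DataSource = bs;
            }

            private void saveButton_Click(object sender, EventArgs e)
            {
                dataGridView1.EndEdit();  // commit the cell currently being edited
                db.SubmitChanges();       // LINQ to SQL writes tracked changes back to SQL Server
            }
        }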

    Read the article

  • LISP: Keyword parameters, supplied-p

    - by echox
    At the moment I'm working through "Practical Common Lisp" by Peter Seibel. In the chapter "Practical: A Simple Database" (http://www.gigamonkeys.com/book/practical-a-simple-database.html) Seibel explains keyword parameters and the use of a supplied-p parameter with the following example:

        (defun foo (&key a (b 20) (c 30 c-p)) (list a b c c-p))

    Results:

        (foo :a 1 :b 2 :c 3) ==> (1 2 3 T)
        (foo :c 3 :b 2 :a 1) ==> (1 2 3 T)
        (foo :a 1 :c 3)      ==> (1 20 3 T)
        (foo)                ==> (NIL 20 30 NIL)

    So if I use &key at the beginning of my parameter list, I can give each parameter as a list of three elements: the name, the default value, and a third variable indicating whether the parameter has been supplied or not. OK. But looking at the code in the example, (list a b c c-p), how does the Lisp interpreter know that c-p is my "supplied" parameter?
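
    The short answer implied by the book is that c-p is not special at all: the third element of a (name default supplied-var) keyword spec is just a variable name the programmer picks, and Lisp binds it to T or NIL at call time depending on whether the caller passed that keyword. A tiny sketch with a differently named supplied-p variable (the function and names are illustrative, not from the book):

        ;; The "-p" suffix is only a naming convention; any symbol works here.
        (defun greet (&key (name "world" name-was-given))
          (if name-was-given
              (format nil "Hello, ~a (explicitly named)" name)
              (format nil "Hello, ~a (default)" name)))

        (greet :name "Alice") ==> "Hello, Alice (explicitly named)"
        (greet)               ==> "Hello, world (default)"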

    Read the article

  • Strategy Design Pattern -- *dynamic* !!!

    - by alexeypro
    My application will have different strategies for my objects. What's the best way of implementing that? I would really love it if the strategy class implementations could be loaded dynamically from, say, a relational database, but I'm not sure how best to do that. What's the best approach? The idea is that if we want to apply strategy Strategy123 to object MyObj, we just load the serialized object with ID 123 from the database, deserialize it, get the Strategy class, and use it with MyObj. While that sounds easier to maintain at first glance, it can be a pain in the long run if the Strategy interface changes, etc. What else could I do? I also want to know whether I should keep the Strategy classes in the codebase at all -- the point is that I don't want a code change and redeployment of the application whenever a Strategy changes or I add a new one. Please advise!
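
    One common middle ground, sketched below in Java (the interface, registry, and accessor names are illustrative, not from the question), is to keep the Strategy implementations in the codebase and store only a strategy identifier in the database. Changing which strategy an object uses then needs no redeployment, although adding a brand-new strategy still does:

        import java.util.HashMap;
        import java.util.Map;

        // Illustrative contract; in the question this would operate on MyObj.
        interface Strategy {
            void apply(Object target);
        }

        class Strategy123 implements Strategy {
            public void apply(Object target) {
                // Concrete behavior lives in the codebase, under version control.
            }
        }

        // Maps the identifier stored in the database row to a codebase implementation.
        class StrategyRegistry {
            private final Map<Integer, Strategy> strategies = new HashMap<Integer, Strategy>();

            void register(int id, Strategy strategy) {
                strategies.put(id, strategy);
            }

            Strategy forId(int id) {
                Strategy s = strategies.get(id);
                if (s == null) {
                    throw new IllegalArgumentException("No strategy registered for id " + id);
                }
                return s;
            }
        }

        // Usage sketch: the database row for MyObj stores only the strategy id, e.g. 123.
        // registry.forId(rowFromDatabase.getStrategyId()).apply(myObj);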

    Read the article

  • Rspec-rails doesn't seem to find my models

    - by sa125
    Hi - I'm trying out RSpec, and immediately hit a wall when it doesn't seem to load DB records I know exist. Here's my fairly simple spec (no tests yet):

        require File.expand_path(File.dirname(__FILE__) + '../spec_helper')

        describe SomeModel do
          before :each do
            @user1 = User.find(1)
            @user2 = User.find(2)
          end

          it "should do something fancy"
        end

    I get an ActiveRecord::RecordNotFound exception saying it couldn't find a User with ID=1 or ID=2, which I know for a fact exist. I set both the test and development databases to point to the same schema in database.yml, so this shouldn't be a database mixup. I also ran script/generate rspec after installing the gems (rspec, rspec-rails) and added config.gem entries to both environment.rb and test.rb. Any idea what I'm missing? Thanks.

    EDIT: It seems I was running the tests with rake spec:models, which emptied the DB, so no records were found. When I ran spec spec/models/some_model_spec.rb directly, everything worked as expected.
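
    The edit at the end points at the underlying issue: rake-driven specs run against a freshly prepared test database, so rows that only exist in development will never be found there. A more robust pattern (a sketch; the attribute names are assumptions) is to create whatever records the spec needs inside the spec itself, or to use fixtures/factories:

        require File.expand_path(File.dirname(__FILE__) + '/../spec_helper')

        describe SomeModel do
          before :each do
            # Build the data this spec depends on instead of assuming fixed IDs exist.
            @user1 = User.create!(:name => "alice")
            @user2 = User.create!(:name => "bob")
          end

          it "should do something fancy" do
            User.count.should == 2
          end
        end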

    Read the article

  • Defeating the RAID5 write hole with ZFS (but not RAID-Z) [closed]

    - by Michael Shick
    I'm setting up a long-term storage system for keeping personal backups and archives. I plan to have RAID5 starting with a relatively small array and adding devices over time to expand storage. I may also want to convert to RAID6 down the road when the array gets large. Linux md is a perfect fit for this use case since it allows both of the changes I want on a live array, and performance isn't at all important. Low cost is also great. Now, I also want to defend against file corruption, so it looked like a RAID-Z1 would be a good fit, but evidently I would only be able to add whole RAID5 (RAID-Z1) sets at a time rather than individual drives. I want to be able to add drives one at a time, and I don't want to have to give up another device for parity with every expansion. So at this point, it looks like I'll be using a plain ZFS filesystem on top of an md RAID5 array. That brings me to my primary question: Will ZFS be able to correct or at least detect corruption resulting from the RAID5 write hole? Additionally, any other caveats or advice for such a setup are welcome. I'll probably be using Debian, but I'll definitely be using Linux since I'm familiar with it, so that means only as new a version of ZFS as is available for Linux (via ZFS-FUSE or so).
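
    For what it's worth, a single-device ZFS pool on top of md can always detect corruption through its block checksums, but it can only repair data it holds a redundant copy of. One hedged way to get self-healing without giving up another drive to ZFS-level parity is the copies property; the commands below are a sketch against an assumed /dev/md0 and pool name:

        # Single-device pool on top of the md RAID5 array (device name assumed).
        zpool create tank /dev/md0

        # Keep two copies of every block so ZFS can repair, not just detect, bad reads.
        # This roughly halves usable capacity for the datasets it applies to.
        zfs set copies=2 tank

        # Periodically verify all checksums and rewrite any damaged copies.
        zpool scrub tank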

    Read the article

  • Eclipse plugins for Spring / Hibernate development?

    - by es11
    I have a running dynamic web project in Eclipse (Java EE + Maven + Spring). I am at the point where I need to integrate a persistence layer and want to use Hibernate with a MySQL database. I am wondering what plugins would be useful for me at this point. For Hibernate, should I install Hibernate Tools, or is it not necessary? Are there any plugins that are most widely used for connecting to / exploring databases that would be appropriate for the type of project I am working on? Thanks.

    Read the article

  • Windows 7 home backup solution, with offsite provision

    - by Richard E
    I am looking for a home backup solution for my single Windows 7 (Home Premium) PC. I have about 500GB of data to backup. I would like to spend less than GBP 300 on the solution. I don't see the need to backup the whole PC, rather specific folder branches (iTunes, photos, documents, Outlook files, user folders such as desktop, favorites etc). I would like a solution that enables me to maintain backups in two separate physical locations (e.g. home and work). To facilitate this I am imagining a storage unit with slots for two removable drives, along with three separate drives. At any one time two of the drives will be being backed up to in the storage unit. The third will be located at my work. Periodically I will take one of the drives into work and leave it there, then bring the drive that was there back home, and plug it into the storage unit. It will then be backed up along with the other drive that was left in the storage unit. This approach should cover scenarios such as virus attack and fire or theft from one location. Thoughts and comments on the sanity of this approach please...

    Read the article

  • Connect rails application to MsSQL 2005 from Windows

    - by Enrico Carlesso
    Hi guys. I (sadly) have to deploy a Rails application on Windows XP which has to connect to Microsoft SQL Server 2005. Searching the web there are a lot of hits for connecting from Linux to MS SQL, but I cannot find out how to do it from Windows. Basically I followed these steps: install the dbi gem, then install the activerecord-sql-server-adapter gem. My database.yml now looks like this:

        development:
          adapter: sqlserver
          mode: odbc
          dsn: test_dj
          host: HOSTNAME\SQLEXPRESS
          database: test_dj
          username: guest
          password: guest

    But I'm unable to connect. When I run rake db:migrate I get:

        IM002 (0) [Microsoft][ODBC Driver Manager] Data source name not found and no default driver specified

    I'm not a Windows user, so I don't really understand the meaning of the dsn element. Does someone have an idea how to solve this? Thanks in advance.

    Read the article

  • Easiest way to retrofit retry logic on LINQ to SQL migration to SQL Azure

    - by Pat James
    I have a couple of existing ASP .NET web forms and MVC applications that currently use LINQ to SQL with a SQL Server 2008 Express database on a Windows VPS: one VPS for both IIS and SQL. I am starting to outgrow the VPS's ability to effectively host both SQL and IIS and am getting ready to split them up. I am considering migrating the database to SQL Azure and keeping IIS on the VPS. After doing initial research it sounds like implementing retry logic in the data access layer is a must-do when adopting SQL Azure. I suspect this is even more critical to implement in my situation where IIS will be on a VPS outside of the Azure infrastructure. I am looking for pointers on how to do this with the least effort and impact on my existing code base. Is there a good retry pattern that can be applied once at the LINQ to SQL data access layer, as opposed to having to wrap all of my LINQ to SQL operations in try/catch/wait/retry logic?
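
    Short of adopting a full retry framework, one low-impact option is a small generic helper that wraps each unit of LINQ to SQL work, so existing call sites change only minimally. The C# sketch below is illustrative rather than prescriptive; the retry count, delay, and the decision to treat every SqlException as transient are assumptions you would tune for SQL Azure:

        using System;
        using System.Data.SqlClient;
        using System.Threading;

        public static class SqlRetry
        {
            // Runs a unit of work, retrying on SqlException with a simple linear back-off.
            public static T Execute<T>(Func<T> work, int maxAttempts, int delayMs)
            {
                for (int attempt = 1; ; attempt++)
                {
                    try
                    {
                        return work();
                    }
                    catch (SqlException)
                    {
                        if (attempt >= maxAttempts) throw;
                        // A real filter would inspect the error number for transient codes.
                        Thread.Sleep(delayMs * attempt);
                    }
                }
            }
        }

        // Usage sketch: wrap an existing LINQ to SQL operation (MyDataContext is hypothetical).
        // var customers = SqlRetry.Execute(() =>
        // {
        //     using (var db = new MyDataContext())
        //     {
        //         return db.Customers.Where(c => c.IsActive).ToList();
        //     }
        // }, 3, 500);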

    Read the article

  • Invoking active_record error - can not load file in Ruby on Rails

    - by user1623624
    When I try to run rails generate scaffold test, the following error always shows:

        C:\Lab\railapps\dbtest>rails generate scaffold test
        invoke active_record
        C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in `require': Please install the oracle_enhanced_adapter: `gem install activerecord-oracle_enhanced-adapter` (cannot load such file -- active_record/connection_adapters/oracle_enhanced_adapter) (LoadError)
        from C:/RailsInstaller/Ruby1.9.3/lib/ruby/gems/1.9.1/gems/activesupport-3.2.1/lib/active_support/dependencies.rb:251:in `block in require'

    I did install the ruby-oci8 gem and then activerecord-oracle_enhanced-adapter. Can you help me by having a look? Thanks a lot. Version information:

        C:\Lab\railapps\dbtest>gem list ruby-oci8
        *** LOCAL GEMS ***
        ruby-oci8 (2.1.2 ruby x86-mingw32, 2.0.6)

        C:\Lab\railapps\dbtest>gem list activerecord-oracle_enhanced-adapter
        *** LOCAL GEMS ***
        activerecord-oracle_enhanced-adapter (1.4.1)

    database.yml under config:

        development:
          adapter: oracle_enhanced
          database: cvrman.cablevision.com
          username: ruby
          password: ruby
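
    On Rails 3.2, having the gems installed system-wide is not enough; they also have to be declared in the application's Gemfile so Bundler puts the adapter on the load path. A hedged sketch of the relevant Gemfile lines (the version constraints are examples based on the gem list above), followed by running bundle install:

        # Gemfile
        gem 'ruby-oci8', '~> 2.1'
        gem 'activerecord-oracle_enhanced-adapter', '~> 1.4.1'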

    Read the article

  • AJAX call in a continuous loop?

    - by Mestika
    Hi, I want to create some kind of AJAX script or call that will continuously check a MySQL database to see whether any new messages have arrived. When there is a new message in the database, the AJAX script should show some kind of alert box or message box. I'm not quite an AJAX expert (yet, anyway) and have Googled around to find a solution, but I'm having a hard time figuring out where to begin. I imagine it is the same kind of method an AJAX chat uses to see if any new chat message has been sent. I've also tried searching for making an AJAX (XMLHttpRequest) call in a continuous, infinite loop, but still haven't got a solution. I hope there is someone who can help me with such an AJAX script or maybe nudge me in the right direction. Thanks Sincerely Mestika
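
    The usual pattern is not an infinite loop but a timer that re-issues the request every few seconds (polling). A minimal JavaScript sketch, assuming a hypothetical server-side script check_messages.php that queries MySQL and returns JSON such as {"newMessages": 3}:

        // Poll the server every 5 seconds and alert when new messages exist.
        function checkForMessages() {
          var xhr = new XMLHttpRequest();
          xhr.open("GET", "check_messages.php", true);
          xhr.onreadystatechange = function () {
            if (xhr.readyState === 4 && xhr.status === 200) {
              var data = JSON.parse(xhr.responseText);
              if (data.newMessages > 0) {
                alert("You have " + data.newMessages + " new message(s)");
              }
            }
          };
          xhr.send();
        }

        setInterval(checkForMessages, 5000);  // re-run the check on an interval, not in a blocking loop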

    Read the article

  • Seeking tutorial: introduction to ODBC with Delphi

    - by mawg
    I have a lot of embedded C/C++/Ada experience and an outdated smattering of Delphi plus some database stuff. Now I have to implement an app in Delphi which can manipulate MySQL, Oracle, and maybe MS Access. In short, I need ODBC. I need to programmatically create a database, define its structure and populate its contents, then later query its existence and search it programmatically. I would prefer not to use 3rd party components unless there is a compelling reason to do so (performance ought not to be an issue for the app; it won't have much data or be run often, at least not in v1.0). Can anyone point me at a tutorial which can get me up to speed? Thanks

    Read the article

  • Best way to optimize queries like this in Django

    - by chris
    I am trying to lower the number of queries that my Django app is using, but I am a little confused about how to do it. I would like to get a queryset with one hit to the database and then filter items from that set. I have tried a couple of things, but I always get queries for each set. Let's say I want to get all names from my DB, but also separate out the people just named Ted. Both the full name list and the Ted list will be used in the template. This gives me two sets, one with all names and one with just Ted, but it also hits the database twice:

        namelist = People.objects.all()
        tedList = namelist.filter(name='ted')

    Is there a way to filter the first set without hitting the database again?
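
    Django querysets are lazy, so namelist.filter(...) issues a second query instead of reusing the first result. One way to stay at a single query (a sketch; People and the name field come from the question) is to evaluate the queryset once and split it in Python:

        # One database hit, then in-memory filtering for the template context.
        namelist = list(People.objects.all())               # evaluating the queryset hits the DB once
        tedList = [p for p in namelist if p.name == 'ted']  # no extra query

        context = {
            'namelist': namelist,
            'tedList': tedList,
        }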

    Read the article

  • What is the fastest way to resize a large partition?

    - by Jook
    Due to a new HDD configuration I am currently handling larger backup/resize tasks with partitions of around 900GB, which are 70-90% full. Some background: the first thing I noticed was that the Acronis-WesternDigital TrueImage was extremely slow while running under Windows 7, even on high priority. To create a normal backup of 650GB of data (a 900GB partition), it would have taken 3 days! The same task done with the boot-CD version of this Acronis version took about 2 hours (SATA3 copy from one disk to another, both around 110MB/s). Now, after I have done all my backups, I wanted to remove some obsolete partitions and resize the leftovers to the full HDD size. Of course, this usually takes quite some time - in this case, extending this 900GB partition to 931GB (30GB+ from the front, 1GB+ from the end) will take around 6 hours (using GParted)! Had I known that earlier, I would have just restored the image. But no - first it showed a reasonable time of 1:45h and 0 of 1 operations, but after finishing the 1:45h it started again, only this time with 4h to go, still 0 of 1 operations, but now it was copying instead of moving. Question: why does resizing a partition have to be this slow? I am asking for a good explanation. This has bugged me since I started partitioning - why does it need to copy all the data around; can't it just stay in place?!

    Read the article

  • CodeIgniter's XSS Protection is removing <script> tags from user inputs... but I don't want it to!

    - by Jack W-H
    Hey folks, CodeIgniter is brilliant, but I'm using it to develop a site where users need to be able to share their code for websites. Unfortunately, CodeIgniter has been doing the "right" thing by removing <script> tags from my users' input before it reaches the database, so when it's returned the data looks like this: [removed] User's data [removed]. However, I need my site to DISPLAY script tags but obviously not PARSE them. How can I get CodeIgniter or PHP to return <script> tags, but still sanitise them for the database and return them without them executing? Thanks! Jack EDIT: By the way, it's not an option to use stuff like Markdown; everything has to output to copy-pastable code that could work with no modification somewhere else.
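
    The usual approach is to store the code as the user submitted it and neutralize it only at display time by HTML-escaping, so the browser renders the tags as text instead of executing them. A hedged PHP sketch (the variable is illustrative; whether you also relax CodeIgniter's global XSS filtering for that one field is a separate decision):

        <?php
        // $snippet would hold the user's code as retrieved from the database.
        $snippet = '<script>alert("hello");</script>';

        // htmlspecialchars turns < > & " into entities, so the tags are displayed
        // literally and copy-paste cleanly, but are never parsed or executed.
        echo '<pre><code>' . htmlspecialchars($snippet, ENT_QUOTES, 'UTF-8') . '</code></pre>';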

    Read the article

  • SQL Server 2008 Management Studio doesn't recognize new Schema

    - by Lieven Cardoen
    I have created a new schema, called Contexts, in a database. Now when I want to write a query, Management Studio doesn't recognize the tables that belong to the new schema. It says 'Invalid object name Contexts.ContextLibraries' for this Transact-SQL:

        INSERT INTO [Contexts].[ContextLibraries] (ChannelId, [IsSystem])
        VALUES (@ChannelId, 1)

    When I try the same thing on my local database, it does work... Any ideas? I did try to change the default schema for the user from dbo to Contexts, but this doesn't work. I also checked Contexts in 'Schemas owned by this user', without success. Update: Apparently the SQL query does work, but the editor gives an error saying the object is invalid.

    Read the article

  • SQL query problem

    - by LiveEn
    I have the SQL query below that should update values from a form in the database:

        $sql="update leads set category='$Category',type='$stype',contactName='$ContactName',email='$Email',phone='$Phone',altphone='$PhoneAlt',mobile='$Mobile',fax='$Fax',address='$Address',city='$City',country='$Country',DateEdited='$today',printed='$Printed',remarks='$Remarks' where id='$id'";
        $result=mysql_query($sql) or die(mysql_error());
        echo '<h1>Successfully Updated!!.</h1>';

    When I submit, I don't get any errors and the success message is displayed, but the database isn't updated. When I echo $sql, all the values are set properly, and when I echo $result I get the value 1. Can someone please tell me what I am doing wrong here?
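
    A return value of 1 (TRUE) from mysql_query() only means the UPDATE ran without a SQL error; it says nothing about whether any row matched the WHERE clause. A hedged diagnostic sketch using the same variables as the question:

        <?php
        $result = mysql_query($sql) or die(mysql_error());

        if (mysql_affected_rows() > 0) {
            echo '<h1>Successfully updated!</h1>';
        } else {
            // Most often the id posted by the form is empty or wrong, so zero rows match.
            echo '<h1>No rows were updated - check the value of $id (' . htmlspecialchars($id) . ').</h1>';
        }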

    Read the article

  • Help with Linked Server Error

    - by Randy Minder
    In SSMS 2008, I am trying to execute a stored procedure in a database on another server. The call looks something like the following:

        EXEC [RemoteServer].Database.Schema.StoredProcedureName @param1, @param2

    The linked server is set up correctly, and has both RPC and RPC OUT set to true. Security on the linked server is set to 'Be made using the login's current security context'. When I attempt to execute the stored procedure, I get the following error:

        Msg 18483, Level 14, State 1, Line 1
        Could not connect to server 'RemoteServer' because '' is not defined as a remote login at the server. Verify that you have specified the correct login name.

    I am connected to the local server using Windows Authentication. Anyone know why I would be getting this error?
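
    One common fix, sketched below, is to add an explicit login mapping on the linked server instead of relying on the current security context; the local Windows login and the remote SQL credentials shown are placeholders to replace:

        -- Map the local Windows login to a specific login on the remote server.
        EXEC sp_addlinkedsrvlogin
            @rmtsrvname  = N'RemoteServer',
            @useself     = N'False',
            @locallogin  = N'DOMAIN\YourLogin',
            @rmtuser     = N'remote_sql_user',
            @rmtpassword = N'remote_password';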

    Read the article

  • Windows Azure Platform, latest version?

    - by Vimvq1987
    I searched the internet but found nothing. The whitepapers for the Windows Azure Platform say things like this: "In its first release, the maximum size of a single database in SQL Azure Database is 10 gigabytes" and "A few things are omitted in the technology's first release, however, such as the SQL Common Language Runtime (CLR) and support for spatial data. (Microsoft says that both will be available in a future version.)" I want to know whether Microsoft has updated the Windows Azure Platform and removed these limits or not. I decided to post this question here instead of Serverfault.com because it's more related to programming than administration. Thank you

    Read the article

  • Sharepoint Foundation 2010 development single machine installation problems

    - by Robert Koritnik
    I'm having problems installing a development machine for SharePoint (Foundation) 2010. This is what I did so far on the same machine:

    1. Installed a clean Windows 7 x64 with 4GB of RAM, without being part of any domain - just a simple standalone machine.
    2. Enabled the IIS-related features as described here, except the two IIS6-related ones.
    3. Installed SQL Server 2008 R2 Development Edition (DB Engine and Writer enabled, but not SQL Agent).
    4. Installed Visual Studio 2010 Premium.
    5. Started installing SharePoint Foundation 2010 by first extracting the files, changing the config to enable a Windows 7 installation, and then installing it as Server Farm (then Complete) to avoid installing SQL Express.
    6. Created a separate SPF_CONFIG local user with the 'Log on as a service' right.
    7. Opened the SPF Management Shell and ran New-SPConfigurationDatabase so I am able to use a non-domain username (the SPF_CONFIG user created in the previous step).

    But all I get is this: The outcome after this error is that the database Sharepoint2010Config is created, and the user SPF_CONFIG is added to SQL Server and attached to this newly created database as dbowner; checking SQL Server security logins, this user has the following rights: dbcreator, securityadmin, public.

    Read the article

  • Cannot connect to *.dbf file through JDBC drivers

    - by leodali
    I'm trying to connect to a *.dbf (dBase III) file from my Java application, running on a Windows Server 2003 system. I'm encountering this error and I cannot really understand its meaning (the sources for OdbcJdbc.java seem to be unavailable):

        [Microsoft][ODBC dBase driver] '(unknown)' is not a valid path

    The code:

        Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");
        String database = "jdbc:odbc:DRIVER={Microsoft dBase Driver(*.dbf)};DBQ=D:\\dbNeri\\CARISTAT;";
        Connection conn = DriverManager.getConnection(database);
        Statement s = conn.createStatement();
        String selTable = "SELECT * FROM CARISTAT";

    Does a JDBC driver exist that is able to connect to dBase files, or do I have to import external libraries to do the magic? Thanks in advance for your help!
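
    With the Microsoft dBase ODBC driver, DBQ is expected to point at the directory that holds the .dbf files, and each file name (without the extension) is then used as a table name; the "not a valid path" message usually means DBQ or the driver name does not match what Windows has registered. A hedged Java sketch assuming CARISTAT.dbf lives in D:\dbNeri (verify the exact driver name, including the space before "(*.dbf)", in the ODBC administrator):

        import java.sql.Connection;
        import java.sql.DriverManager;
        import java.sql.ResultSet;
        import java.sql.Statement;

        public class DbfTest {
            public static void main(String[] args) throws Exception {
                Class.forName("sun.jdbc.odbc.JdbcOdbcDriver");

                // DBQ names the folder containing the .dbf files, not the file itself.
                String url = "jdbc:odbc:DRIVER={Microsoft dBase Driver (*.dbf)};DBQ=D:\\dbNeri;";

                Connection conn = DriverManager.getConnection(url);
                Statement s = conn.createStatement();

                // CARISTAT.dbf inside D:\dbNeri is addressed simply as CARISTAT.
                ResultSet rs = s.executeQuery("SELECT * FROM CARISTAT");
                while (rs.next()) {
                    System.out.println(rs.getString(1));
                }
                rs.close();
                conn.close();
            }
        }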

    Read the article

  • How can I sort by a transformable attribute in an NSFetchedResultsController?

    - by Mike Laurence
    I'm using NSValueTransformers to encrypt attributes (strings, dates, etc.) in my Core Data model, but I'm pretty sure it's interfering with the sorting in my NSFetchedResultsController. Does anyone know if there's a way to get around this? I suppose it depends on how the sort is performed; if it's always only performed directly on the database, then I'm probably out of luck. If it sorts on the objects themselves, then perhaps there's a way to activate the transformation before the sort occurs. I'm guessing it's directly on the database, though, since the sort would be key in grabbing subsets of the collection, which is the main benefit of NSFetchedResultsController anyway.

    Read the article
