Search Results

Search found 77599 results on 3104 pages for 'test data'.


  • Updating a TableView with a WebService and Saving to CoreData

    - by jcady
    I am working on a project where I have a table view that is currently updated via a web request that returns XML. I implemented -(int)numberOfRowsInTableView:(NSTableView *)tv and -(id)tableView:(NSTableView *)tv objectValueForTableColumn:(NSTableColumn *)tableColumn row:(int)row in my XML parsing class, and the table updates with the data pulled down from the server. I want to save the pulled-down data using Core Data, so that the table can be saved and loaded. Later, on application start, when the web request is made, it should only add data that is not already present. (The XML is sorted by release date, so I will check which release dates are already in the Core Data store and only load newer entries.) How would I go about implementing this? I am a very new Cocoa developer, but have gone through the entire Hillegass book. Thanks so much.
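
    A possible approach, sketched below in Objective-C: fetch the newest releaseDate already stored, then skip any parsed XML item that is not newer. The entity name Release, the attribute releaseDate, and the context variable are assumptions for illustration, not names from the question:

        // Fetch the most recent releaseDate already in the store (hypothetical names).
        NSFetchRequest *request = [[[NSFetchRequest alloc] init] autorelease];
        [request setEntity:[NSEntityDescription entityForName:@"Release"
                                       inManagedObjectContext:context]];
        [request setSortDescriptors:[NSArray arrayWithObject:
            [[[NSSortDescriptor alloc] initWithKey:@"releaseDate"
                                         ascending:NO] autorelease]]];
        [request setFetchLimit:1];
        NSError *error = nil;
        NSArray *newest = [context executeFetchRequest:request error:&error];
        NSDate *latest = [newest count] > 0
            ? [[newest objectAtIndex:0] valueForKey:@"releaseDate"]
            : [NSDate distantPast];
        // While parsing the XML, insert only items whose release date is later:
        // if ([itemDate compare:latest] == NSOrderedDescending) { /* insert */ }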

    Read the article

  • Is OO design's strength in semantics or encapsulation?

    - by Phil H
    Object-oriented design (OOD) combines data and its methods. This, as far as I can see, achieves two great things: it provides encapsulation (so I don't care what data there is, only how I get the values I want) and semantics (it relates the data together with names, and its methods consistently use the data as originally intended). So where does OOD's strength lie? In contrast, functional programming attributes the richness to the verbs rather than the nouns, so both encapsulation and semantics are provided by the methods rather than the data structures. I work with a system that is on the functional end of the spectrum, and I continually long for the semantics and encapsulation of OO. But I can see that OO's encapsulation can be a barrier to flexible extension of an object. So at the moment, I see the semantics as the greater strength. Or is encapsulation the key to all worthwhile code?

    Read the article

  • JPanel Layout Image Cutoff

    - by Trizicus
    I am adding images to a JPanel, but the images are getting cut off. I originally tried BorderLayout, but that only worked for one image; adding others caused cut-off. So I switched to other layouts, and the best and closest I could get was BoxLayout; however, that still adds a very large cut-off, which is not acceptable either. So basically: how can I add images (from a custom JComponent) to a custom JPanel without bad effects such as the one present in the code below?

    Custom JPanel:

        import java.awt.Color;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.event.ActionEvent;
        import java.awt.event.ActionListener;
        import java.awt.event.MouseEvent;
        import java.awt.event.MouseListener;
        import javax.swing.BoxLayout;
        import javax.swing.JPanel;
        import javax.swing.Timer;

        public class GraphicsPanel extends JPanel implements MouseListener {
            private Entity test;
            private Timer timer;
            private long startTime = 0;
            private int numFrames = 0;
            private float fps = 0.0f;

            GraphicsPanel() {
                test = new Entity("test.png");
                Thread t1 = new Thread(test);
                t1.start();
                Entity ent2 = new Entity("images.jpg");
                ent2.setX(150);
                ent2.setY(150);
                Thread t2 = new Thread(ent2);
                t2.start();
                Entity ent3 = new Entity("test.png");
                ent3.setX(0);
                ent3.setY(150);
                Thread t3 = new Thread(ent3);
                t3.start();
                // ESSENTIAL
                setLayout(new BoxLayout(this, BoxLayout.X_AXIS));
                add(test);
                add(ent2);
                add(ent3);
                // GAMELOOP
                timer = new Timer(30, new Gameloop(this));
                timer.start();
                addMouseListener(this);
            }

            @Override
            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2 = (Graphics2D) g.create();
                g2.setClip(0, 0, getWidth(), getHeight());
                g2.setColor(Color.BLACK);
                g2.drawString("FPS: " + fps, 1, 15);
            }

            public void getFPS() {
                ++numFrames;
                if (startTime == 0) {
                    startTime = System.currentTimeMillis();
                } else {
                    long currentTime = System.currentTimeMillis();
                    long delta = (currentTime - startTime);
                    if (delta > 1000) {
                        fps = (numFrames * 1000) / delta;
                        numFrames = 0;
                        startTime = currentTime;
                    }
                }
            }

            public void mouseClicked(MouseEvent e) {}
            public void mousePressed(MouseEvent e) {}
            public void mouseReleased(MouseEvent e) {}
            public void mouseEntered(MouseEvent e) {}
            public void mouseExited(MouseEvent e) {}

            class Gameloop implements ActionListener {
                private GraphicsPanel gp;

                Gameloop(GraphicsPanel gp) {
                    this.gp = gp;
                }

                public void actionPerformed(ActionEvent e) {
                    try {
                        gp.getFPS();
                        gp.repaint();
                    } catch (Exception ez) {
                    }
                }
            }
        }

    Main class:

        import java.awt.EventQueue;
        import javax.swing.JFrame;

        public class MainWindow {
            public static void main(String[] args) {
                new MainWindow();
            }

            private JFrame frame;
            private GraphicsPanel gp = new GraphicsPanel();

            MainWindow() {
                EventQueue.invokeLater(new Runnable() {
                    public void run() {
                        frame = new JFrame("Graphics Practice");
                        frame.setSize(680, 420);
                        frame.setVisible(true);
                        frame.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                        frame.add(gp);
                    }
                });
            }
        }

    Custom JComponent:

        import java.awt.Color;
        import java.awt.Dimension;
        import java.awt.Graphics;
        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import javax.imageio.ImageIO;
        import javax.swing.JComponent;

        public class Entity extends JComponent implements Runnable {
            private BufferedImage bImg;
            private int x = 0;
            private int y = 0;
            private int entityWidth, entityHeight;
            private String filename;

            Entity(String filename) {
                this.filename = filename;
            }

            public void run() {
                bImg = loadBImage(filename);
                entityWidth = bImg.getWidth();
                entityHeight = bImg.getHeight();
                setPreferredSize(new Dimension(entityWidth, entityHeight));
            }

            @Override
            public void paintComponent(Graphics g) {
                super.paintComponent(g);
                Graphics2D g2d = (Graphics2D) g.create();
                g2d.drawImage(bImg, x, y, null);
                g2d.dispose();
            }

            public BufferedImage loadBImage(String filename) {
                try {
                    bImg = ImageIO.read(getClass().getResource(filename));
                } catch (Exception e) {
                }
                return bImg;
            }

            public int getEntityWidth() { return entityWidth; }
            public int getEntityHeight() { return entityHeight; }
            public int getX() { return x; }
            public int getY() { return y; }
            public void setX(int x) { this.x = x; }
            public void setY(int y) { this.y = y; }
        }
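
    One likely cause, for what it's worth: BoxLayout sizes children from their preferred sizes, but each Entity's preferred size is only set from a background thread after the panel has already been laid out, so the components are laid out too small and the images get clipped. Note also that Entity overrides getX()/getY(), which Swing itself uses to position components; renaming those accessors avoids confusing the layout. A minimal sketch of the usual fix, letting the component report its own size:

        // Inside Entity: let the layout manager ask for the size instead of
        // relying on setPreferredSize() from a background thread.
        @Override
        public Dimension getPreferredSize() {
            if (bImg == null) {
                return new Dimension(0, 0); // image not loaded yet
            }
            return new Dimension(bImg.getWidth(), bImg.getHeight());
        }

    If the images must load asynchronously, calling revalidate() and repaint() on the Swing thread once loading finishes makes the layout recompute with the real sizes.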

    Read the article

  • Importing an Excel WorkSheet into a Datatable

    - by Nick LaMarca
    I have been asked to create import functionality in my application. I am getting an Excel worksheet as input. The worksheet has column headers followed by data. The users want to simply select an .xls file from their system and click upload, and the tool deletes the table in the database and adds this new data. I thought the best way would be to bring the data into a DataTable object and do a foreach over every row in the DataTable, inserting row by row into the db. My question is: can anyone give me code to open an Excel file, know which line the data starts on in the file, and import the data into a DataTable object?
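
    For what it's worth, a minimal sketch of one common approach using the Jet OLE DB provider (the sheet name Sheet1 is an assumption; HDR=Yes treats the first row as column headers, which matches a worksheet whose data starts right under the header row):

        using System.Data;
        using System.Data.OleDb;

        public static class ExcelImport
        {
            public static DataTable LoadWorksheet(string path)
            {
                string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                                 "Data Source=" + path + ";" +
                                 "Extended Properties=\"Excel 8.0;HDR=Yes\"";
                DataTable table = new DataTable();
                using (OleDbConnection conn = new OleDbConnection(connStr))
                using (OleDbDataAdapter adapter =
                           new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
                {
                    adapter.Fill(table); // column names come from the header row
                }
                return table;
            }
        }

    From there, the foreach over table.Rows can do the per-row inserts the question describes.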

    Read the article

  • How do I get code coverage of a Perl CGI script when executed by Selenium?

    - by Kurt W. Leucht
    I'm using the Eclipse EPIC IDE to write some Perl CGI scripts, which call some Perl modules that I have also written. EPIC lets me configure a Perl CGI "run configuration" that runs my CGI script. I've also got Selenium set up, and one of my unit test files runs some Selenium commands to put my CGI script through its paces. But the coverage report from Module::Build dispatch 'testcover' doesn't show that any of my module code has been executed. It has been executed, by my CGI script, but I guess the CGI script was run separately and not executed directly by my unit test file, so maybe that's why the coverage isn't being recognized. Is there a way to do this right, so I can integrate Selenium, unit test files, and code coverage all together somehow?
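
    One pattern that may help, offered as a sketch: Devel::Cover can instrument any Perl process, including a CGI process started by the web server, via the PERL5OPT environment variable, as long as every run writes to the same coverage database (the /tmp/cover_db path below is a placeholder):

        # start the web server with coverage enabled for the CGI process:
        export PERL5OPT=-MDevel::Cover=-db,/tmp/cover_db
        # run the unit tests against the same coverage database:
        PERL5OPT=-MDevel::Cover=-db,/tmp/cover_db prove t/
        # then build one combined report:
        cover /tmp/cover_db

    With that in place, the Selenium-driven requests and the direct unit tests accumulate into one coverage run, which is what the testcover report seems to be missing.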

    Read the article

  • How to retain headers on all pages of an exported PDF in PHP?

    - by udaya
    Hi, I am exporting data from a PHP page to PDF. When the data exceeds the page limit, the header is not available on the consecutive pages. The function where I call the export to PDF is:

        function changeDetails()
        {
            $bType = $this->input->post('textvalue');
            if ($bType == "pdf")
            {
                $this->load->library('table');
                $this->load->plugin('to_pdf');
                $data['countrytoword'] = $this->AddEditmodel1->export();
                $this->table->set_heading('Country', 'State', 'Town', 'Name');
                $out = $this->table->generate($data['countrytoword']);
                $html = $this->load->view('newpdf', $data, true);
                pdf_create($html, $cur_date);
            }
        }

    My view page, from which I export the data to PDF, renders a table with the headers Name, Country, State, and Town. Here is the result I am getting:

        page:1
        Name     country   State      Town
        udaya    india     Tamilnadu  kovai
        chandru  srilanka  columbo    aaaaa

        page:2
        vivek    england   gggkj      gjgjkj

    On page 2 I don't get the headers Name, Country, State, and Town.
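
    A sketch of one common fix, with the caveat that it depends on which HTML-to-PDF engine the to_pdf plugin wraps: most engines of that era (dompdf, html2pdf) repeat a table's <thead> rows automatically on every page break, so the trick is to make the CodeIgniter table library emit the heading row inside <thead> and the data inside <tbody>:

        $this->table->set_template(array(
            'heading_row_start' => '<thead><tr>',            // wrap headings in <thead>
            'heading_row_end'   => '</tr></thead><tbody>',
            'table_close'       => '</tbody></table>'
        ));
        $this->table->set_heading('Country', 'State', 'Town', 'Name');

    If the engine in use does not honor <thead>, the fallback is to page the data manually and re-emit the heading per page.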

    Read the article

  • Grails testing hiccups

    - by egervari
    I have two testing questions. Both are probably easily answered. The first: I wrote this unit test in Grails:

        void testCount() {
            mockDomain(UserAccount)
            new UserAccount(firstName: "Ken").save()
            new UserAccount(firstName: "Bob").save()
            new UserAccount(firstName: "Dave").save()
            assertEquals(3, UserAccount.count())
        }

    For some reason, I get 0 returned back. Did I forget to do something? The second question is for those who use IDEA. What should I be running - IDEA's JUnit tests, or Grails targets? I have two options. Also, why does IDEA say that my tests pass and give a green light even though the test above actually fails? This will really drive me nuts if I have to check the test reports in HTML every time I run my tests. Help?

    Read the article

  • How does the kernel give a segmentation fault in a scenario like this?

    - by bala1486
    I have a doubt about accessing some invalid data. How will the OS cause a segmentation fault in a scenario like this? Suppose a data segment has some 100 bytes. This will be mapped and a page table entry will be created. But the page size is 4K. Consider that the data segment is aligned with this page boundary. First, consider accessing some valid data within the 100 bytes; now the page table entry is in the TLB. Next, if you try to access some invalid data between byte 100 and 4K, the entry is there in the page table - will the access to the invalid data be allowed? Thanks, Bala
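
    For illustration only (this is not from the question): the MMU can only protect memory at page granularity, so a byte past the end of the real data but inside a mapped page is reachable without any fault; the SIGSEGV only comes when an access crosses into an unmapped page. A small Linux-flavored sketch (Solaris spells MAP_ANONYMOUS as MAP_ANON):

        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            long page = sysconf(_SC_PAGESIZE);          /* typically 4096 */
            char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                           MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
            if (p == MAP_FAILED) return 1;
            memset(p, 'x', 100);   /* the "valid" 100 bytes */
            p[2048] = 'y';         /* past the data, inside the page: no fault */
            printf("no SIGSEGV so far: %c\n", p[2048]);
            /* p[page] = 'z';         first byte of the next, unmapped page: SIGSEGV */
            munmap(p, page);
            return 0;
        }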

    Read the article

  • Unbinding inline onClick not working in jQuery?

    - by Polaris878
    Okay so, I'm wondering how to unbind an inline onclick event in jQuery. You'd think .unbind() would work, however it doesn't. To test this for yourself, play around with the following HTML and JavaScript:

        function UnbindTest() {
            $("#unbindTest").unbind('click');
        }

        function BindTest() {
            $("#unbindTest").bind('click', function() {
                alert("bound!");
            });
        }

        <button type="button" onclick="javascript:UnbindTest();">Unbind Test</button>
        <button type="button" onclick="javascript:BindTest();">Bind Test</button>
        <button type="button" onclick="javascript:alert('unbind me!');" id="unbindTest">Unbind Button</button>

    As you can see, unbinding does not unbind the inline onclick event... however it does unbind the click event added with bind(). So, I'm wondering if there is a way to unbind inline onclick events short of doing the following:

        $("#unbindTest").get(0).onclick = "";

    Thanks
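
    For context, a hedged note: jQuery's .unbind() only removes handlers that jQuery itself attached, while an inline onclick attribute lives on the DOM element. Two idioms commonly used instead:

        // clear the DOM-level handler (close to what the question already does):
        $("#unbindTest").get(0).onclick = null;

        // or drop the attribute entirely:
        $("#unbindTest").removeAttr("onclick");

    Setting the property to null rather than "" is the more conventional form, since onclick is a function-valued property.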

    Read the article

  • Paint a Java GUI component to an image file

    - by Simon
    Let's say I have

        JButton test = new JButton("Test Button");

    and I want to draw the button into an image object and save it to a file. I tried this:

        BufferedImage b = new BufferedImage(500, 500, BufferedImage.TYPE_INT_ARGB);
        test.paint(b.createGraphics());
        File output = new File("C:\\screenie.png");
        try {
            ImageIO.write(b, "png", output);
        } catch (IOException e) {
            // TODO Auto-generated catch block
            e.printStackTrace();
        }

    This code produced an empty 500x500 PNG file. Does anyone know how I can draw the GUI component to an image file?
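
    A plausible explanation, offered as a sketch: a component that has never been laid out has a size of 0x0, so painting it produces nothing. Giving it a size before painting (and sizing the image to match) usually fixes it:

        import java.awt.Graphics2D;
        import java.awt.image.BufferedImage;
        import java.io.File;
        import javax.imageio.ImageIO;
        import javax.swing.JButton;

        public class ButtonSnapshot {
            public static void main(String[] args) throws Exception {
                JButton test = new JButton("Test Button");
                test.setSize(test.getPreferredSize()); // an unsized component paints nothing
                test.doLayout();
                BufferedImage b = new BufferedImage(test.getWidth(), test.getHeight(),
                                                    BufferedImage.TYPE_INT_ARGB);
                Graphics2D g2 = b.createGraphics();
                test.paint(g2);
                g2.dispose();
                ImageIO.write(b, "png", new File("C:\\screenie.png"));
            }
        }

    Sizing the image to the button's own width and height also avoids the mostly empty 500x500 canvas; some look-and-feels may additionally want the component realized in a container first.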

    Read the article

  • Issue with JSON and jQuery

    - by Jason N. Gaylord
    I'm calling a web service and returning the following data in JSON format:

        ["OrderNumber":"12345","CustomerId":"555"]

    In my AJAX success callback, I'm trying to parse both fields:

        $.ajax({
            type: "POST",
            url: "MyService.asmx/ServiceName",
            data: "{}",
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function(msg) {
                var data = msg.d;
                var rtn = "";
                $.each(data, function(list) {
                    rtn = rtn + this.OrderNumber + ", " + this.CustomerId + "<br/>";
                });
                rtn = rtn + "<br/>" + data;
                $("#test").html(rtn);
            }
        });

    but I'm getting a bunch of "undefined, undefined" rows followed by the correct JSON string. Any idea why? I've tried using the eval() method, but that didn't help; I got an error message about ']' being expected.
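
    A hedged observation: the symptoms (one "undefined, undefined" row per character, then the raw text) are what $.each produces when data is a string rather than a parsed object - iterating a string visits each character, and a character has no OrderNumber property. That happens when the service returns its JSON as a string inside msg.d; and the string shown (["OrderNumber":"12345", ...]) is not valid JSON anyway, since brackets denote an array, which cannot hold key:value pairs. If the service returns an object, or valid JSON such as {"OrderNumber":"12345","CustomerId":"555"} or a list of such objects, the loop can be written as:

        success: function(msg) {
            var data = msg.d;           // expect an array of objects here
            var rtn = "";
            $.each(data, function(i, item) {
                rtn += item.OrderNumber + ", " + item.CustomerId + "<br/>";
            });
            $("#test").html(rtn);
        }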

    Read the article

  • MongoMapper - undefined method `keys'

    - by nimnull
    I'm trying to create a Document instance with params passed from the post-submitted form. My mapped document looks like:

        class Good
          include MongoMapper::Document

          key :title, String
          key :cost, Float
          key :description, String
          timestamps!

          many :attributes

          validates_presence_of :title, :cost
        end

    And the create action:

        def create
          @good = Good.new(params[:good])
          if @good.save
            redirect_to @good
          else
            render :new
          end
        end

    params[:good] contains all valid document attributes - {"good"=>{"cost"=>"2.30", "title"=>"Test good", "description"=>"Test description"}} - but I've got a strange error from Rails:

        undefined method `keys' for ["title", "Test good"]:Array

    My gem list:

        *** LOCAL GEMS ***
        actionmailer (2.3.8), actionpack (2.3.8), activerecord (2.3.8), activeresource (2.3.8),
        activesupport (2.3.8), authlogic (2.1.4), bson (1.0), bson_ext (1.0), compass (0.10.1),
        default_value_for (0.1.0), haml (3.0.6), jnunemaker-validatable (1.8.4), mongo (1.0),
        mongo_ext (0.19.3), mongo_mapper (0.7.6), plucky (0.1.1), rack (1.1.0), rails (2.3.8),
        rake (0.8.7), rubygems-update (1.3.7)

    Any suggestions how to fix this error?
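
    A possible culprit, offered as a guess: many :attributes defines attributes/attributes= accessors for the association, shadowing the attributes= that Good.new(params[:good]) uses to assign the form hash - which would explain mass assignment blowing up on each key/value pair. Renaming the association is the usual way out; the :class_name below is an assumption about how the embedded model is named:

        class Good
          include MongoMapper::Document

          key :title, String
          key :cost, Float
          key :description, String
          timestamps!

          # avoid shadowing Document#attributes=
          many :properties, :class_name => 'Attribute'

          validates_presence_of :title, :cost
        end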

    Read the article

  • Connecting to multiple Firebird databases via Delphi

    - by Branden
    I am integrating a system with two other applications, one using a Firebird database and the other, BIS, using ADO. My Delphi application uses Firebird. I need to read data from my database and insert it into both the BIS database and the other application's Firebird database. I have created separate data modules for each. Sending data to the ADO database works fine, but when writing to the other Firebird DB (with my own DB still open) I get strange errors. I have managed to isolate the problem to the second Firebird DB; small data writes seem fine. The data structures are completely different, so I am unable to use a sync tool. Is there a way to overcome this by using multithreading, or by giving each Firebird instance its own memory space?

    Read the article

  • Accents in file name using Java on Solaris

    - by Stef
    I have a problem where I can't write files with accents in the file name on Solaris. Given the following code:

        public static void main(String[] args) {
            System.out.println("Charset = " + Charset.defaultCharset().toString());
            System.out.println("testéörtkuoë");
            FileWriter fw = null;
            try {
                fw = new FileWriter("testéörtkuoë");
                fw.write("testéörtkuoëéörtkuoë");
                fw.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }

    I get the following output:

        Charset = ISO-8859-1
        test??rtkuo?

    and I get a file called "test??rtkuo?". Based on info I found on Stack Overflow, I tried starting the Java app with "-Dfile.encoding=UTF-8". This returns the following output:

        Charset = UTF-8
        testéörtkuoë

    But the filename is still "test??rtkuo?". Any help is much appreciated. Stef
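
    A hedged pointer: on Unix-like systems the JVM encodes file names using the platform locale, surfaced as the sun.jnu.encoding system property, which -Dfile.encoding does not change on its own. Checking both properties usually shows the mismatch:

        System.out.println("file.encoding    = " + System.getProperty("file.encoding"));
        System.out.println("sun.jnu.encoding = " + System.getProperty("sun.jnu.encoding"));

    The usual remedy is to start the JVM under a UTF-8 locale (for example LC_ALL=en_US.UTF-8 in the environment on Solaris) so that file names and file contents agree; sun.jnu.encoding is an internal, undocumented property, so treat this as a sketch rather than a guarantee.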

    Read the article

  • Creating method templates in Eclipse

    - by stevebot
    Is there any way to do the following in Eclipse? Have Eclipse template a method like:

        public void test() {
            // CREATE MOCKS
            // CREATE EXPECTATIONS
            // REPLAY MOCKS
            // VERIFY MOCKS
        }

    so that I could presumably just use IntelliSense and select an option like "createtest" to have it stub out a method with comments similar to the above? My problem is that myself and the other developers I know often forget all the steps we need to follow for what we dub a valid unit test for our application. If I could template our test methods to stub out the comments above, it would be a big help.
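
    For reference, a sketch of how this is normally done (menu locations from memory, so verify against your Eclipse version): under Preferences > Java > Editor > Templates, add a template named createtest with a body along these lines, and it then appears in the Ctrl+Space content-assist list inside a class body:

        public void test${name}() {
            // CREATE MOCKS
            ${cursor}
            // CREATE EXPECTATIONS
            // REPLAY MOCKS
            // VERIFY MOCKS
        }

    ${name} becomes an editable placeholder for the method suffix, and ${cursor} sets where the caret lands after insertion.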

    Read the article

  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?

    Read the article

  • Weird Facebooker Plugin & Phusion Passenger ModRails Production Error

    - by Ranknoodle
    I have an application (Rails 2.3.5) that I'm deploying to a production Linux/Apache server using the latest Phusion Passenger Apache module (2.2.11). After deploying my original application, it returns a 500 error with nothing logged to the production log. So I created a minimal test Rails application with some ActiveRecord calls to the database to print a list of objects via the home controller/index page, and I cleared out all plugins. That works fine in the production environment. Then I introduced each plugin I'm using, one at a time. Every plugin works fine EXCEPT Facebooker. Every time I load the facebooker plugin into my app's vendor/plugins directory (via script, git, etc.) my test application breaks (500 error, no error logging). Every time I remove the facebooker plugin, my test application works. Has anyone seen this before, or have any solutions? I saw this solution but didn't see it in the facebooker code.

    Read the article

  • SSIS web service task parsing result.

    - by dbengals
    I have an SSIS (2005) package that uses the Web Service task to download to a file destination. The file contains a string of XML data. Once downloaded, the file looks like this:

        <?xml version="1.0" encoding="utf-16"?>
        <string>--here is XML data with escaped characters--</string>

    My thought was that I could then use the XML Source data flow source to pull the <string> data, but when I set this up, the XML Source will not read <string> as a column. It generates an XSD that seems normal, but no luck seeing the column. Any ideas on getting this to work? Or would there be a better way to pull the data out of the file generated from the web service? Thanks.

    Read the article

  • Not able to open a file in PHP

    - by ehsanul
    The following code works when invoked through the command line with php -f test.php, as root. It does not work, though, when invoked via Apache by loading the PHP page. The code chokes at fopen() and the resulting web page just says "can't open file".

        <?php
        $fp = fopen("/path/to/some_file.txt", "a") or die("can't open file");
        fwrite($fp, "some text");
        fclose($fp);
        ?>

    I tried to play with the file permissions, but to no avail. I changed the user/group with chown apache:apache test.php and changed permissions with chmod 755 test.php. Here is the relevant result of ls -l /path/to/some_file.txt:

        -rwxr-xr-x 1 apache apache 0 Apr 12 04:16 some_file.txt
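
    A couple of hedged debugging leads, since the listed permissions say apache can already write that file: appending also requires search (execute) permission on every directory in /path/to, and on Red Hat-style systems SELinux will silently block httpd from writing outside its allowed contexts even when classic permissions look right (the audit log, typically /var/log/audit/audit.log, shows the denial). A quick way to surface the real error from PHP:

        <?php
        $path = "/path/to/some_file.txt";
        if (!is_writable($path)) {
            die("not writable for the web server user: " . $path);
        }
        $fp = fopen($path, "a");
        if ($fp === false) {
            $err = error_get_last();             // the actual reason fopen failed
            die("fopen failed: " . $err['message']);
        }
        fwrite($fp, "some text");
        fclose($fp);
        ?>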

    Read the article

  • Excluding files from being deployed with Capistrano while still under version control with Git

    - by Jimmy Cuadra
    I want to start testing the JavaScript in my Rails apps with qUnit and I'm wondering how to keep the test JavaScript and test runner HTML page under version control (I'm using Git, of course) but keep them off the production server when I deploy the app with Capistrano. My first thought is to let Capistrano send all the code over as usual including the test files, and write a task to delete them at the end of the deployment process. This seems like sort of a hack, though. Is there a cleaner way to tell Capistrano to ignore certain parts of the repository when deploying?
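
    One clean-feeling pattern with Capistrano 2, sketched with placeholder paths: keep the test assets in Git, and hook a task after deploy:update_code that removes them from the just-created release before it goes live:

        after "deploy:update_code", "deploy:remove_test_assets"

        namespace :deploy do
          desc "Strip qUnit tests from the release"
          task :remove_test_assets do
            run "rm -rf #{release_path}/public/javascripts/tests"  # placeholder path
          end
        end

    It is still deletion after the fact, but scoped to the release directory and versioned with the rest of the deploy recipe; Capistrano's copy strategy also supports copy_exclude for filtering at packaging time.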

    Read the article

  • Python UTF-8 text file reading problem

    - by cpps
    I have a problem with Python reading and printing a UTF-8 text file. I have a test.txt in UTF-8 encoding without a BOM. This file has two characters in it: ?? - the first character "?" is Chinese and the second "?" is Japanese. Now, when I use Ulipad (a Python editor) to run the following code to read the txt file and print these two characters:

        import codecs
        infile = "C:\\test.txt"
        f = codecs.open(infile, "r", "utf-8")
        s = f.read()
        print(s)

    I get this error:

        UnicodeEncodeError: 'cp950' codec can't encode character '\u58f0' in position 1: illegal multibyte sequence

    I found it is caused by the second character "?". But when I use the same code in Python's default IDLE GUI, it prints the two characters with no error. So, how can I fix the problem? My running environment is Python 3.1 on Windows XP, Traditional Chinese.
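
    A hedged explanation: the error comes from where the output goes, not from the read - Ulipad is apparently capturing stdout with the Windows console code page (cp950), which has no encoding for '\u58f0', while IDLE's shell is Unicode-aware. Two workarounds that sidestep the console entirely:

        # 1. Write the text to a UTF-8 file instead of printing it:
        with open("C:\\out.txt", "w", encoding="utf-8") as out:
            out.write(s)

        # 2. Or print a lossy-but-safe rendering for the cp950 console:
        print(s.encode("cp950", "replace").decode("cp950"))

    Setting the PYTHONIOENCODING environment variable to utf-8 before launching can also change what Python 3 uses for stdout, though whether the hosting editor then displays it correctly is another matter.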

    Read the article

  • Parsing a JSON file with Python -> Google Maps API

    - by Hannes
    Hi all, I am trying to get started with JSON in Python, but it seems that I misunderstand something in the JSON concept. I followed the Google API example, which works fine. But when I change the code to reach a lower level in the JSON response (as shown below, where I try to get access to the location), I get the following error message:

        Traceback (most recent call last):
          File "geoCode.py", line 11, in <module>
            test = json.dumps([s['location'] for s in jsonResponse['results']], indent=3)
        KeyError: 'location'

    How can I get access to a lower information level in the JSON file in Python? Do I have to go to a higher level and search the result string? That seems very weird to me. Here is the code I have tried to run:

        import urllib, json

        URL2 = "http://maps.googleapis.com/maps/api/geocode/json?address=1600+Amphitheatre+Parkway,+Mountain+View,+CA&sensor=false"
        googleResponse = urllib.urlopen(URL2)
        jsonResponse = json.loads(googleResponse.read())
        test = json.dumps([s['location'] for s in jsonResponse['results']], indent=3)
        print test

    Thank you for your responses.
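
    A hedged pointer based on the Geocoding API's documented response shape: location is not a direct child of each result - it is nested under geometry, so the KeyError is expected. Indexing one level deeper should work:

        # each result looks like: {"geometry": {"location": {"lat": ..., "lng": ...}}, ...}
        locations = [s['geometry']['location'] for s in jsonResponse['results']]
        print json.dumps(locations, indent=3)

    Printing json.dumps(jsonResponse, indent=3) once is a handy way to see the whole tree before picking paths into it.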

    Read the article

  • How do I automatically execute JavaScript?

    - by user317005
    How do I automatically execute JavaScript? I know of <body onload="">, but I just thought maybe there is another way to do it?

    HTML:

        <html><head></head><body><div id="test"></div></body></html>

    JavaScript:

        <script>
        (function() {
            var text = document.getElementById('test').innerHTML;
            var newtext = text.replace('', '');
            return newtext;
        })();
        </script>

    I want to get the text within "test", replace certain parts, and then output it to the browser. Any ideas on how to do it? I'd appreciate any help. Thanks.
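
    Two hedged observations on the snippet: a script placed after the element (or run from window.onload) executes automatically without body onload, and returning a value from the immediately-invoked function does not change the page - the new text has to be assigned back. A sketch, where 'foo' and 'bar' stand in for whatever is actually being replaced:

        <script>
        window.onload = function () {
            var el = document.getElementById('test');
            // assign the result back; a bare return goes nowhere
            el.innerHTML = el.innerHTML.replace('foo', 'bar');
        };
        </script>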

    Read the article

  • Zend_XmlRpc: "Failed to parse response" error

    - by davykiash
    I am trying to get a simple hello-world XML-RPC server setup to work. However, I get this "Failed to parse response" error when I run the test URL http://localhost/client/index/ in my browser. In my Rpc controller, which handles all my XML-RPC calls:

        class RpcController extends Zend_Controller_Action
        {
            public function init()
            {
                $this->_helper->layout->disableLayout();
                $this->_helper->viewRenderer->setNoRender();
            }

            public function xmlrpcAction()
            {
                $server = new Zend_XmlRpc_Server();
                $server->setClass('Service_Rpctest', 'test');
                $server->handle();
            }
        }

    In my client controller, which calls the XML-RPC server:

        class ClientController extends Zend_Controller_Action
        {
            public function indexAction()
            {
                $clientrpc = new Zend_XmlRpc_Client('http://localhost/rpc/xmlrpc/');
                // Render output to the view
                $this->view->rpcvalue = $clientrpc->call('test.sayHello');
            }
        }

    And my Service_Rpctest class:

        <?php
        class Service_Rpctest
        {
            /**
             * Return the Hello string
             *
             * @return string
             */
            public function sayHello()
            {
                $value = 'Hello';
                return $value;
            }
        }

    What am I missing?
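
    Two hedged things to check: Zend_XmlRpc_Server::handle() returns the response rather than printing it, so the action probably needs echo $server->handle(); - an empty body is exactly what makes the client report "Failed to parse response". Anything else the server emits (PHP notices, whitespace before <?php) corrupts the XML the same way. The raw payload can be inspected from the client side:

        $clientrpc = new Zend_XmlRpc_Client('http://localhost/rpc/xmlrpc/');
        try {
            $this->view->rpcvalue = $clientrpc->call('test.sayHello');
        } catch (Exception $e) {
            // dump whatever the server actually sent back
            var_dump($clientrpc->getHttpClient()->getLastResponse()->getBody());
        }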

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was to check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs that backed up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place, like Microsoft System Center Operations Manager (SCOM) or similar 3rd-party products, that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it. Now, it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster... we don't need backups"

    Sadly, I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the operating system level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective, or RTO. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost-effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of them? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICAL_ONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICAL_ONLY checks further - you can read more about this enhancement here.

    Back to the "how often" question for a second: if you have the disk, the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
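
    To make the workflow concrete, a hedged T-SQL sketch (the database name and paths are placeholders; WITH CHECKSUM and the verification steps are the parts that matter):

        -- full backup, with page checksums validated as the backup is written
        BACKUP DATABASE [YourDb]
            TO DISK = N'X:\Backups\YourDb_full.bak'
            WITH CHECKSUM, INIT;

        -- frequent log backups keep the RPO small
        BACKUP LOG [YourDb]
            TO DISK = N'X:\Backups\YourDb_log.trn'
            WITH CHECKSUM;

        -- a cheap sanity check of the backup file itself
        RESTORE VERIFYONLY
            FROM DISK = N'X:\Backups\YourDb_full.bak'
            WITH CHECKSUM;

        -- on the restore-test server: the real consistency check
        DBCC CHECKDB (N'YourDb') WITH NO_INFOMSGS;

        -- on production, the lighter physical-only pass
        DBCC CHECKDB (N'YourDb') WITH PHYSICAL_ONLY;

    None of this replaces an actual periodic restore; RESTORE VERIFYONLY checks the file, not the database inside it.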

    Read the article
