Search Results

Search found 58551 results on 2343 pages for 'spatial data'.

Page 599 of 2343

  • Java or php tree structure problem

    - by agazerboy
    Hi all, I have all my data in my database, in a table with the following four columns:

    id | source_clust | target_clust | result_clust
    1  | 7            | 72           | 649
    2  | 9            | 572          | 650
    3  | 649          | 454          | 651
    4  | 32           | 650          | 435

    This data forms a tree structure: a source_clust and a target_clust combine to produce a result_clust, and a result_clust can in turn appear as the source_clust or target_clust of a later row. Is there a PHP function or class I can use to generate the tree structure for my data? I saw a MySQL site doing exactly what I need, but I couldn't work out how to apply that query to my data. Thanks!
    Edit: Is there any way to do this in Java, if we have the same data in an array?
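    Since the edit asks about Java, here is a minimal sketch of one way to build such a tree in memory, assuming each result_clust simply links the two clusters that produced it; the Node class and the hard-coded rows are illustrative, not code from the question:

    ```java
    import java.util.*;

    // Build a tree of cluster merges from (source_clust, target_clust, result_clust) rows.
    public class ClusterTree {
        static class Node {
            final int id;
            Node left, right;                 // the two clusters that were merged
            Node(int id) { this.id = id; }
        }

        public static void main(String[] args) {
            int[][] rows = { {7, 72, 649}, {9, 572, 650}, {649, 454, 651}, {32, 650, 435} };
            Map<Integer, Node> nodes = new HashMap<>();

            for (int[] r : rows) {
                Node source = nodes.computeIfAbsent(r[0], Node::new);
                Node target = nodes.computeIfAbsent(r[1], Node::new);
                Node result = nodes.computeIfAbsent(r[2], Node::new);
                result.left = source;
                result.right = target;
            }

            // Print each merge as: result <- (source, target)
            for (Node n : nodes.values()) {
                if (n.left != null) {
                    System.out.println(n.id + " <- (" + n.left.id + ", " + n.right.id + ")");
                }
            }
        }
    }
    ```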

    Read the article

  • InnoDB Compression Improvements in MySQL 5.6

    - by Inaam Rana
    MySQL 5.6 comes with significant improvements to the compression support inside InnoDB. The enhancements that we'll talk about in this piece are also a good example of community contributions: the work on them was conceived, implemented and contributed by engineers at Facebook. Before we plunge into the details, let us familiarize ourselves with some of the key concepts surrounding InnoDB compression. In InnoDB, compressed pages are fixed size. Supported sizes are 1, 2, 4, 8 and 16K. The compressed page size is specified at table creation time. InnoDB uses zlib for compression. The InnoDB buffer pool will attempt to cache compressed pages like normal pages. However, whenever a page is actively used by a transaction, we'll always have the uncompressed version of the page as well; i.e., we can have a page in the buffer pool in compressed-only form, or in a state where we have both the compressed page and the uncompressed version, but we'll never have a page in uncompressed-only form. On disk we'll only ever have the compressed page. When both compressed and uncompressed images are present in the buffer pool, they are always kept in sync; i.e., changes are applied to both atomically. Recompression happens when changes are made to the compressed data. In order to minimize recompressions, InnoDB maintains a modification log within a compressed page. This is the extra space left in the page after compression, and it is used to log modifications to the compressed data, thus avoiding recompressions. DELETE (and ROLLBACK of DELETE) and purge can be performed without recompressing the page. This is because the delete-mark bit and the system fields DB_TRX_ID and DB_ROLL_PTR are stored in uncompressed format on the compressed page. A record can be purged by shuffling entries in the compressed page directory. This can also be useful for updates of indexed columns, because an UPDATE of a key is mapped to INSERT + DELETE + purge. A compression failure happens when we attempt to recompress a page and it does not fit in the fixed size. In such a case, we first try to reorganize the page and attempt to recompress, and if that fails as well, we split the page into two and recompress both pages. Now let's talk about the three major improvements that we made in MySQL 5.6.
    Logging of Compressed Page Images: InnoDB used to log the entire compressed data on the page to the redo logs when recompression happened. This was an extra safety measure to guard against the rare case where an attempt is made to do recovery using a different zlib version from the one that was used before the crash. Because recovery is a page-level operation in InnoDB, we have to be sure that all recompression attempts succeed without causing a btree page split. However, writing entire compressed data images to the redo log files not only makes the operation heavy duty but can also adversely affect flushing activity. This happens because redo space is used in a circular fashion: when we generate much more redo than normal, we fill up the space much more quickly, and in order to reuse the redo space we have to flush the corresponding dirty pages from the buffer pool. Starting with MySQL 5.6, this logging is controlled by a new global configuration parameter, innodb_log_compressed_pages. The default value is true, which matches the previous behavior. If you are sure that you are not going to attempt to recover from a crash using a different version of zlib, then you should set this parameter to false. This is a dynamic parameter.
    Compression Level: You can now set the compression level that zlib should use to compress the data. The global parameter is innodb_compression_level; the default value is 6 (the zlib default) and allowed values are 1 to 9. Again, the parameter is dynamic, i.e., you can change it on the fly.
    Dynamic Padding to Reduce Compression Failures: Compression failures are expensive in terms of CPU. We go through the hoops of recompress, failure, reorganize, recompress, failure and finally page split. At the same time, how often we encounter compression failures depends largely on the compressibility of the data. In MySQL 5.6, courtesy of Facebook engineers, we have an adaptive algorithm based on per-index statistics that we gather about compression operations. The idea is that if a certain index/table is experiencing too many compression failures, then we should pack the 16K uncompressed version of the page less densely; i.e., we let some space in the 16K page go unused in the hope that recompression won't end up in a failure. In other words, we dynamically keep adding 'pad' to the 16K page until compression failures fall within an acceptable range. It works the other way as well: we'll keep removing the pad if the failure rate is fairly low. To tune the padding effort, two configuration variables are exposed. innodb_compression_failure_threshold_pct: default 5, range 0 - 100, dynamic; the percentage of compression operations that must fail before we start padding. The value 0 has the special meaning of disabling padding altogether. innodb_compression_pad_pct_max: default 50, range 0 - 75, dynamic; the maximum percentage of the uncompressed data page that can be reserved as pad.
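    As a minimal illustration (a sketch assuming a MySQL 5.6 server; the table and column names are made up), the compression settings described above can be created and tuned like this:

    ```sql
    -- Compressed tables need file-per-table tablespaces and the Barracuda format.
    SET GLOBAL innodb_file_per_table = ON;
    SET GLOBAL innodb_file_format = 'Barracuda';

    -- A table stored in 8K compressed pages.
    CREATE TABLE t_compressed (
      id      BIGINT PRIMARY KEY,
      payload BLOB
    ) ENGINE=InnoDB ROW_FORMAT=COMPRESSED KEY_BLOCK_SIZE=8;

    -- Skip logging of recompressed page images (only safe if crash recovery
    -- will always use the same zlib version).
    SET GLOBAL innodb_log_compressed_pages = OFF;

    -- Trade CPU for compression ratio: 1 = fastest, 9 = smallest.
    SET GLOBAL innodb_compression_level = 4;

    -- Start padding pages once more than 5% of compression attempts fail,
    -- reserving at most 50% of the page as pad.
    SET GLOBAL innodb_compression_failure_threshold_pct = 5;
    SET GLOBAL innodb_compression_pad_pct_max = 50;
    ```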

    Read the article

  • HTML: upload-form in an other form

    - by chris
    Hi! I have a little problem with an upload form nested inside another form (call it the data form). I know it is not possible to put one form inside another, so I would need to put the upload form after my data form. But I need the upload form's controls in the middle of my data form for visual and structural reasons, and the file upload should perform different actions than the data form does. Any idea how I can place the upload form after my data form in the markup but have it appear inside it, or any other way to handle this? I am using JavaScript and PHP. Thanks, and best wishes for 2011! br, chris
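    One possible approach (a sketch that assumes a browser supporting the HTML5 form attribute; the element names and action URLs are made up) is to keep the upload controls visually inside the data form's layout while associating them with a separate form placed after it:

    ```html
    <!-- Data form: the regular fields -->
    <form id="dataForm" action="save_data.php" method="post">
      <input type="text" name="title">

      <!-- These controls sit inside dataForm visually, but belong to
           uploadForm via their form="uploadForm" attribute. -->
      <input type="file" name="attachment" form="uploadForm">
      <button type="submit" form="uploadForm">Upload file</button>

      <input type="submit" value="Save data">
    </form>

    <!-- Upload form: placed after the data form, with its own action -->
    <form id="uploadForm" action="upload_file.php" method="post"
          enctype="multipart/form-data"></form>
    ```

    Where older browsers have to be supported, the usual fallback is a separate upload form targeting a hidden iframe, positioned over the data form with CSS.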

    Read the article

  • Access Services in SharePoint Server 2010

    - by Wayne
    Another SharePoint Server 2010 feature which cannot go unnoticed is Access Services. Access Services is a service in SharePoint Server 2010 that allows administrators to view, edit, and configure a Microsoft Access application within a web browser. Access Services settings support backup and recovery, regardless of whether there is a UI setting in Central Administration. However, backup and recovery only apply to service-level and administrative-level settings; end-user content from the Access application is not backed up as part of this process. Access Services has Windows PowerShell functionality that can be used to provision the service with settings from a previous backup; configure and manage macro and query settings; manage and configure session management; and configure all the global settings of the service.
    Key benefits of SharePoint Server Access Services. Easier access to the right tools: The enhanced, customizable Ribbon in Access 2010 makes it easy to uncover more commands so you can focus on the end product. The new Microsoft Office Backstage view is yet another feature that can help you easily analyze and document your database, share, publish, and customize your Access 2010 experience, all from one convenient location. Helps build databases effortlessly and quickly: Out-of-the-box templates and reusable components make Access Services the fastest, simplest database solution available. You can find new pre-built templates which you can start using without customization, or select templates created by your peers in the Access online community and customize them to meet your needs. You can build your databases with new modular components: new Application Parts enable you to add a set of common Access components, such as a table and form for task management, to your database in a few simple clicks. Database navigation is now simplified: you can create Navigation Forms and make your frequently used forms and reports more accessible without writing any code or logic. Create impactful forms and reports: Whether it's an inventory of your assets or a customer sales database, Access 2010 brings the innovative tools you'd expect from Microsoft Office. Access Services helps you easily spot trends and add emphasis to your data, quickly create coordinating database forms and reports, and bring the Web into your database. Obtain a centralized landing pad for your data: Access 2010 offers easy ways to bring your data together and help increase work quality. New technologies help break down barriers so you can share and work together on your databases, making you or your team more efficient and productive. Add automation and complex expressions: If you need a more robust database design, such as preventing record deletion if a specific condition is met, or if you need to create calculations to forecast your budget, Access 2010 empowers you to be your own developer. The enhanced Expression Builder greatly simplifies your expression-building experience with IntelliSense. With the revamped Macro Designer, it's now even easier for you to add basic logic to your database. New Data Macros allow you to attach logic to your data, centralizing the logic on the table, not in the objects that update your data.
    Key features of Access Services 2010:
    - Access database content through a web browser: The newly added Access Services on Microsoft SharePoint Server 2010 enables you to make your databases available on the Web with new web databases. Users without an Access client can open web forms and reports via a browser, and changes are automatically synchronized.
    - Simplify how you access the features you need: The Ribbon, improved in Access 2010, helps you access commands even more quickly by enabling you to customize or create your own tabs. The new Microsoft Office Backstage view replaces the traditional File menu to provide one central, organized location for all of your document management tasks.
    - Codeless navigation: Use professional-looking, web-like navigation forms to make frequently used forms and reports more accessible without writing any code or logic.
    - Easily reuse Access items in other databases: Use Application Parts to add pre-built Access components for common tasks to your database in a few simple clicks. You can also package common database components, such as data entry forms and reports for task management, and reuse them across your organization or other databases.
    - Simplified formatting: By using Office themes you can create coordinating, professional forms and reports across your database. Simply select a familiar, great-looking Office theme, or design your own, and apply it to your database. Newly created Access objects will automatically match your chosen theme.

    Read the article

  • iPhone Options for reading item from XML?

    - by fuzzygoat
    I am accessing this data from a web server using NSURL. What I am trying to decide is: should I read this as XML, or should I just use NSScanner and rip out the [data] bit I need? I have looked around the web for examples of extracting fields from XML on the iPhone, but it all seems a bit overkill for what I need. Can anyone make any suggestions or point me in the right direction? In an ideal world I would really like to just specify [data] and get back the string "2046 3433 5674 3422 4456 8990 1200 5284".

    <!DOCTYPE tubinerotationdata>
    <turbine version="1.0">
      <status version="1.0" result="200">OK</status>
      <data version="1.0">
        2046 3433 5674 3422 4456 8990 1200 5284
      </data>
    </turbine>

    Any comments / ideas are much appreciated. gary
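    For a fixed, trusted feed like this one, the NSScanner route is only a few lines. A sketch (the function name is made up; NSXMLParser would be the safer choice if the format ever changes):

    ```objc
    #import <Foundation/Foundation.h>

    // Pull the text between <data ...> and </data> out of the response string.
    static NSString *DataStringFromXML(NSString *xml) {
        NSScanner *scanner = [NSScanner scannerWithString:xml];
        NSString *value = nil;

        [scanner scanUpToString:@"<data" intoString:NULL];   // find the element
        [scanner scanUpToString:@">" intoString:NULL];       // skip its attributes
        [scanner scanString:@">" intoString:NULL];
        [scanner scanUpToString:@"</data>" intoString:&value];

        return [value stringByTrimmingCharactersInSet:
                [NSCharacterSet whitespaceAndNewlineCharacterSet]];
    }

    // Usage: DataStringFromXML(responseString)
    //        -> @"2046 3433 5674 3422 4456 8990 1200 5284"
    ```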

    Read the article

  • Mathematica - Import CSV and process columns?

    - by Casey
    I have a CSV file that is formatted like: 0.0023709,8.5752e-007,4.847e-008 and I would like to import it into Mathematica and then have each column separated into a list so I can do some math on the selected column. I know I can import the data with: Import["data.csv"] then I can separate the columns with this: StringSplit[data[[1, 1]], ","] which gives: {"0.0023709", "8.5752e-007", "4.847e-008"} The problem now is that I don't know how to get the data into individual lists and also Mathematica does not accept scientific notation in the form 8.5e-007. Any help in how to break the data into columns and format the scientific notation would be great. Thanks in advance.
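    A sketch of one way to handle both problems at once, assuming the file really uses plain "e"-style exponents (Mathematica's own form is 8.5752*^-7, which is what the replacement below produces before ToExpression evaluates it):

    ```mathematica
    (* Import rows, convert "8.5752e-007"-style fields to numbers, then
       transpose so each column becomes its own list. *)
    raw = Import["data.csv", "CSV"];

    toNumber[s_String] := ToExpression[StringReplace[s, {"e" -> "*^", "E" -> "*^"}]];
    toNumber[x_?NumericQ] := x;   (* leave fields that already imported as numbers *)

    rows = Map[toNumber, raw, {2}];
    cols = Transpose[rows];

    col2 = cols[[2]]   (* e.g. the second column, ready for further math *)
    ```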

    Read the article

  • JTable.setRowHeight prevents me from adding more rows

    - by Brent Parker
    I'm working on a pretty simple Java app in order to learn more about JTables, TableModels, and custom cell renderers. The table is a simple table with 8 columns containing only text. When you click an "add" button, a dialog pops up and lets you enter the data for the columns. Now to my problem: one of the columns (the last one) should allow multiple lines of text. I'm already putting HTML into the field, but it is not wrapping. I did some research and looked into JTable#setRowHeight(). However, once I use setRowHeight, I can no longer add rows to the table: the data is put into the table model, but it does not show in the table. If I remove the setRowHeight line, then it adds data just fine. Is there another step to adding data to my data model that I'm missing? Thanks a lot!
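    For reference, a minimal runnable sketch of the pattern in question: rows are added through a DefaultTableModel, and the height is set per row after the row exists, rather than with a single global setRowHeight call. The column names and sizes here are made up:

    ```java
    import javax.swing.*;
    import javax.swing.table.DefaultTableModel;

    public class RowHeightDemo {
        public static void main(String[] args) {
            SwingUtilities.invokeLater(() -> {
                DefaultTableModel model = new DefaultTableModel(
                        new Object[]{"Name", "Notes"}, 0);
                JTable table = new JTable(model);

                // Add a row through the model; the table listens to the model
                // and picks the change up automatically.
                model.addRow(new Object[]{"Task 1", "<html>line one<br>line two</html>"});

                // Set the height of that specific row (instead of a global
                // setRowHeight call) so multi-line HTML has room to render.
                int lastRow = table.getRowCount() - 1;
                table.setRowHeight(lastRow, 40);

                JFrame f = new JFrame("Row height demo");
                f.add(new JScrollPane(table));
                f.setDefaultCloseOperation(JFrame.EXIT_ON_CLOSE);
                f.pack();
                f.setVisible(true);
            });
        }
    }
    ```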

    Read the article

  • Interaction between Java and Android

    - by Grasper
    I am currently trying to research how to use Android with an existing Java-based system. Basically, I need to communicate to and from an Android application. The system currently passes object data from computer to computer using ActiveMQ as the JMS provider. One of the computers has a display which shows object data to the user. What we want to do now is use a phone (running Android) as another way to show this object data to a user with wifi/network access. Ideally we would like a native application on the Android device that would listen to the ActiveMQ topic, publish to another topic, and read/write/display the object data, but from the research I have done I am not sure if this is possible. What are some other ways to approach this problem? The Android phone needs to be able to send and receive data. I have been using the Android emulator for testing.
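    One route that avoids running a full JMS client on the phone is to talk to the broker over ActiveMQ's STOMP connector with a plain socket. A rough sketch, assuming STOMP is enabled on the broker's default port 61613; the host, credentials and topic name are placeholders, and frame parsing and error handling are omitted:

    ```java
    import java.io.*;
    import java.net.Socket;
    import java.nio.charset.StandardCharsets;

    // Subscribe to an ActiveMQ topic over STOMP from plain Java.
    public class StompSubscriber {
        public static void main(String[] args) throws IOException {
            try (Socket socket = new Socket("broker.example.com", 61613)) {
                OutputStream out = socket.getOutputStream();
                BufferedReader in = new BufferedReader(
                        new InputStreamReader(socket.getInputStream(), StandardCharsets.UTF_8));

                // STOMP frames are text blocks terminated by a NUL byte.
                send(out, "CONNECT\nlogin:guest\npasscode:guest\n\n");
                send(out, "SUBSCRIBE\ndestination:/topic/objectData\nack:auto\n\n");

                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line);   // raw frames; parse as needed
                }
            }
        }

        private static void send(OutputStream out, String frame) throws IOException {
            out.write(frame.getBytes(StandardCharsets.UTF_8));
            out.write(0);                        // NUL terminator ends the frame
            out.flush();
        }
    }
    ```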

    Read the article

  • What happens to date-times and booleans when using DbLinq with SQLite?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm imagining a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not so bad. If you have any experience with this or a similar scenario, what are your "lessons learned"?
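    If the generated classes do come back with string- and int-typed properties, one workaround (a sketch; the class and property names here are hypothetical, not anything DbLinq actually generates) is a partial-class layer that exposes strongly typed views over the raw columns instead of a full second set of classes:

    ```csharp
    using System;
    using System.Globalization;

    // Hypothetical generated half of the class (normally produced by the ORM):
    public partial class Order
    {
        public string CreatedOnRaw { get; set; }  // stored as TEXT in SQLite
        public int IsActiveRaw { get; set; }      // stored as INTEGER 0/1
    }

    // Hand-written half adding strongly typed views over the raw columns.
    public partial class Order
    {
        public DateTime CreatedOn
        {
            get { return DateTime.Parse(CreatedOnRaw, CultureInfo.InvariantCulture); }
            set { CreatedOnRaw = value.ToString("o", CultureInfo.InvariantCulture); }
        }

        public bool IsActive
        {
            get { return IsActiveRaw != 0; }
            set { IsActiveRaw = value ? 1 : 0; }
        }
    }
    ```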

    Read the article

  • How the kernel gives seg. fault for a scenario like this?

    - by bala1486
    I have a doubt about accessing invalid data. How will the OS cause a segmentation fault in a scenario like this? Suppose a data segment has some 100 bytes. It will be mapped and a page table entry will be created, but the page size is 4K. Assume the data segment is aligned with the page boundary. First, consider accessing valid data within the 100 bytes; now the page table entry is in the TLB. Next, if you try to access invalid data between offset 100 and 4K, the entry is still there in the page table, so will the access to the invalid data be allowed? Thanks, Bala
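    The situation is easy to demonstrate: protection is enforced at page granularity, so an access past the end of the "valid" 100 bytes but still inside the same mapped page does not fault. A small sketch using an explicit one-page anonymous mapping:

    ```c
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>

    int main(void) {
        // Map exactly one 4K page and pretend only the first 100 bytes are "valid".
        char *page = mmap(NULL, 4096, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (page == MAP_FAILED) { perror("mmap"); return 1; }

        memset(page, 'x', 100);          // the 100 "valid" bytes

        // Reading between offset 100 and 4095 stays inside the mapped page:
        // the MMU sees a valid, readable page table entry, so no fault occurs,
        // even though the program considers this data invalid.
        printf("byte 2000 = %d\n", page[2000]);

        // Reading at offset 4096 touches the next, unmapped page: the page
        // table has no valid entry there, so the kernel delivers SIGSEGV.
        // printf("byte 4096 = %d\n", page[4096]);   // would crash if uncommented

        return 0;
    }
    ```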

    Read the article

  • Importing an Excel WorkSheet into a Datatable

    - by Nick LaMarca
    I have been asked to create import functionality in my application. I am getting an Excel worksheet as input. The worksheet has column headers followed by data. The users want to simply select an .xls file from their system, click upload, and have the tool delete the table in the database and add this new data. I thought the best way would be to bring the data into a DataTable object and do a foreach over every row in the DataTable, inserting row by row into the db. My question is: can anyone give me code to open an Excel file, know what line the data starts on in the file, and import the data into a DataTable object?
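    A minimal sketch of the usual OleDb route for .xls files (the connection string, sheet name and file path are illustrative; HDR=YES assumes the first worksheet row holds the column headers, so the data starts on the row after it):

    ```csharp
    using System.Data;
    using System.Data.OleDb;

    class ExcelImport
    {
        static DataTable LoadWorksheet(string path)
        {
            // HDR=YES tells the Jet provider that the first row holds column headers,
            // so the data itself starts on the next row.
            string connStr = "Provider=Microsoft.Jet.OLEDB.4.0;" +
                             "Data Source=" + path + ";" +
                             "Extended Properties=\"Excel 8.0;HDR=YES;IMEX=1\"";

            var table = new DataTable();
            using (var conn = new OleDbConnection(connStr))
            using (var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn))
            {
                adapter.Fill(table);   // one DataRow per worksheet data row
            }
            return table;
        }
    }
    ```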

    Read the article

  • Is OO design's strength in semantics or encapsulation?

    - by Phil H
    Object-oriented design (OOD) combines data and its methods. This, as far as I can see, achieves two great things: it provides encapsulation (so I don't care what data there is, only how I get the values I want) and semantics (it relates the data together with names, and its methods consistently use the data as originally intended). So where does OOD's strength lie? In contrast, functional programming attributes the richness to the verbs rather than the nouns, and so both encapsulation and semantics are provided by the methods rather than the data structures. I work with a system that is on the functional end of the spectrum, and continually long for the semantics and encapsulation of OO. But I can see that OO's encapsulation can be a barrier to flexible extension of an object. So at the moment, I see semantics as the greater strength. Or is encapsulation the key to all worthwhile code?

    Read the article

  • How to retain headers for all the pages of an exported pdf in php?

    - by udaya
    Hi, I am exporting data from a PHP page to PDF. When the data exceeds the page limit, the header is not repeated on the following pages. The function where I call the export to PDF is:

    function changeDetails()
    {
        $bType = $this->input->post('textvalue');
        if ($bType == "pdf")
        {
            $this->load->library('table');
            $this->load->plugin('to_pdf');
            $data['countrytoword'] = $this->AddEditmodel1->export();
            $this->table->set_heading('Country', 'State', 'Town', 'Name');
            $out = $this->table->generate($data['countrytoword']);
            $html = $this->load->view('newpdf', $data, true);
            pdf_create($html, $cur_date);
        }
    }

    My view page shows the headers Name, Country, State, Town, but the exported PDF comes out like this:

    page 1:
    Name     Country   State      Town
    udaya    india     Tamilnadu  kovai
    chandru  srilanka  columbo    aaaaa

    page 2:
    vivek    england   gggkj      gjgjkj

    On page 2 I don't get the headers Name, Country, State and Town.
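    If pdf_create() is backed by an HTML-to-PDF renderer such as DOMPDF (an assumption; the question does not say which library the to_pdf plugin wraps), the usual fix is to put the headings in a real <thead> in the newpdf view and mark it as a repeating header group:

    ```html
    <style>
      /* Many HTML-to-PDF engines (e.g. DOMPDF) repeat a table-header-group
         at the top of every page the table spans. */
      thead { display: table-header-group; }
      tfoot { display: table-footer-group; }
    </style>

    <table>
      <thead>
        <tr><th>Name</th><th>Country</th><th>State</th><th>Town</th></tr>
      </thead>
      <tbody>
        <!-- data rows generated from $countrytoword -->
      </tbody>
    </table>
    ```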

    Read the article

  • SSIS web service task parsing result.

    - by dbengals
    I have an SSIS (2005) package that uses the Web Service task to download to a file destination. The file contains a string of XML data. After the download, the file looks like this:

    <?xml version="1.0" encoding="utf-16"?>
    <string>--here is XML data with escaped characters--</string>

    My thought was that I could then use the XML Source data flow component to pull the <string> data, but when I set this up the XML Source will not read <string> as a column. It will generate an XSD and everything seems normal, but no luck seeing the column. Any ideas on getting this to work? Or would there be a better way to pull the data out of the file generated by the web service? Thanks.
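    One option to try (a sketch; whether it applies depends on how the payload inside <string> is escaped) is to add an SSIS XML Task with an XSLT operation between the Web Service task and the data flow. A stylesheet like the one below writes out just the text content of <string>, i.e. the unescaped inner XML, to a new file that the XML Source can then read directly:

    ```xml
    <?xml version="1.0" encoding="utf-16"?>
    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <!-- Text output: the entity-escaped payload inside <string> comes out
           as plain characters, i.e. as real XML markup in the result file. -->
      <xsl:output method="text" encoding="utf-16"/>
      <xsl:template match="/string">
        <xsl:value-of select="."/>
      </xsl:template>
    </xsl:stylesheet>
    ```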

    Read the article

  • Issue with JSON and jQuery

    - by Jason N. Gaylord
    I'm calling a web service and getting back the following data in JSON format: ["OrderNumber":"12345","CustomerId":"555"]. In my web service success method, I'm trying to read both values:

    $.ajax({
        type: "POST",
        url: "MyService.asmx/ServiceName",
        data: "{}",
        contentType: "application/json; charset=utf-8",
        dataType: "json",
        success: function(msg) {
            var data = msg.d;
            var rtn = "";
            $.each(data, function(list) {
                rtn = rtn + this.OrderNumber + ", " + this.CustomerId + "<br/>";
            });
            rtn = rtn + "<br/>" + data;
            $("#test").html(rtn);
        }
    });

    but I'm getting a bunch of "undefined, undefined" rows followed by the correct JSON string. Any idea why? I've tried using the eval() method, but that didn't help; I got an error message about ']' being expected.
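    For comparison, a sketch of the shape jQuery can iterate over. The bracketed value in the question is not valid JSON (square brackets hold arrays, not key/value pairs); an array of objects wrapped in the ASMX-style "d" property, as assumed below, is what $.each expects:

    ```javascript
    // A response shaped like this is iterable (requires jQuery on the page):
    var msg = { d: [ { OrderNumber: "12345", CustomerId: "555" },
                     { OrderNumber: "12346", CustomerId: "777" } ] };

    $.each(msg.d, function (index, order) {
        // "order" (and "this") is the current element of the array
        console.log(order.OrderNumber + ", " + order.CustomerId);
    });
    ```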

    Read the article

  • Updating a TableView with a WebService and Saving to CoreData

    - by jcady
    I am working on a project where I have a table view that is currently updated via a web request that returns XML. I implemented -(int)numberOfRowsInTableView:(NSTableView*)tv and -(id)tableView:(NSTableView *)tv objectValueForTableColumn:(NSTableColumn*)tableColumn row:(int)row in my XML parsing class, and the table is updated with the data that is pulled down from the server. I want to save the data that is pulled down using Core Data, so that the table can be saved and loaded. Then later, on application start, when the web request is made, it should only add data that is not already present. (The XML is sorted by release date, so later I will check which release dates are not already in the Core Data store and only load the newer entries.) How would I go about implementing this? I am a very new Cocoa developer, but have gone through the entire Hillegass book. Thanks so much.

    Read the article

  • Connecting to multiple firebird Databases via Delphi

    - by Branden
    I am integrating a system with two other applications, one using a Firebird database and the other, BIS, using ADO. My Delphi application uses Firebird. I need to read data from my database and insert it into both the BIS database and the other application's Firebird database. I have created separate data modules for each. Sending data to the ADO database works fine, but when writing to the other Firebird DB (with my own DB still open) I get strange errors. I have managed to isolate the problem to the second Firebird DB; small data writes seem fine. The data structures are completely different, so I am unable to use a sync tool. Is there a way to overcome this by using multithreading, or by giving each Firebird instance its own memory space?

    Read the article

  • Would an ORM have any way of determining that a SQLite column contains date-times or booleans?

    - by DanM
    I've been thinking about using SQLite for my next project, but I'm concerned that it seems to lack proper datetime and bit data types. If I use DbLinq (or some other ORM) to generate C# classes, will the data types of the properties be "dumbed down"? Will date-time data be placed in properties of type string or double? Will boolean data be placed in properties of type int? If yes, what are the implications? I'm envisioning a scenario where I need to write a whole second layer of classes with more specific data types and do a bunch of transformations and casts, but maybe it's not as bad as I fear. If you have any experience with this or a similar scenario, how did you handle it?

    Read the article

  • Reused UIWebView showing previous loaded content for a brief second on iPhone

    - by Roi
    In one of my apps I reuse a webview. Each time the user enters a certain view, I reload cached data into the webview using the method - (void)loadData:(NSData *)data MIMEType:(NSString *)MIMEType textEncodingName:(NSString *)encodingName baseURL:(NSURL *)baseURL and I wait for the callback - (void)webViewDidFinishLoad:(UIWebView *)webView. In the meantime I hide the webview and show a 'loading' label; only when I receive webViewDidFinishLoad do I show the webview. Many times I see the previous data that was loaded into the webview for a brief second before the new data I loaded kicks in. I already added a delay of 0.2 seconds before showing the webview, but it didn't help. Instead of solving this by adding more time to the delay, does anyone know how to solve this issue, or maybe how to clear old data from a webview without releasing and allocating it every time?

    Read the article

  • Abort a slow flush to disk after write?

    - by Therealstubot
    Is there a way to abort a Python write operation in such a way that the OS doesn't feel it's necessary to flush the unwritten data to the disk? I'm writing data to a USB device, typically many megabytes. I'm using 4096 bytes as my block size on the write, but it appears that Linux caches up a bunch of data early on and writes it out to the USB device slowly. If at some point during the write my user decides to cancel, I want the app to just stop writing immediately. I can see that there's a delay between when the data stops flowing from the application and when the USB activity light stops blinking: several seconds, up to about 10 seconds typically. I find that the app is stuck in the close() method, I assume waiting for the OS to finish writing the buffered data. I call flush() after every write, but that doesn't appear to have any impact on the delay. I've scoured the Python docs for an answer but have found nothing.
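    One pattern that keeps cancellation responsive (a sketch, with made-up paths and a placeholder cancel check) is to open the target with O_SYNC so each block actually reaches the device before the next write starts; the kernel then never accumulates a large write-back backlog for close() to drain, at the cost of raw throughput:

    ```python
    import os

    BLOCK = 4096

    def copy_with_cancel(src_path, dst_path, cancelled):
        """Write block-by-block with O_SYNC so each write hits the device
        before the next one starts; cancellation then takes effect between
        blocks instead of waiting for a large cache flush in close()."""
        out_fd = os.open(dst_path, os.O_WRONLY | os.O_CREAT | os.O_SYNC, 0o644)
        try:
            with open(src_path, "rb") as src:
                while not cancelled():
                    block = src.read(BLOCK)
                    if not block:
                        break
                    os.write(out_fd, block)
        finally:
            os.close(out_fd)

    # usage (cancel flag is a placeholder):
    # copy_with_cancel("image.bin", "/media/usb/image.bin", lambda: cancel_requested)
    ```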

    Read the article

  • Android serialization: ImageView

    - by embo
    I have a simple class:

    public class Ball2 extends ImageView implements Serializable {
        public Ball2(Context context) {
            super(context);
        }
    }

    Serialization works:

    private void saveState() throws IOException {
        ObjectOutputStream oos = new ObjectOutputStream(openFileOutput("data", MODE_PRIVATE));
        try {
            Ball2 data = new Ball2(Game2.this);
            oos.writeObject(data);
            oos.flush();
        } catch (Exception e) {
            Log.e("write error", e.getMessage(), e);
        } finally {
            oos.close();
        }
    }

    But deserialization:

    private void loadState() throws IOException {
        ObjectInputStream ois = new ObjectInputStream(openFileInput("data"));
        try {
            Ball2 data = (Ball2) ois.readObject();
        } catch (Exception e) {
            Log.e("read error", e.getMessage(), e);
        } finally {
            ois.close();
        }
    }

    fails with an error:

    03-24 21:52:43.305: ERROR/read error(1948): java.io.InvalidClassException: android.widget.ImageView; IllegalAccessException

    How do I deserialize the object correctly?
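    Android Views (ImageView included) are not designed to be serialized, which is why readObject trips over ImageView's internals. A common alternative, sketched below with assumed fields, is to serialize a plain state object for the ball and rebuild the ImageView from it on load:

    ```java
    import java.io.Serializable;

    // Plain state object: safe to write with ObjectOutputStream.
    public class Ball2State implements Serializable {
        private static final long serialVersionUID = 1L;

        public float x;
        public float y;
        public int drawableResId;   // which image the ball uses

        public Ball2State(float x, float y, int drawableResId) {
            this.x = x;
            this.y = y;
            this.drawableResId = drawableResId;
        }
    }

    // On load (inside an Activity), recreate the view from the state:
    //   Ball2State s = (Ball2State) ois.readObject();
    //   Ball2 ball = new Ball2(this);
    //   ball.setImageResource(s.drawableResId);
    //   ball.setX(s.x);   // setX/setY need API 11+; position via layout params otherwise
    //   ball.setY(s.y);
    ```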

    Read the article

  • Need help with SQL table structure transformation

    - by Arnis L.
    I need to perform an update/insert while changing the structure of the incoming data. Think of shops that have a defined work time for each day of the week. Hopefully this explains better what I'm trying to achieve:

    worktimeOrigin table:
    columns: shop_id, day, val
    data:
    123 | "monday"    | "9:00 AM - 18:00"
    123 | "tuesday"   | "9:00 AM - 18:00"
    123 | "wednesday" | "9:00 AM - 18:00"

    shop table:
    columns: id, worktimeDestination.id

    worktimeDestination table:
    columns: id, monday, tuesday, wednesday

    My aim: I would like to insert data from the worktimeOrigin table into worktimeDestination and point each shop at the appropriate worktimeDestination row.

    shop table data (updated):
    123 | 1

    worktimeDestination table data (inserted):
    1 | "9:00 AM - 18:00" | "9:00 AM - 18:00" | "9:00 AM - 18:00"

    Any ideas how to do that?
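    A sketch of one way to do the pivot in plain SQL (assuming one destination row per shop; the destination id below simply reuses the shop id for clarity, and the foreign-key column on shop is assumed to be called worktimeDestination_id, so adjust names and id generation to your schema):

    ```sql
    -- 1) Pivot the per-day rows into one worktimeDestination row per shop.
    INSERT INTO worktimeDestination (id, monday, tuesday, wednesday)
    SELECT o.shop_id,
           MAX(CASE WHEN o.day = 'monday'    THEN o.val END) AS monday,
           MAX(CASE WHEN o.day = 'tuesday'   THEN o.val END) AS tuesday,
           MAX(CASE WHEN o.day = 'wednesday' THEN o.val END) AS wednesday
    FROM worktimeOrigin o
    GROUP BY o.shop_id;

    -- 2) Point each shop at its new destination row.
    UPDATE shop
    SET worktimeDestination_id = id
    WHERE id IN (SELECT shop_id FROM worktimeOrigin);
    ```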

    Read the article

  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, in which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee
    When I was a DBA, the first thing I did when I sat down at my desk at work was check that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one, I was once bitten by the fact that Database Mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so, to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own PowerShell scripts to do it. Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"
    Sadly I've heard this line more often than I would have liked. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also from an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?
    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called the Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called the Recovery Time Objective.
    Again, if you go ask your customer how long an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when an emergency strikes.

    A backup is nothing more than an untested restore
    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - which, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good, and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server, though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable trace flags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here. Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner or later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time, knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?
    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to back up to a SAN volume, i.e., a drive that actually lives in the SAN and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files over the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly and, if at all possible, to a remote target in a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.
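    As a minimal illustration of the "untested restore" point (a sketch: the database, logical file and path names are placeholders, and the options shown are only one sensible combination):

    ```sql
    -- Take a full backup with a checksum.
    BACKUP DATABASE SalesDB
    TO DISK = N'X:\Backups\SalesDB_full.bak'
    WITH CHECKSUM, COMPRESSION, INIT;

    -- Prove the backup is usable: verify the media, then restore it under
    -- another name on a test/restore server.
    RESTORE VERIFYONLY
    FROM DISK = N'X:\Backups\SalesDB_full.bak'
    WITH CHECKSUM;

    RESTORE DATABASE SalesDB_Verify
    FROM DISK = N'X:\Backups\SalesDB_full.bak'
    WITH MOVE N'SalesDB_Data' TO N'Y:\Data\SalesDB_Verify.mdf',
         MOVE N'SalesDB_Log'  TO N'Y:\Log\SalesDB_Verify.ldf',
         RECOVERY;

    -- Run the full consistency check on the restored copy...
    DBCC CHECKDB (SalesDB_Verify) WITH NO_INFOMSGS, ALL_ERRORMSGS;

    -- ...keeping only the cheaper physical-only check on production
    -- (PHYSICAL_ONLY is the option's spelling in DBCC CHECKDB syntax).
    -- DBCC CHECKDB (SalesDB) WITH PHYSICAL_ONLY;
    ```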

    Read the article

  • How can I populate highchart jQuery plugin dynamically from MVC action?

    - by Anders Svensson
    I'm trying out the Highcharts jQuery plugin for creating charts of data in an MVC application, but I need to get the data for the chart dynamically from an action method. How can I do that? Taking the example from the Highcharts site (http://highcharts.com/documentation/how-to-use):

    var chart1; // globally available
    $(document).ready(function() {
        chart1 = new Highcharts.Chart({
            chart: {
                renderTo: 'chart-container-1',
                defaultSeriesType: 'bar'
            },
            title: {
                text: 'Fruit Consumption'
            },
            xAxis: {
                categories: ['Apples', 'Bananas', 'Oranges']
            },
            yAxis: {
                title: {
                    text: 'Fruit eaten'
                }
            },
            series: [{
                name: 'Jane',
                data: [1, 0, 4]
            }, {
                name: 'John',
                data: [5, 7, 3]
            }]
        });
    });

    How can I get the data in there dynamically from the action method? Someone suggested I might use JSON, but couldn't specify how. If that is the way to go, I would really appreciate a simple and specific example, because I don't know much about JSON. Any help appreciated!
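    One way to wire this up (a sketch: the controller name, action name and URL are made up, and it assumes an ASP.NET MVC action returning a JsonResult plus jQuery's $.getJSON on the client) is to have the action return the series array as JSON and build the chart in the callback:

    ```javascript
    // Controller (C#), sketched here in a comment for context:
    //   public ActionResult FruitData() {
    //       var series = new[] {
    //           new { name = "Jane", data = new[] { 1, 0, 4 } },
    //           new { name = "John", data = new[] { 5, 7, 3 } }
    //       };
    //       return Json(series, JsonRequestBehavior.AllowGet);
    //   }

    // View: fetch the series from the action, then create the chart.
    var chart1;
    $(document).ready(function () {
        $.getJSON('/Home/FruitData', function (series) {
            chart1 = new Highcharts.Chart({
                chart: { renderTo: 'chart-container-1', defaultSeriesType: 'bar' },
                title: { text: 'Fruit Consumption' },
                xAxis: { categories: ['Apples', 'Bananas', 'Oranges'] },
                yAxis: { title: { text: 'Fruit eaten' } },
                series: series   // [{ name: 'Jane', data: [1,0,4] }, ...]
            });
        });
    });
    ```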

    Read the article

  • how to remove a few lines from a Unicode registry file using batch commands in Windows?

    - by Cosmin
    Hi. I have a program that generates some data in the registry. I save it with "reg export HKCU\Software\ProgramName\Data data.reg" (Unicode format). I need to take it to another computer and import it there so the program on that computer can use the data. But I have to remove some text lines from data.reg first. The lines are easy to find because they contain certain strings. Right now I'm doing this manually (using WordPad) every few days, but maybe there is another way... Oh, and I can't install other programs on these computers (access is restricted), so I have to use batch/cmd files. What I have tried so far: redirecting the export to "con", but that only displays the text, it doesn't end up in a variable; and using "for /F ...", but that works only with ANSI and removes blank lines. Can somebody please help me? Thank you.
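    If PowerShell happens to be present on those machines (built into Windows 7 / Server 2008 R2 and later; no installation needed), it can be called from a plain .cmd file and handles the UTF-16 encoding that trips up for /F. A sketch, where "BadString" is a placeholder for the marker text on the lines to drop:

    ```bat
    @echo off
    rem Filter the exported .reg file without touching its UTF-16 encoding,
    rem then import the cleaned copy.
    powershell -NoProfile -Command ^
      "Get-Content -Encoding Unicode 'data.reg' | Where-Object { $_ -notmatch 'BadString' } | Set-Content -Encoding Unicode 'filtered.reg'"

    reg import filtered.reg
    ```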

    Read the article
