Search Results

Search found 19554 results on 783 pages for 'xml pull parser'.


  • How to delete characters and append strings?

    - by devin250
    I am adding a new record to an XML file. First I query all existing items and store the count in an int (int number = query.Count();), then increment it by one (number = number + 1;). Now I want to format this value as a string in the "N00000000" format, where the number occupies the last positions. Pseudocode:

        // declare the format string
        string format = "N00000000";
        // calculate the length of the number string
        int length = number.ToString().Length;
        // delete as many characters from right to left as the length of the number string
        ???
        // finally concatenate both strings with the + operator
        ???

    Help please.
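
    A minimal C# sketch of one way to do this: the trimming step can be skipped entirely by zero-padding the number instead. This assumes the target format really is a literal "N" followed by eight digits, as "N00000000" suggests:

        // Next record number: count of existing items plus one.
        int number = query.Count() + 1;

        // "D8" zero-pads to eight digits, so 42 becomes "N00000042".
        string formatted = "N" + number.ToString("D8");

    Alternatively, the delete-then-concatenate approach from the pseudocode maps to format.Substring(0, format.Length - number.ToString().Length) + number.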


  • Why doesn't Microsoft release a 'proper' AJAX grid for ASP.Net

    - by Maxim Gershkovich
    Why doesn't Microsoft release a 'proper' AJAX grid for ASP.Net, either as part of Visual Studio or the AJAX Control Toolkit? Has there been any discussion that anyone is aware of regarding this issue? Also, does anyone have any open source suggestions for 'proper' AJAX gridviews? So far I have found one.... http://dotnetslackers.com/projects/AjaxDataControls/Default.aspx PS: By proper I mean a grid that actually uses XML responses rather than the nasty HTML/JavaScript-based injection that is the current nastiness of the GridView (even in VS 2010).


  • Error when trying to use hibernate annotations.

    - by Wilhelm
    The error I'm receiving is listed here. This is my HibernateUtil.java:

        package com.rosejapan;

        import org.hibernate.SessionFactory;
        import org.hibernate.cfg.AnnotationConfiguration;

        public class HibernateUtil {
            private static final SessionFactory sessionFactory;

            static {
                try {
                    // Create the SessionFactory from hibernate.cfg.xml
                    sessionFactory = new AnnotationConfiguration().configure().buildSessionFactory();
                } catch (Throwable e) {
                    System.err.println("Initial sessionFactory creation failed. " + e);
                    throw new ExceptionInInitializerError(e);
                }
            }

            public static SessionFactory getSessionFactory() {
                return sessionFactory;
            }
        }

    Everything looks all right... I've already included log4j-boot.jar in the CLASSPATH, but that didn't resolve my problem.


  • Bind multiple request parameters to one object in Spring 3

    - by Max
    Hi, I can't figure out a way to bind several arguments and headers to one request parameter using annotations in Spring 3. For example, let's say I'm getting this request:

        Headers: Content-type: text/plain;
        POST Body: Name: Max

    Now I want it all to mysteriously bind to this object:

        class NameInfo { String name; }

    using some code like this:

        String getName() {
            if ("text/plain".equals(headers.get("content-type"))) {
                return body.get("name");
            } else if ("xml".equals(headers.get("content-type"))) {
                return parseXml(body).get("name");
            } else ...
        }

    so that in the end I would be able to use:

        @RequestMapping(method = RequestMethod.POST)
        void processName(@RequestAttribute NameInfo name) { ... }

    Is there a way to achieve something similar to what I need? Thanks in advance.


  • PDF Generation Help needed

    - by Kobojunkie
    Hi, I am brand new to PDF generation and rendering, but I have a project to create a PDF template system that allows users to save templates to a database, and later generate a PDF document using the template and values from my database. Questions: a) Is there a PDF tool out there that can help me with this, and documentation I can study to learn it? b) Are there free tools out there for this? c) How do I create a PDF template? XML? Thanks in advance!


  • How to apply a free third party CA and set up Tomcat SSL with it

    - by lenny
    These days I tried to apply for a free third-party CA certificate (www.cacert.org & www.freeca.cn) and then set up Tomcat SSL with it. My purpose is to eliminate the "Certificate Error" page when accessing https://... from a client browser, but it's a little hard for me to get around it. My steps to apply for a free certificate from www.freeca.cn: I used keytool to generate a certificate request:

        keytool -genkey ...   // Generate a key
        keytool -certreq ...  // Generate a cert request file

    Then I took the code from the request file, pasted it onto www.freeca.cn to generate a .cer file, and imported that file:

        keytool -import -alias abc -file MyABC.cer -keystore mykeystorefile.store

    And then I set up mykeystorefile.store in Tomcat's /conf/server.xml, but it didn't work; the "Certificate Error" page still pops up when trying to access https://.... Can someone help me? Thanks


  • TreeView control problem with RenderControl function

    - by vikas
    I am using the TreeView control in ASP.NET. I have placed this control inside a panel. The tree control is completely bound (we don't want populate-on-demand) to an XmlDataSource during a callback, and then I call Panel.RenderControl to return the response (HTML) to the client-side callback handler. Problem: the tree expand/collapse (on click of the plus sign) causes a postback, whereas when I bind a tree with XML normally during a postback, without using RenderControl on the container control, the expand/collapse (on click of the plus sign) is handled on the client side.


  • How can I pass a raw System.Drawing.Image to an .ashx?

    - by Mike C
    I am developing an application that stores images as Base64 strings in XML files. I also want to allow the user to crop the image before saving it to the file, preferably all in memory, without having to save a temp file and then delete it afterwards. In order to display the newly uploaded image, I need to create an HTTP handler that I can bind the asp:Image to. The only examples for doing this online require passing the .ashx an ID and then pulling the image from a DB or other data store. Is it possible to somehow pass the raw data to the .ashx in order to get back the image? Thanks, Mike
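
    A minimal sketch of one workaround, assuming the upload/crop page stashes the Base64 data in session state under a hypothetical "PendingImage" key, so the handler needs no ID and no temp file:

        // ImageHandler.ashx.cs -- serves image bytes stashed in session by the upload page.
        using System;
        using System.Web;
        using System.Web.SessionState;

        public class ImageHandler : IHttpHandler, IRequiresSessionState
        {
            public void ProcessRequest(HttpContext context)
            {
                // The upload page is assumed to have stored the cropped image here.
                string base64 = context.Session["PendingImage"] as string;
                if (base64 == null) { context.Response.StatusCode = 404; return; }

                byte[] bytes = Convert.FromBase64String(base64);
                context.Response.ContentType = "image/png"; // adjust to the actual format
                context.Response.BinaryWrite(bytes);
            }

            public bool IsReusable { get { return false; } }
        }

    The asp:Image's ImageUrl can then point at ImageHandler.ashx with no query string at all.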


  • Definitive best practice for injecting, manipulating AJAX data

    - by Nic
    Ever since my foray into AJAX, I've always used the "whatever works" method of manipulating AJAX data returns. I'd like to know what the definitive and modern best practice is for handling data. Is it best practice to generate the HTML via the server script and introduce the returned data on the onComplete function? Should XML/JSON be looked at first before anything? How about manipulating the returned data? Using .live() doesn't seem like it is the most efficient way. I've never seen a definitive answer to this question. Your expertise is much appreciated.


  • How can I concatenate a subquery result field into the parent query?

    - by Pure.Krome
    Hi folks, DB: SQL Server 2008. I have a really (fake) groovy query like this:

        SELECT CarId, NumberPlate,
            (SELECT Owner FROM Owners b WHERE b.CarId = a.CarId) AS Owners
        FROM Cars a
        ORDER BY NumberPlate

    And this is what I'm trying to get:

        => 1 ABC123 John, Jill, Jane
        => 2 XYZ123 Fred
        => 3 SOHOT Jon Skeet, ScottGu

    So, I tried using AS [Text()] ... FOR XML PATH('') but that was including weird encoded characters (e.g. carriage return) ... so I'm not 100% happy with that. I also tried to see if there's a COALESCE solution, but all my attempts failed. So - any suggestions?


  • Haskell as REST server

    - by Dev er dev
    I would like to try Haskell on a smallish project which should be well suited to it. I would like to use it as a backend to a small AJAX application. The Haskell backend should be able to do authentication (basic, form, whatever, ...), keep track of the user session (not much data there except for the username) and dispatch requests to handlers based on URI and request type. It should also be able to serialize the response to both XML and JSON formats, depending on a request parameter. I suppose the handlers are ideally suited for Haskell, since the service is basically stateless, but I don't know where to start for the rest of the story. Searching Hackage didn't give me many hints.


  • Call Babel .Net Obfuscator from C# Code

    - by aron
    Hello, I have a C# WinForms app that I use to create software patches for my app. I noticed that Babel .NET includes Babel.Build.dll and Babel.Code.dll. Is there a way, in my C# code, I can call Babel .NET, something like:

        ExecuteCommand(C:\Program Files\Babel\babel.exe "C:\Patch\v6\Demo\Bin\Lib.dll" --rules "C:\Patch\babelRules.xml", 600)

    Here's a common snippet to execute a command via cmd.exe in C#. However, if I just include Babel.Build.dll in my WinForms app I may be able to do it seamlessly.

        public static int ExecuteCommand(string Command, int Timeout)
        {
            int ExitCode;
            ProcessStartInfo ProcessInfo;
            Process Process;

            ProcessInfo = new ProcessStartInfo("cmd.exe", "/C " + Command);
            ProcessInfo.CreateNoWindow = true;
            ProcessInfo.UseShellExecute = false;
            Process = Process.Start(ProcessInfo);
            Process.WaitForExit(Timeout);
            ExitCode = Process.ExitCode;
            Process.Close();
            return ExitCode;
        }
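
    A minimal sketch of the same call without going through cmd.exe at all: start babel.exe directly with ProcessStartInfo (the paths and the --rules switch are the ones from the question; adjust to your install):

        // Invoke the obfuscator directly rather than via "cmd.exe /C ...".
        var startInfo = new System.Diagnostics.ProcessStartInfo(
            @"C:\Program Files\Babel\babel.exe",
            @"""C:\Patch\v6\Demo\Bin\Lib.dll"" --rules ""C:\Patch\babelRules.xml""")
        {
            CreateNoWindow = true,
            UseShellExecute = false
        };

        using (var process = System.Diagnostics.Process.Start(startInfo))
        {
            process.WaitForExit(600); // same timeout value the helper above uses
            int exitCode = process.ExitCode;
        }

    Referencing Babel.Build.dll and driving the obfuscator in-process would avoid the child process entirely, but its API is not shown in the question, so the command line route is the safer sketch.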


  • Ask: BasicConfigurator in Apache Commons Logging

    - by adisembiring
    I use log4j as the logger for my web application. In log4j, I can set the log level in log4j.properties or log4j.xml, and a logger is instantiated as follows:

        static Logger logger = Logger.getLogger(SomeClass.class);

    I initialize the log4j BasicConfigurator in a servlet using its init method. But I usually test the application using JUnit, so there I initialize the BasicConfigurator in the setUp method; after that, I test the application and I can see the log. Because I deployed the web app in WebSphere, I changed all of the logging instances to:

        private Log log = LogFactory.getLog(Foo.class);

    I don't know how to load a basic configurator using ACL (Apache Commons Logging), so I can't control the debug level in my JUnit tests. Do you have any suggestion that doesn't involve changing private Log log = LogFactory.getLog(SomeClass.class); back to static Logger logger = Logger.getLogger(SomeClass.class);?


  • Five Key Strategies in Master Data Management

    - by david.butler(at)oracle.com
    Here is a very interesting Profit Magazine article on MDM: "A recent customer survey reveals the deleterious effects of data fragmentation," by Trevor Naidoo, December 2010.

    Across industries and geographies, IT organizations have grown in complexity, whether due to mergers and acquisitions, or decentralized systems supporting functional or departmental requirements. With systems architected over time to support unique, one-off process needs, they are becoming costly to maintain, and the Internet has only further added to the complexity. Data fragmentation has become a key inhibitor in delivering flexible, user-friendly systems. The Oracle Insight team conducted a survey assessing customers' master data management (MDM) capabilities over the past two years to get a sense of where they are in terms of their capabilities. The responses, by 27 respondents from six different industries, reveal five key areas in which customers need to improve their data management in order to get better financial results.

    1. Less than 15 percent of organizations surveyed understand the sources and quality of their master data, and have a roadmap to address missing data domains. Examples of the types of master data domains referred to are customer, supplier, product, financial and site. Many organizations have multiple sources of master data with varying degrees of data quality in each source -- customer data stored in the customer relationship management system is inconsistent with customer data stored in the order management system. Imagine not knowing how many places you stored your customer information, and whether a customer's address was the most up to date in each source. In fact, more than 55 percent of the respondents in the survey manage their data quality on an ad-hoc basis. It is important for organizations to document their inventory of data sources and then profile these data sources to ensure that there is a consistent definition of key data entities throughout the organization. Some questions to ask are: How do we define a customer? What is a product? How do we define a site? The goal is to strive for one common repository for master data that acts as a cross reference for all other sources and ensures consistent, high-quality master data throughout the organization.

    2. Only 18 percent of respondents have an enterprise data management strategy to ensure that data is treated as an asset to the organization. Most respondents handle data at the department or functional level and do not have an enterprise view of their master data. The sales department may track all their interactions with customers as they move through the sales cycle, the service department is tracking their interactions with the same customers independently, and the finance department also has a different perspective on the same customer. The salesperson may not be aware that the customer she is trying to sell to is experiencing issues with existing products purchased, or that the customer is behind on previous invoices. The lack of a data strategy makes it difficult for business users to turn data into information via reports. Without the key building blocks in place, it is difficult to create key linkages between customer, product, site, supplier and financial data. These linkages make it possible to understand patterns. A well-defined data management strategy is aligned to the business strategy and helps create the governance needed to ensure that data stewardship is in place and data integrity is intact.

    3. Almost 60 percent of respondents have no strategy to integrate data across operational applications. Many respondents have several disparate sources of data with no strategy to keep them in sync with each other. Even though there is no clear strategy to integrate the data (see #2 above), the data needs to be synced and cross-referenced to keep the business processes running. About 55 percent of respondents said they perform this integration on an ad hoc basis, and in many cases, it is done manually with the help of Microsoft Excel spreadsheets. For example, a salesperson needs a report on global sales for a specific product, but the product has different product numbers in different countries. Typically, an analyst will pull all the data into Excel, manually create a cross reference for that product, and then aggregate the sales. The exact same procedure has to be followed if the same report is needed the following month. A well-defined consolidation strategy will ensure that a central cross-reference is maintained with updates in any one application being propagated to all the other systems, so that data is synchronized and up to date. This can be done in real time or in batch mode using integration technology.

    4. Approximately 50 percent of respondents spend manual efforts cleansing and normalizing data. Information stored in various systems usually follows different standards and formats, making it difficult to match the data. A customer's address can be stored in different ways using a variety of abbreviations -- for example, "av" or "ave" for avenue. Similarly, a product's attributes can be stored in a number of different ways; for example, a size attribute can be stored in inches and can also be entered as "''". These types of variations make it difficult to match up data from different sources. Today, most customers rely on manual, heroic efforts to match, cleanse, and de-duplicate data -- clearly not a scalable, sustainable model. To solve this challenge, organizations need the ability to standardize data for customers, products, sites, suppliers and financial accounts; however, less than 10 percent of respondents have technology in place to automatically resolve duplicates. It is no wonder, therefore, that we get communications about products we don't own, at addresses we don't reside, and using channels (like direct mail) we don't like. An all-too-common example of a potential challenge follows: Customers end up receiving duplicate communications, which not only impacts customer satisfaction, but also incurs additional mailing costs. Cleansing, normalizing, and standardizing data will help address most of these issues.

    5. Only 10 percent of respondents have the ability to share data that was mastered in a master data hub. Close to 60 percent of respondents have efforts in place that profile, standardize and cleanse data manually, and the output of these efforts are stored in spreadsheets in various parts of the organization. This valuable information is not easily shared with the rest of the organization and, more importantly, this enriched information cannot be sent back to the source systems so that the data is fixed at the source. A key benefit of a master data management strategy is not only to clean the data, but to also share the data back to the source systems as well as other systems that need the information. Aside from the source systems, another key beneficiary of this data is the business intelligence system. Having clean master data as input to business intelligence systems provides more accurate and enhanced reporting.

    Characteristics of Stellar MDM

    When deciding on the right master data management technology, organizations should look for solutions that have four main characteristics:

    - enterprise-grade MDM performance
    - complete technology that can be rapidly deployed and addresses multiple business issues
    - end-to-end MDM process management with data quality monitoring and assurance
    - pre-built MDM business relevant applications with data stores and workflows

    These master data management capabilities will aid in moving closer to a best-practice maturity level, delivering tremendous efficiencies and savings as well as revenue growth opportunities as a result of better understanding your customers.

    Trevor Naidoo is a senior director in Industry Strategy and Insight at Oracle.


  • ApiChange Is Released!

    - by Alois Kraus
    I have been working on a little tool to simplify my life, and perhaps yours as a developer as well. It is basically a command line tool that allows you to execute queries on your compiled .NET code base. The main purpose is to find out how big the impact of an API change would be if you changed this or that. Now you can do high level operations like:

    - Diff public types for breaking changes.
    - Who uses a method?
    - Who uses a type?
    - Who implements an interface?
    - Who references me?
    - What format has the binary (32/64, Managed C++, Pure IL, Unmanaged)?
    - Search for all event subscribers and unsubscribers.

    A unique feature is to check for event subscription imbalances. Forgotten event subscriptions are the 90% cause of managed memory leaks. It is done at a per class level. If one class does subscribe to one event more often than it does unsubscribe, it is treated as a possible event subscription imbalance. Another unique ability is to search for users of string literals, which allows you to track users of a string constant, which is not possible otherwise. For incremental builds the ShowRebuildTargets command can be used to identify the dependent targets that need a rebuild after you did compile one assembly. It has some heuristics in place to determine the impact of breaking changes and finds out which targets need to be recompiled as well. It has a ton of other features and an API to access these things programmatically, so you can build upon these simple queries and create even better tools. Perhaps we get a Visual Studio plug-in? You can download it from CodePlex here. It works via XCopy deployment. Simply let it run and check the command line help out. The best feature in my opinion is that the output of nearly all commands can be piped to Excel for further analysis. Since it also reads the pdbs, it can show you the source file name and line number for all matches. The following picture shows the output of a -WhoUsesType query. The following command checks where types from BaseLibraryV1.dll are used inside DependantLibV1.dll. All matches are printed out with the reason and matching item along with file and line number. There is even a hyperlink to the match which will open Visual Studio.

        ApiChange -whousestype "*" BaseLibraryV1.dll -in DependantLibV1.dll -excel

    The "*" is the actual query, which means all types. The syntax is the same as in C#, just that placeholders are allowed ;-). More infos can be found at the Codeplex documentation.

    The tool was developed in a TDD-style manner, which means that it is heavily tested and already used by a quite large user base inside the company I work for. Luckily for you I got the permission to make it public, so you can take advantage of it. It is fully instrumented with tracing. If you find bugs, simply add the -trace command line switch to find out what is failing and send me the output.

    How is it done? Your first guess might be that it uses reflection. Wrong. It is based on Mono Cecil, a free IL parser with a fantastic API to access all internals of a managed assembly. The speed is awesome, and to make it even faster I made the tool heavily multithreaded. The query above did execute in 1.8s with the Excel output. On a rather slow machine I can analyze over 1500 assemblies in less than 40s with a very low memory consumption. The true power of Mono Cecil is that I can load an assembly like any other data file. I have no problems unloading a file, but if I had used reflection I would need to unload a whole AppDomain just to get rid of one assembly in my memory. Just to give you a glimpse how ApiChange.Api.dll can be used, I show you one of the unit tests:

        public void Can_Find_GenericMethodInvocations_With_Type_Parameters()
        {
            // 1. Create an aggregator to collect our matches
            UsageQueryAggregator agg = new UsageQueryAggregator();

            // 2. This is the type we want to search for. Load it via the type query
            var decimalType = TypeQuery.GetTypeByName(TestConstants.MscorlibAssembly, "System.Decimal");

            // 3. Register the type query which searches for uses of the Decimal type
            new WhoUsesType(agg, decimalType);

            // 4. Search for all users of the Decimal type in the DependandLibV1Assembly
            agg.Analyze(TestConstants.DependandLibV1Assembly);

            // Extract matches and assert
            Assert.AreEqual(2, agg.MethodMatches.Count, "Method match count");
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[0].Match.Name);
            Assert.AreEqual("UseGenericMethod", agg.MethodMatches[1].Match.Name);
        }

    Many thanks go from here to Jb Evain for the creation of Mono.Cecil. Without this fantastic piece of code it would have been much, much harder. There are other options around, like the Common Compiler Infrastructure Metadata API, which should do the same thing, but it was not a real option since the Microsoft reader did fail on even simple assemblies (at least in September 2009 this was the case). Besides this I found the CCI APIs much harder to use. The only real competitor was Reflector, which does support many things but does not let me access its cool high level analyze commands. So I decided to dig into the IL specs, and as a result you can query your compiled binaries from the command line or programmatically. The best thing is you try it out for yourself and give me some feedback what you miss. If you want to contribute or have a cool idea what should be added, drop me a mail at A Kraus1@___No [email protected]. There is much more inside the tool that I did not talk about (yet).


  • Is there a Java API for creating an XHTML document?

    - by Bedwyr Humphreys
    I want to provide a simple XHTML representation of each of the resources in a REST web service. At the moment I'm using a StringBuilder to generate these, which is both tedious and error prone. I don't see these changing after I publish the service, but the process of coding each is a bit painful. Is there an XHTML document writer API? Should I just use an XML writer? Which one? Should I just roll my own basic HTML document class? The doctype is the same each time; I just need to set the title, meta tags and body content, most of which (but not all content) is already in HTML for the GETs. Or should I just use StringBuilder and stop whining? ;) Thanks.


  • Tomcat blank page for default error page

    - by praspa
    First off, I'm using Tomcat 5.5 and my .jsp's live in /webapps/foo/bar/*.jsp. I followed the directions here to set up a default 404 error page. In my TOMCAT_HOME/conf/web.xml I entered:

        <error-page>
            <error-code>404</error-code>
            <location>/error.html</location>
        </error-page>

    I dropped copies of a test error.html file into each of these dirs (I wasn't sure where /error.html was referring to):

        /webapps/
        /webapps/foo/
        /webapps/foo/bar/

    Whenever I attempt to access a non-existent page in a browser at URLs /foo/missingpage.html or /foo/bar/missingpage.html, I'm redirected to my error page that exists in /foo/error.html. However, attempting to access a non-existent page in a browser at URL /missingpage.html yields a blank page, and any permutation of /missingDir/missingfile.html will also yield a blank page. Any suggestions? Am I missing some extra configuration? Thanks PR


  • How to Compile Sample Code

    - by James L
    I'm breaking into GUI programming with Android, trying to compile and analyze the Lunar Lander sample program. The instructions for using Eclipse say to select "Create project from existing source", but that option doesn't exist. If I select File - New - Project I can select "Java project from Existing Ant Buildfile". Using that, I've tried selecting various XML files as the "Ant Buildfile", but all give me the "The file selected is not a valid Ant buildfile" error. I just want to run GUI sample projects, preferably with Eclipse. Any useful tips will be appreciated.


  • Alternative for PHPlivedocx?

    - by Derek
    Hello, are there any other free alternatives to PHPLiveDocx? I would like to create a Word document based on templates and user-inputted data. The template will reside on the server, and the client (online or a C# Windows application) will be used to collect user input. Once the data has been collected, a server-side script (PHP) will be used to generate a Word document. So far I have only found PHPLiveDocx; however, I'm not very comfortable consuming web services from a server that I have no control over (am I worrying too much?). I have also thought about using the client to do the work (C# Windows application, Open Office XML). But I'm not sure that's the right way of doing things. Any guidance/help will be really appreciated! Thanks!


  • Can I use the whatthetrend.com API to get daily and weekly twitter trends?

    - by Charles S.
    The Twitter API allows me to receive daily trends and weekly trends (in addition to current trends, of course) as either JSON or XML. Is there an equivalent with the whatthetrend.com API? I don't see a method/parameter at first glance, but I wanted to see if anyone out there knew a way... http://api.whatthetrend.com/api There's a method to look up a trend definition by keyword, so I guess I could use that based off the Twitter API, but I'd rather just load all the data at once rather than have to remotely access the API every time I want to look up a definition. Thanks


  • XSD file, where to get the xmlns argument?

    - by Daok
        <?xml version="1.0" encoding="utf-8"?>
        <xs:schema id="abc"
                   targetNamespace="http://schemas.businessNameHere.com/SoftwareNameHere"
                   elementFormDefault="qualified"
                   xmlns="http://schemas.businessNameHere.com/SoftwareNameHere"
                   xmlns:mstns="http://schemas.businessNameHere.com/SoftwareNameHere"
                   xmlns:xs="http://www.w3.org/2001/XMLSchema">
            <xs:element name="..." type="..." />
            <xs:complexType name="...">

    I am working on a project using XSD to generate .cs files. My question concerns the string "http://schemas.businessNameHere.com/SoftwareNameHere". If I change it, it doesn't work, but the http:// address is not a valid one... What is the logic behind this, and where can I find information about what to put there or how to change it?


  • DBA Best Practices - A Blog Series: Episode 1 - Backups

    - by Argenis
    This blog post is part of the DBA Best Practices series, on which various topics of concern for daily database operations are discussed. Your feedback and comments are very much welcome, so please drop by the comments section and be sure to leave your thoughts on the subject.

    Morning Coffee

    When I was a DBA, the first thing I did when I sat down at my desk at work was checking that all backups had completed successfully. It really was more of a ritual, since I had a dual system in place to check for backup completion: 1) the scheduled agent jobs to back up the databases were set to alert the NOC on failure, and 2) I had a script run from a central server every so often to check for any backup failures. Why the redundancy, you might ask. Well, for one I was once bitten by the fact that database mail doesn't work 100% of the time. Potential causes for failure include issues on the SMTP box that relays your server email, firewall problems, DNS issues, etc. And so to be sure that my backups completed fine, I needed to rely on a mechanism other than having the servers do the talking - I needed to interrogate the servers and ask each one if an issue had occurred. This is why I had a script run every so often. Some of you might have monitoring tools in place like Microsoft System Center Operations Manager (SCOM) or similar 3rd party products that would track all these things for you. But at that moment, we had no resort but to write our own Powershell scripts to do it.

    Now it goes without saying that if you don't have backups in place, you might as well find another career. Your most sacred job as a DBA is to protect the data from a disaster, and only properly safeguarded backups can offer you peace of mind here.

    "But, we have a cluster...we don't need backups"

    Sadly I've heard this line more than I would have liked to. You need to understand that a cluster is comprised of shared storage, and that is precisely your single point of failure. A cluster will protect you from an issue at the Operating System level, and also under an outage of any SQL-related service or dependent devices. But it will most definitely NOT protect you against corruption, nor will it protect you against somebody deleting data from a table - accidentally or otherwise.

    Backup, fine. How often do I take a backup?

    The answer to this is something you will hear frequently when working with databases: it depends. What does it depend on? For one, you need to understand how much data your business is willing to lose. This is what's called Recovery Point Objective, or RPO. If you don't know how much data your business is willing to lose, you need to have an honest and realistic conversation about data loss expectations with your customers, internal or external. From my experience, their first answer to the question "how much data loss can you withstand?" will be "zero". In that case, you will need to explain how zero data loss is very difficult and very costly to achieve, even in today's computing environments. Do you want to go ahead and take full backups of all your databases every hour, or even every day? Probably not, because of the impact that taking a full backup can have on a system. That's what differential and transaction log backups are for. Have I answered the question of how often to take a backup? No, and I did that on purpose. You need to think about how much time you have to recover from any event that requires you to restore your databases. This is what's called Recovery Time Objective. Again, if you go ask your customer how long of an outage they can withstand, at first you will get a completely unrealistic number - and that will be your starting point for discussing a solution that is cost effective. The point that I'm trying to get across is that you need to have a plan. This plan needs to be practiced, and tested. Like a football playbook, you need to rehearse the moves you'll perform when the time comes. How often is up to you, and the objective is that you feel better about yourself and the steps you need to follow when emergency strikes.

    A backup is nothing more than an untested restore

    Backups are files. Files are prone to corruption. Put those two together and realize how you feel about those backups sitting on that network drive. When was the last time you restored any of those? Restoring your backups on another box - that, by the way, doesn't have to match the specs of your production server - will give you two things: 1) peace of mind, because now you know that your backups are good and 2) a place to offload your consistency checks with DBCC CHECKDB or any of the other DBCC commands like CHECKTABLE or CHECKCATALOG. This is a great strategy for VLDBs that cannot withstand the additional load created by the consistency checks. If you choose to offload your consistency checks to another server though, be sure to run DBCC CHECKDB WITH PHYSICALONLY on the production server, and if you're using SQL Server 2008 R2 SP1 CU4 and above, be sure to enable traceflags 2562 and/or 2549, which will speed up the PHYSICALONLY checks further - you can read more about this enhancement here.

    Back to the "How Often" question for a second. If you have the disk, and the network latency, and the system resources to do so, why not back up the transaction log often? As in, every 5 minutes, or even less than that? There's not much downside to doing it, as you will have to clear the log with a backup sooner than later, lest you risk running out of space on your tlog, or even your drive. The one drawback to this approach is that you will have more files to deal with at restore time, and processing each file will add a bit of extra time to the entire process. But it might be worth that time knowing that you minimized the amount of data lost. Again, test your plan to make sure that it matches your particular needs.

    Where to back up to? Network share? Locally? SAN volume?

    This is another topic where everybody has a favorite choice. So, I'll stick to mentioning what I like to do and what I consider to be the best practice in this regard. I like to backup to a SAN volume, i.e., a drive that actually lives in the SAN, and can be easily attached to another server in a pinch, saving you valuable time - you wouldn't need to restore files on the network (slow) or pull drives out of a dead server (been there, done that, it's also slow!). The key is to have a copy of those backup files made quickly, and, if at all possible, to a remote target on a different datacenter - or even the cloud. There are plenty of solutions out there that can help you put such a solution together. That right there is the first step towards a practical Disaster Recovery plan. But there's much more to DR, and that's material for a different blog post in this series.


  • Run Exchange Management Shell cmdlets from Visual Basic/C#/.NET app

    - by nowarninglabel
    Goal: Provide a web service using Visual Basic or C# or .NET that interacts with the Exchange Management Shell, sending it commands to run cmdlets, and returns the results as XML. (Note that we could use any language to write the service, but since it is a Windows box and we have Visual Studio 2008, it seemed like the easiest solution would be to just use it to create a VB/.NET web service. Indeed, it was quite easy to do so, just point and click.)

    Problem: How to run an Exchange Management Shell cmdlet from the web service, e.g.:

        Get-DistributionGroupMember "Live Presidents"

    It seems that we should be able to create a PowerShell script that runs the cmdlet, call that from the command line, and thus just call it from within the program. Does this sound correct? If so, how would I go about it? Thanks. Answers can be language agnostic, but Visual Basic would probably be best since that is what I loaded the test web service up in.
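
    Rather than shelling out to a script, one option is hosting PowerShell in-process through System.Management.Automation. A minimal C# sketch, assuming Exchange 2007's management snap-in (Microsoft.Exchange.Management.PowerShell.Admin) is installed on the box running the service:

        // Reference: System.Management.Automation.dll (PowerShell SDK)
        using System;
        using System.Management.Automation;
        using System.Management.Automation.Runspaces;

        class ExchangeShellDemo
        {
            static void Main()
            {
                // Load the Exchange cmdlets into a fresh runspace.
                RunspaceConfiguration config = RunspaceConfiguration.Create();
                PSSnapInException warning;
                config.AddPSSnapIn("Microsoft.Exchange.Management.PowerShell.Admin", out warning);

                using (Runspace runspace = RunspaceFactory.CreateRunspace(config))
                {
                    runspace.Open();
                    using (Pipeline pipeline = runspace.CreatePipeline())
                    {
                        // Build the cmdlet call programmatically instead of as a string.
                        Command cmd = new Command("Get-DistributionGroupMember");
                        cmd.Parameters.Add("Identity", "Live Presidents");
                        pipeline.Commands.Add(cmd);

                        // Each result is a PSObject; its property bag maps naturally to XML.
                        foreach (PSObject member in pipeline.Invoke())
                        {
                            Console.WriteLine(member.Properties["Name"].Value);
                        }
                    }
                }
            }
        }

    The same pattern works from VB.NET, and walking each PSObject's Properties collection is a straightforward way to build the XML response.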


  • DynamicMethod NullReferenceException

    - by Jeff
    Can anyone tell me what's wrong with my IL code here?

        IL_0000: nop
        IL_0001: ldarg.1
        IL_0002: isinst MyXmlWriter
        IL_0007: stloc.0
        IL_0008: ldloc.0
        IL_0009: ldarg.2
        IL_000a: ldind.ref
        IL_000b: unbox.any TestEnum
        IL_0010: ldfld Int64 value__/FastSerializer.TestEnum
        IL_0015: callvirt Void WriteValue(Int64)/System.Xml.XmlWriter
        IL_001a: nop
        IL_001b: ret

    I'm going crazy here, since I've written a test app which does the same thing as above, but in C#, and in Reflector the IL code from that looks just like my DynamicMethod's IL code above (except my test C# app uses a TestStruct with a public field instead of the private value field on the enum above, but I have skipVisibility set to true)... I get a NullReferenceException. My DynamicMethod's signature is:

        public delegate void DynamicWrite(IMyXmlWriter writer, ref object value, MyContract contract);

    And I'm definitely not passing in anything null. Thanks in advance!


  • Windows Azure Virtual Machine Readiness and Capacity Assessment for SQL Server

    - by SQLOS Team
    Windows Azure Virtual Machine Readiness and Capacity Assessment for Windows Server Machines Running SQL Server

    With the release of MAP Toolkit 8.0 Beta, we have added a new scenario to assess your Windows Azure Virtual Machine readiness. The MAP 8.0 Beta performs a comprehensive assessment of Windows Servers running SQL Server to determine your level of readiness to migrate an on-premise physical or virtual machine to Windows Azure Virtual Machines. The MAP Toolkit then offers suggested changes to prepare the machines for migration, such as upgrading the operating system or SQL Server. MAP Toolkit 8.0 Beta is available for download here.

    Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Now, let's walk through the MAP Toolkit tasks for completing the Windows Azure Virtual Machine assessment and capacity planning. The tasks include the following:

    - Perform an inventory
    - View the Windows Azure VM Readiness results and report
    - Collect performance data to determine VM sizing
    - View the Windows Azure Capacity results and report

    Perform an inventory

    1. To perform an inventory against a single machine or across a complete environment, choose Perform an Inventory to launch the Inventory and Assessment Wizard as shown below:

    2. After the Inventory and Assessment Wizard launches, select either the Windows computers or SQL Server scenario to inventory Windows machines. HINT: If you don't care about completely inventorying a machine, just select the SQL Server scenario. Click Next to continue.

    3. On the Discovery Methods page, select how you want to discover computers and then click Next to continue. Description of discovery methods:

    - Use Active Directory Domain Services -- This method allows you to query a domain controller via the Lightweight Directory Access Protocol (LDAP) and select computers in all or specific domains, containers, or OUs. Use this method if all computers and devices are in AD DS.
    - Windows networking protocols -- This method uses the WIN32 LAN Manager application programming interfaces to query the Computer Browser service for computers in workgroups and Windows NT 4.0-based domains. If the computers on the network are not joined to an Active Directory domain, use only the Windows networking protocols option to find computers.
    - System Center Configuration Manager (SCCM) -- This method enables you to inventory computers managed by System Center Configuration Manager (SCCM). You need to provide credentials to the System Center Configuration Manager server in order to inventory the managed computers. When you select this option, the MAP Toolkit will query SCCM for a list of computers and then MAP will connect to these computers.
    - Scan an IP address range -- This method allows you to specify the starting address and ending address of an IP address range. The wizard will then scan all IP addresses in the range and inventory only those computers. Note: This option can perform poorly if many IP addresses aren't being used within the range.
    - Manually enter computer names and credentials -- Use this method if you want to inventory a small number of specific computers.
    - Import computer names from a file -- Using this method, you can create a text file with a list of computer names that will be inventoried.

    4. On the All Computers Credentials page, enter the accounts that have administrator rights to connect to the discovered machines. This does not need to be a domain account, but it needs to be a local administrator. I have entered my domain account that is an administrator on my local machine. Click Next after one or more accounts have been added. NOTE: The MAP Toolkit primarily uses Windows Management Instrumentation (WMI) to collect hardware, device, and software information from the remote computers. In order for the MAP Toolkit to successfully connect and inventory computers in your environment, you have to configure your machines for inventory through WMI and also allow your firewall to enable remote access through WMI. The MAP Toolkit also requires remote registry access for certain assessments. In addition to enabling WMI, you need accounts with administrative privileges to access desktops and servers in your environment.

    5. On the Credentials Order page, select the order in which you want the MAP Toolkit to connect to the machine and SQL Server. Generally just accept the defaults and click Next.

    6. On the Enter Computers Manually page, click Create to pull up a dialog to enter one or more computer names.

    7. On the Summary page, confirm your settings and then click Finish. After clicking Finish the inventory process will start, as shown below:

    Windows Azure Readiness results and report

    After the inventory progress has completed, you can review the results under the Database scenario. On the tile, you will see the number of Windows Server machines with SQL Server that were analyzed, the number of machines that are ready to move without changes and the number of machines that require further changes. If you click this Azure VM Readiness tile, you will see additional details and can generate the Windows Azure VM Readiness Report. After the report is generated, select View | Saved Reports and Proposals to view the location of the report. Open the WindowsAzureVMReadiness* report in Excel. On the Windows tab, you can see the results of the assessment. This report has a column for the operating system and SQL Server assessment and provides a recommendation on how to resolve it, if a component is not supported.

    Collect performance data

    Launch the Performance Wizard to collect performance information for the Windows Server machines that you would like the MAP Toolkit to suggest a Windows Azure VM size for.

    Windows Azure Capacity results and report

    After the performance metrics are collected, the Azure VM Capacity tile will display the number of Virtual Machine sizes that are suggested for the Windows Server and Linux machines that were analyzed. You can then click on the Azure VM Capacity tile to see the capacity details and generate the Windows Azure VM Capacity Report. Within this report, you can view the performance data that was collected and the Virtual Machine sizes.

    MAP Toolkit 8.0 Beta is available for download here. Your participation and feedback is very important to make the MAP Toolkit work better for you. We encourage you to participate in the beta program and provide your feedback at [email protected] or through one of our surveys.

    Useful references:

    - Windows Azure Homepage
    - How to guides for Windows Azure Virtual Machines
    - Provisioning a SQL Server Virtual Machine on Windows Azure
    - Windows Azure Pricing

    Peter Saddow
    Senior Program Manager - MAP Toolkit Team
