Search Results

Search found 48797 results on 1952 pages for 'read write'.


  • How to convert XML to JSON in Python?

    - by Geuis
    I'm doing some work on App Engine and I need to convert an XML document retrieved from a remote server into an equivalent JSON object. I'm using xml.dom.minidom to parse the XML data returned by urlfetch, and I'm trying to use django.utils.simplejson to convert the parsed XML document into JSON. I'm completely at a loss as to how to hook the two together. Below is the code I have more or less been tinkering with. If anyone can put A and B together, I would be SO grateful. I'm freaking lost.

        from xml.dom import minidom
        from django.utils import simplejson as json

        # pseudo code that returns actual xml data as a string from remote server
        result = urlfetch.fetch(url, '', 'get')
        dom = minidom.parseString(result.content)
        json = simplejson.load(dom)
        self.response.out.write(json)

    Read the article

  • How to run assembly using NUnit's TestSuiteBuilder

    - by Dror Helper
    I'm trying to write a simple method that receives a file and runs it using NUnit. The code I managed to put together using NUnit's source does not work:

        if (openFileDialog1.ShowDialog() != DialogResult.OK)
        {
            return;
        }
        var builder = new TestSuiteBuilder();
        var testPackage = new TestPackage(openFileDialog1.FileName);
        var directoryName = Path.GetDirectoryName(openFileDialog1.FileName);
        testPackage.BasePath = directoryName;
        var suite = builder.Build(testPackage);
        TestResult result = suite.Run(new NullListener(), TestFilter.Empty);

    The problem is that I keep getting an exception thrown by builder.Build stating that the assembly was not found. What am I missing?
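    One common cause is that NUnit's core services haven't been initialized in the hosting process before the suite is built. The sketch below shows the initialization step as I understand it for the NUnit 2.x core API; the exact service names (ServiceManager and SettingsService in NUnit.Util, DomainManager and CoreExtensions in NUnit.Core) should be verified against the NUnit version actually referenced:

        // A hedged sketch, not a verified fix: initialize NUnit's services once
        // at start-up, before calling TestSuiteBuilder.Build (assumes NUnit 2.x).
        ServiceManager.Services.AddService(new SettingsService());
        ServiceManager.Services.AddService(new DomainManager());
        ServiceManager.Services.InitializeServices();
        CoreExtensions.Host.InitializeService();

    It is also worth passing the assembly's full path to TestPackage and checking that the target assembly's dependencies sit next to it in BasePath.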

    Read the article

  • How to generate and print large XPS documents in WPF?

    - by bitbonk
    I would like to generate (and then print or save) big XPS documents (400 pages) from my WPF application. We have a large amount of in-memory data that needs to be written to XPS. How can this be done without getting an OutOfMemoryException? Is there a way I can write the document in chunks? How is this usually done? Should I not be using XPS for large files in the first place? The root cause of the OutOfMemoryException seems to be the creation of the huge FlowDocument: I am creating the full FlowDocument and then sending it to the XPS document writer. Is this the wrong approach?
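    One approach that sidesteps the huge FlowDocument entirely is to write the XPS file one page at a time with a visuals collator, so each page can be garbage-collected once serialized. A minimal sketch, assuming the System.Windows.Xps.Packaging API; 'PageData' and 'RenderPage' are hypothetical placeholders for your data and drawing code:

        // Needs: System.IO, System.Windows.Media, System.Windows.Xps,
        // System.Windows.Xps.Packaging
        static void WriteLargeXps(string path, IEnumerable<PageData> pages)
        {
            var xpsDoc = new XpsDocument(path, FileAccess.Write);
            XpsDocumentWriter writer = XpsDocument.CreateXpsDocumentWriter(xpsDoc);
            var collator = (VisualsToXpsDocument)writer.CreateVisualsCollator();

            collator.BeginBatchWrite();
            foreach (PageData page in pages)
            {
                var visual = new DrawingVisual();
                using (DrawingContext dc = visual.RenderOpen())
                {
                    RenderPage(dc, page);   // hypothetical: draw one page of data
                }
                collator.Write(visual);     // serialized now; eligible for GC afterwards
            }
            collator.EndBatchWrite();
            xpsDoc.Close();
        }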

    Read the article

  • C# iterator for async file copy

    - by uno
    I've been running in circles to find the best solution for my client. We have a server to which images are uploaded via FTP. I want to write an application that scans this server at frequent intervals and, if it finds files, copies them to another (processing) server. So if, during a time cycle, my app finds that there are 100 files, I want to start copying as many files as I can across to the processing server. I figured delegates would be the way to go, but now I've come across iterators... what do the experts say?
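    For the copying itself, delegates, iterators or (on .NET 4) the Task Parallel Library would all work; below is a minimal sketch of the TPL route. The source and destination paths and the degree of parallelism are hypothetical placeholders:

        using System.IO;
        using System.Threading.Tasks;

        // Scan the upload folder and copy everything found, a few files at a time.
        string source = @"\\ftpserver\uploads";        // hypothetical path
        string destination = @"\\processing\incoming"; // hypothetical path

        Parallel.ForEach(
            Directory.GetFiles(source),
            new ParallelOptions { MaxDegreeOfParallelism = 4 }, // avoid saturating the network
            file =>
            {
                string target = Path.Combine(destination, Path.GetFileName(file));
                File.Copy(file, target, true);
                // Optionally move or delete the original so the next scan skips it.
            });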

    Read the article

  • Removing Left Recursion in ANTLR

    - by prosseek
    As explained in http://stackoverflow.com/questions/2652060/removing-left-recursion, there are two ways to remove left recursion: modify the original grammar to remove the left recursion using some procedure, or write the grammar without left recursion in the first place. What do people normally use to remove (or avoid having) left recursion with ANTLR? I've used flex/bison for parsers, but I need to use ANTLR. The only thing I'm concerned about in using ANTLR (or an LL parser in general) is left recursion removal. In a practical sense, how serious a task is removing left recursion in ANTLR? Is this a showstopper in using ANTLR? Or does nobody care about it in the ANTLR community? I like ANTLR's idea of AST generation. In terms of getting an AST quickly and easily, which of the two left recursion removal methods is preferable?

    Read the article

  • PHP MySQLi timeout not working

    - by Marcin
    Hi guys, I have a weird problem with mysqli timeout options. Here you go: I am using mysqli_init() and real_connect() in order to set MYSQLI_OPT_CONNECT_TIMEOUT:

        $this->__mysqli = mysqli_init();
        if (!$this->__mysqli->options(MYSQLI_OPT_CONNECT_TIMEOUT, 1))
            throw new Exception('Timeout settings failed');
        $this->__mysqli->real_connect(host, user, pass, db);
        ....

    Then I am initiating a query on a locked table (LOCK TABLES users WRITE) and it's just hanging, ignoring all my settings, even:

        set_time_limit(1);
        ini_set('max_execution_time', 1);
        ini_set('default_socket_timeout', 1);
        ini_set('mysql.connect_timeout', 1);

    I understand why set_time_limit(1) and max_execution_time are ignored, but why are the other timeouts, and especially MYSQLI_OPT_CONNECT_TIMEOUT, ignored, and how do I solve this? I am using PHP 5.3.1 on Windows and Linux boxes. Please help.

    Read the article

  • UITextView autoscroll to last line

    - by MihaiD
    Hi *, when writing more text in a UITextView than can fit entirely inside it, the text will scroll up and the cursor will often place itself one or two lines above the view's bottom line. This is a bit frustrating, as I want my application to make good use of the entire height of the text view. Basically, what I want is to configure the UITextView to write all the way down to its lowest part and not use that space just for scrolling. I've seen some similar questions here, here and here. However, I've not seen a proper solution yet. Thanks

    Read the article

  • How can I extend DynamicQuery.cs to implement a .Single method?

    - by Yoenhofen
    I need to write some dynamic queries for a project I'm working on. I'm finding that a significant amount of time is being spent by my program in the Count and First methods, so I started to change over to .Single, only to find out that there is no such method. The code below was my first attempt at creating one (mostly copied from the Where method), but it's not working. Help?

        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");
            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);
            return source.Provider.CreateQuery(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }
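    The likely problem is that Single, unlike Where, does not produce another query: it evaluates eagerly to a single element, so the provider needs to Execute the expression rather than CreateQuery it. A hedged sketch of that one change, keeping the rest of the DynamicQuery pattern intact:

        // Sketch: ask the provider to Execute, since Single returns a scalar,
        // not an IQueryable.
        public static object Single(this IQueryable source, string predicate, params object[] values)
        {
            if (source == null) throw new ArgumentNullException("source");
            if (predicate == null) throw new ArgumentNullException("predicate");
            LambdaExpression lambda = DynamicExpression.ParseLambda(source.ElementType, typeof(bool), predicate, values);
            return source.Provider.Execute(
                Expression.Call(
                    typeof(Queryable), "Single",
                    new Type[] { source.ElementType },
                    source.Expression,
                    Expression.Quote(lambda)));
        }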

    Read the article

  • JavaScript - get detailed information about the browser

    - by iconiK
    Basically I'm looking for something to give me easy access to information like useragentstring.com, but in JS, without me parsing the user agent and looking for each possible bit of text. The object could be something like this:

        browser      = UserAgent.Browser;                // Chrome
        browserVer   = UserAgent.BrowserVersion;         // 5.0.342.9
        os           = UserAgent.OperatingSystem;        // Windows NT
        osVer        = UserAgent.OperatingSystemVersion; // 6.1
        layoutEng    = UserAgent.LayoutEngine;           // WebKit
        layoutEngVer = UserAgent.LayoutEngineVersion;    // 533.2

    Does something similar to that exist or do I have to write one myself? Writing yet another user agent parser doesn't seem that easy with all those impersonations going back to the dark ages of the web. Specifically, I'm looking for something that doesn't just split the user agent into parts and give them to me, because that's as useless as the user agent itself; instead it should parse the user agent and recognize the engine, browser, OS, etc. and return the concrete parts only, as in the example.

    Read the article

  • json support in visual studio 2010

    - by jnsohnumr
    Hi, I'm trying to work on a new project parsing some JSON data for a Silverlight 4 project (specifically created as a "Silverlight Business Application - Visual C#" project) using C# in Visual Studio 2010, and I can't find how to include the references to have parsers and native object support for JSON data. As far as I know my development tools are up to date (checked MS update). I know that I can probably just write my own parser, but that seems like re-inventing the wheel. Below are some lines that work in VS 2008 in another project of ours (can't post the files due to their being part of a business app):

        using System.Json;

        results = (JsonObject)JsonObject.Load(e.Result);

    I hope my description is adequate. Thanks for looking, jnsohnumr

    Read the article

  • Continuous Integration for SQL Server Part II – Integration Testing

    - by Ben Rees
    My previous post, on setting up Continuous Integration for SQL Server databases using GitHub, Bamboo and Red Gate's tools, covered the first two parts of a simple Database Continuous Delivery process: putting your database into a source control system, and running a continuous integration process each time changes are checked in. However there is, of course, a lot more to Continuous Delivery than that. Specifically, in addition to the above: putting some actual integration tests into the CI process (otherwise, they don't really do much, do they!?); deploying the database changes with a managed, automated approach; and monitoring what you've just put live, to make sure you haven't broken anything. This post will detail how to set up a very simple pipeline for implementing the first of these (continuous integration testing). NB: A lot of the setup in this post is built on top of the configuration from before, so it might be difficult to implement this post without running through part I first. There'll then be a third post on automated database deployment, followed by a final post dealing with the last item – monitoring changes on the live system.

    In the previous post, I used a mixture of Red Gate products and other 3rd party software – GitHub and Atlassian Bamboo specifically. This was partly because I believe most people work in a heterogeneous environment, using software from different vendors to suit their purposes, and I wanted to show how this could work for this process. For example, you could easily substitute Atlassian's BitBucket or Stash for GitHub, depending on your needs, or use an alternative CI server such as TeamCity, TFS or Jenkins. However, in this post, I'll be mostly using Red Gate products only (other than tSQLt). I do this firstly because I work for Red Gate. However, I also think that in the area of Database Delivery processes, nobody else has the offerings to implement this process fully – so I didn't have any choice!

    Background on Continuous Delivery

    For me, a great source of information on what makes a proper Continuous Delivery process is the Jez Humble and David Farley classic: Continuous Delivery – Reliable Software Releases through Build, Test, and Deployment Automation. This book is not, of course, primarily about databases, and the process I outline here and in the previous article is a gross simplification of what Jez and David describe (not least because it's that much harder for databases!). However, a lot of the principles that they describe can be equally applied to database development and, I would argue, should be. As I say however, what I describe here is a very simple version of what would be required for a full production process. A couple of useful resources on handling some of these complexities can be found in the following two references: Refactoring Databases – Evolutionary Database Design, by Scott J Ambler and Pramod J. Sadalage; and Versioning Databases – Branching and Merging, by Scott Allen. In particular, I don't deal at all with the issues of multiple branches and merging of those branches, an issue made particularly acute by the use of GitHub. The other point worth making is that, in the words of Martin Fowler: Continuous Delivery is about keeping your application in a state where it is always able to deploy into production. I.e. we are not talking about continuously delivering updates to the production database every time someone checks in an amendment to a stored procedure.
    That is possible (and what Martin calls Continuous Deployment). However, again, that's more than I describe in this article. And I doubt I need to remind DBAs or Developers to Proceed with Caution!

    Integration Testing

    Back to something practical. The next stage, building on our set up from the previous article, is to add in some integration tests to the process. As I say, the CI process, though interesting, isn't enormously useful without some sort of test process running. For this we'll use the tSQLt framework, an open source framework designed specifically for running SQL Server tests. tSQLt is part of Red Gate's SQL Test, found at http://www.red-gate.com/products/sql-development/sql-test/, or can be downloaded separately from www.tsqlt.org – though I'll provide a step-by-step guide below for setting this up.

    Getting tSQLt set up via SQL Test

    Click on the link http://www.red-gate.com/products/sql-development/sql-test/ and click on the blue Download button to download the Red Gate SQL Test product, if not already installed. Follow the install process for SQL Test to install the SQL Server Management Studio (SSMS) plugin on to your machine, if not already installed. Open SSMS. You should now see SQL Test under the Tools menu. Clicking this link will give you the basic SQL Test dialogue. Though we've installed the SQL Test product, we haven't yet installed the tSQLt test framework on any particular database. To do this, we need to add our RedGateApp database using this dialogue, by clicking on the + Add Database to SQL Test… link, selecting the RedGateApp database and clicking the Add Database link. In the next screen, SQL Test describes what will be installed on the database for the tSQLt framework. Also in this dialogue, uncheck the "Add SQL Cop tests" option (shown below). SQL Cop is a great set of pre-defined tests that work within the tSQLt framework to check the general health of your SQL Server database. However, we won't be using them in this particular simple example. Once you've clicked on the OK button, the changes described in the dialogue will be made to your database; some of these are shown in the left-hand side below. We've now installed the framework. However, we haven't actually created any tests, so this will be the next step. But, before we proceed, we've made an update to our database, so we should again check this in to source control, adding comments as required. It's also worth a quick check that your build still runs with the new additions! (And a quick check of the RedGateAppCI database shows that the changes have been made.)

    Creating and Testing a Unit Test

    There are, of course, a lot of very interesting unit tests that you could and should set up for a database. The great thing about the tSQLt framework is that you can write these in SQL. The example I'm going to use here is pretty Mickey Mouse – our database table is going to include some email addresses as reference data and I want to check whether these are all in a correct email format. Nothing clever, but it illustrates the process and hopefully shows the method by which more interesting tests could be set up.

    Adding Reference Data to our Database

    To start, I want to add some reference data to my database, and have this source controlled (as well as the schema).
    First of all I need to add some data to my solitary table – this can be done a number of ways, but I'll do it in SSMS for simplicity. I then add some reference data to my table. Currently this reference data just exists in the database. For proper integration testing, this needs to form part of the source-controlled version of the database – and so needs to be added to the Git repository. This can be done via SQL Source Control, though first a Primary Key needs to be added to the table. Right-click the table, select Design, then right-click on the first "id" row. Then click on "Set Primary Key". NB: once this change is made, click Save to save the change to the table. Then, to source control this reference data, right-click on the table (dbo.Email) and select the following option. In the next screen, link the data in the Email table by selecting it from the list and clicking "save and close". We should at this point re-commit the changes (both the addition of the Primary Key, and the data) to the Git repo. NB: From here on, I won't show screenshots for the GitHub side of things – it's the same each time: whenever a change is made in SQL Source Control and committed to your local folder, you then need to sync this in the GitHub Windows client (as this is where the build server, Bamboo, is taking it from). An interesting point to note here: when these changes are committed in SQL Source Control (right-click database and select "Commit Changes to Source Control.."), the display gives a warning about possibly needing a migration script for the "Add Primary Key" step of the changes. This isn't actually necessary in this case, but this mechanism would allow you to create override scripts to replace the default change scripts created by the SQL Compare engine (which runs underneath SQL Source Control). Ignoring this message (!), we add a comment and commit the changes to Git. I then sync these, run a build (or the build gets run automatically), and check that the data is being deployed over to the target RedGateAppCI database.

    Creating and Running the Test

    As I mention, the test I'm going to use here is a very simple one: are the email addresses in my reference table valid? This isn't, of course, a full test of email validation (I expect the email addresses I've chosen here aren't really those of the Fab Four) – but just a very basic check of the format used. I've taken the relevant SQL from this Stack Overflow article. In SSMS select "SQL Test" from the Tools menu, then click on + New Test. In the next screen, give your new test a name, and also enter a name in the Test Class box (test classes are schemas that help you keep things organised). Also check that the database in which the test is going to be created is correct – RedGateApp in this example. Click "Create Test". After closing a couple of subsequent dialogues, you'll see a dummy script for the test that needs filling in. We now need to define the SQL for our test. As mentioned before, tSQLt allows you to write your unit tests in T-SQL, and the code I'm going to use here is as below.
    This needs to be copied and pasted into the query window, to replace the default given by tSQLt:

        -- Basic email check test
        ALTER PROCEDURE [MyChecks].[test Check Email Addresses]
        AS
        BEGIN
            SET NOCOUNT ON;

            DECLARE @Output VARCHAR(MAX);
            SET @Output = '';

            SELECT @Output = @Output + Email + CHAR(13) + CHAR(10)
            FROM dbo.Email
            WHERE Email NOT LIKE '%_@__%.__%';

            IF @Output > ''
            BEGIN
                SET @Output = CHAR(13) + CHAR(10) + @Output;
                EXEC tSQLt.Fail @Output;
            END
        END;

    Once this script is entered, hit execute to add the Stored Procedure to the database. Before committing the test to source control, it's worth just checking that it works! For a positive test, click on "SQL Test" from the Tools menu, then click Run Tests. You should see a green tick to indicate success! But of course, what we also need to do is test that this is actually doing something by showing a failed test. Edit one of the email addresses in your table to an incorrect format, then re-run the same SQL Test as before and you'll see it fail. Great – we now know that our test is really doing something! You'll also see a useful error message at the bottom of SSMS. (Leave the email address as invalid for now, for the next steps.) The next stage is to check this new test in to source control again, by right-clicking on the database and checking in the changes with a commit message (and not forgetting to sync in the GitHub client).

    Checking that the Tests are Running as Integration Tests

    After the changes above are made, and after a build has run on Bamboo (manual or automatic), looking at the Stored Procedures for the RedGateAppCI database, the SPROC for the new test has been moved over to the database. However this is not exactly what we were after. We didn't want to just copy objects from one database to another, but actually run the tests as part of the build/integration test process. I.e. we're continuously checking any changes we make (in this case, to the reference data emails), to ensure we're not breaking a test that we've set up. The behaviour we want to see is that, if we check in static data that is incorrect (as we did in step 9 above) and we have the tSQLt test set up, then our build in Bamboo should fail. However, re-running the build shows – sadly – a successful build! To make sure the tSQLt tests are run as part of the integration test, we need to amend a switch in the Red Gate CI config file. First, navigate to the file sqlCI.targets in your working folder. Edit this document, make the following change, save the document, then commit and sync this change in the GitHub client:

        <!-- tSQLt tests -->
        <!-- Optional -->
        <!-- To run tSQLt tests in source control for the database, enter true. -->
        <enableTsqlt>true</enableTsqlt>

    Now, if we re-run the build in Bamboo (NB: I've moved to a new server here, hence different address and build number): superb, a broken build!! The error message isn't great here, so to get more detailed info, click on the full build log link on this page (below the fold). The interesting part of the log shown is towards the bottom. Pulling out this part:

        21-Jun-2013 11:35:19    Build FAILED.
        21-Jun-2013 11:35:19
        21-Jun-2013 11:35:19    "C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj" (default target) (1) ->
        21-Jun-2013 11:35:19    (sqlCI target) ->
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: RedGate.Deploy.SqlServerDbPackage.Shared.Exceptions.InvalidSqlException: Test Case Summary: 1 test case(s) executed, 0 succeeded, 1 failed, 0 errored. [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [MyChecks].[test Check Email Addresses] failed: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: ringo.starr@beatles [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: +----------------------+ [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]
        21-Jun-2013 11:35:19    EXEC : sqlCI error occurred: |Test Execution Summary| [C:\Users\Administrator\bamboo-home\xml-data\build-dir\RGA-RGP-JOB1\sqlCI.proj]

    As a final check, we should make sure that, if we now fix this error, the build succeeds. So in SSMS, I'm going to correct the invalid email address, then check this change in to SQL Source Control (with a comment), commit to GitHub, and re-run the build. This should have fixed the build – and it worked!

    Summary

    This has been a very quick run through the implementation of CI for databases, including tSQLt tests to check whether your database updates are working. The next post in this series will focus on automated deployment – we've tested our database changes; how can we now deploy these to target sites?

    Read the article

  • How to reload Ext.tree.TreePanel on demand?

    - by Sergei Stolyarov
    I want to create an Ext.tree.TreePanel component and periodically load content from an external URL. So I've written something like:

        new Ext.tree.TreePanel({
            root: {
                nodeType: 'async',
                text: 'asdasd',
                draggable: false,
                id: 'folders-tree-root'
            },
            loader: new Ext.tree.TreeLoader()
        });

    And now I want to reload this tree, so I write:

        tree.loader.dataUrl = 'folders-sample.json';
        tree.root.reload();

    And nothing happens.

    Update: the only way I've found is to set some placeholder value for the dataUrl param on TreeLoader creation:

        new Ext.tree.TreePanel({
            root: {
                nodeType: 'async',
                text: 'asdasd',
                draggable: false,
                id: 'folders-tree-root'
            },
            loader: new Ext.tree.TreeLoader({ dataUrl: 'something' })
        });

    Read the article

  • Roll up project-level tasks to the project collection portal in TFS2010

    - by adam.mokan
    I have a Project Collection set up in my TFS2010RC deployment. I have two Projects set up under this collection with their own task lists, which are populated with data. I fully expected the tasks from these individual projects to "roll up" and appear in the task list at the Project Collection level, but they do not. The Project Collection task list is empty. Basically, I'm looking to provide a view so a supervisor could see all tasks across projects quickly and easily. I'm sure I could write a Reporting Services report, but it seems like this is something so basic that it would have been included and just needs to be turned on or something. I'm sure I'm probably missing something really simple here. Thanks.

    Read the article

  • Should I suppress CA1062: Validate arguments of public methods?

    - by brickner
    I've recently upgraded my project to Visual Studio 2010 from Visual Studio 2008. In Visual Studio 2008, this Code Analysis rule doesn't exist. Now I'm not sure if I should use this rule or not. I'm building an open source library, so it seems important to keep people safe from making mistakes. However, if all I'm going to do is throw ArgumentNullException when the parameter is null, it seems like writing useless code, since an exception will be thrown even if I don't write that code. Should I remove that rule or fix the violations?
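    For reference, the guard clause CA1062 asks for looks like the sketch below (the type and names are hypothetical). The practical difference is fail-fast behaviour: without the guard, a null argument surfaces later as a NullReferenceException from somewhere inside the method, with no hint of which argument was at fault.

        // A sketch of the guard CA1062 expects; 'Order' is a hypothetical type.
        public void Process(Order order)
        {
            if (order == null)
                throw new ArgumentNullException("order");  // fails fast, names the argument

            // Without the guard, this line would throw NullReferenceException
            // instead, telling the caller nothing about which argument was null.
            Console.WriteLine(order.Id);
        }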

    Read the article

  • Cocoa structs and NSMutableArray

    - by Circle
    I have an NSMutableArray that I'm trying to use to store and access some structs. How do I do this? addObject gives me an error saying "Incompatible type for argument 1 of addObject". Here is an example ('in' is an NSFileHandle, 'array' is the NSMutableArray):

        // Write points
        for (int i = 0; i < 5; i++) {
            struct Point p;
            buff = [in readDataOfLength:1];
            [buff getBytes:&(p.x) length:sizeof(p.x)];
            [array addObject:p];
        }

        // Read points
        for (int i = 0; i < 5; i++) {
            struct Point p = [array objectAtIndex:i];
            NSLog(@"%i", p.x);
        }

    Read the article

  • Complex reporting on subversion (possibly Export Subversion log into database for reporting)

    - by James A. N. Stauffer
    What is the best way to do complex reporting on Subversion logs, like the following, for each file? file, directory, last revision date, previous revision date (where the previous revision is at least 30 days older than the last), days diff (between revision dates). Since Subversion allows one revision to change multiple files, I assume svn log needs to be run against each file individually. Ideas (that don't seem very good):

    1. Shell scripting to produce a CSV file to be imported into a DB. The following is a start but doesn't show the filename:

        find . -name "." -print | xargs -l svn log -l 2

    2. Shell scripting to produce XML and then use XSLT to create CSV to import into a DB. It might use a similar command to the above, but would still have some of the same limitations.

    3. Write a program to just parse the log on the whole directory tree, make one insert to the DB per revision/file combination, and then query the DB.

    Read the article

  • HttpWebResponse with MJPEG and multipart/x-mixed-replace; boundary=--myboundary response content typ

    - by arri.me
    I have an ASP.NET application that needs to show a video feed from a security camera. The video feed has a content type of 'multipart/x-mixed-replace; boundary=--myboundary', with the image data between the boundaries. I need assistance with passing that stream of data through to my page so that the client-side plugin I have can consume the stream just as it would if I browsed to the camera's web interface directly. The following code does not work:

        // Get response data
        byte[] data = HtmlParser.GetByteArrayFromStream(response.GetResponseStream());
        if (data != null)
        {
            HttpContext.Current.Response.OutputStream.Write(data, 0, data.Length);
        }
        return;
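    Since an MJPEG stream never ends, buffering the whole response into a byte array can never complete. A hedged sketch of the usual alternative: copy the camera's content type across and pump bytes through to the client as they arrive (the HttpWebResponse setup is assumed to exist already):

        // A sketch, assuming 'response' is the HttpWebResponse from the camera.
        HttpResponse client = HttpContext.Current.Response;
        client.ContentType = response.ContentType;  // multipart/x-mixed-replace; boundary=--myboundary
        client.BufferOutput = false;                // don't wait for the (endless) stream to finish

        using (Stream source = response.GetResponseStream())
        {
            byte[] buffer = new byte[8192];
            int read;
            while ((read = source.Read(buffer, 0, buffer.Length)) > 0
                   && client.IsClientConnected)
            {
                client.OutputStream.Write(buffer, 0, read);
                client.Flush();                     // push each chunk to the browser immediately
            }
        }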

    Read the article

  • Scheme homework help

    - by John
    Okay, so I have started a new language in class. We are learning Scheme and I'm not sure how to do it. When I say learning a new language, I mean thrown homework and told to figure it out. A couple of the problems have got me stumped. My first one is: write a Scheme function that returns true (the Boolean constant #t) if the parameter is a list containing n a's followed by n b's, and false otherwise. Here's what I have right now:

        (define aequalb (lamda (list) (let ((head (car list)) (tail(cdr list))) (if(= 'a head) ((let ((count (count + 1))) (let (( newTail (aequalb tail)))) #f (if(= 'b head) ((let ((count (count - 1))) (let (( newTail (aequalb tail)))) #f (if(null? tail) (if(= count 0) #t #f)))))))))))

    I know this is completely wrong, but I've been trying, so please take it easy on me. Any help would be much appreciated.

    Read the article

  • How do I use local memory in OpenCL?

    - by splicer
    I've been playing with OpenCL recently, and I'm able to write simple kernels that use only global memory. Now I'd like to start using local memory, but I can't seem to figure out how to use get_local_size() and get_local_id() to compute one "chunk" of output at a time. For example, let's say I wanted to convert Apple's OpenCL Hello World example kernel to something that uses local memory. How would you do it? Here's the original kernel source:

        __kernel void square(
            __global float *input,
            __global float *output,
            const unsigned int count)
        {
            int i = get_global_id(0);
            if (i < count)
                output[i] = input[i] * input[i];
        }

    If this example can't easily be converted into something that shows how to make use of local memory, any other simple example will do. Thanks!

    Read the article

  • dasBlog

    - by Daniel Moth
    Some people like blogging on a site that is completely managed by someone else (e.g. http://wordpress.com/) and others, like me, prefer hosting their own blog at their own domain. In the latter case you need to decide what blog engine to install on your web space to power your blog. There are many free blog engines to choose from (e.g. the one from http://wordpress.org/). If, like me, you want to use a blog engine that is based on the .NET platform, you have many choices including BlogEngine.NET, Subtext and the one I picked: dasBlog. In this post I'll describe the steps I took to get going with the open source dasBlog (home page, source page).

    A. Installing

    First I installed dasBlog on my local Windows 7 machine where I have IIS7 installed. To install dasBlog, I started by clicking the "Install" button on its web gallery page. After that I went through configuration, theming and adding content as described below. Once I was happy that everything was working correctly on the local machine, I set this up on a hosting service. I went for a Windows IIS7 shared hosting 3 month Economy plan from GoDaddy. The dasBlog site lists a bunch of other hosts. You can read the installation instructions for dasBlog, and with GoDaddy I just had to click one button since it is available as part of their quick-install apps. With GoDaddy I had a previewdns option that allowed me to play around and preview my site before going live.

    B. Configuring

    After it was installed (on local machine and/or hosting provider), I followed the obvious steps to create an admin user and logged in. This displays an admin navigation bar with the following options:

    1. Navigator Links: I decided I was not going to use this feature. I manage links on the side of my blog manually elsewhere as part of the theme. So, I deleted every entry on this page and ignored it thereafter.

    2. Blogroll: Ditto – same comment as for Navigator Links.

    3. Content Filters: I did not delete (or add) these, but I did ensure both checkboxes are not checked. I.e. I am not using this feature now, but I may return to it in the future.

    4. Activity: This is a read-only view of various statistics. So nothing to configure here, but useful to come back to for complementary statistics to whatever other statistical package you use (e.g. free stats as part of the hosting; I also use feedburner for syndication stats).

    5. Cross-posting: I did not need that, so I turned it off via the Configuration Settings discussed next.

    6. Configuration Settings: This is where the bulk of the configuration for the blog takes place, and the settings are stored in a single XML file: the Site.Config file. There are truly self-explanatory options to pick for Basic Settings, Services Settings and Services to Ping, Syndication Settings (this is where you link to your feedburner name if you have one) and Mail to Weblog Settings (I keep this turned off). There are also "Xml Storage System Settings" (I keep this turned off), "OpenId Settings" (I allow OpenID commenters), "Spammer Settings" (enable captcha, never show email addresses) and "Comment settings" (enable comments, don't allow on older posts, don't allow html). There are also Appearance Settings (I checked the "Use Post Title for Permalink", replaced spaces with hyphens and unchecked the "Use Unique Title"). Finally, there are also Notification Settings, but they are a bit hit and miss in my case, in that I don't always get the emails (still investigating this).

    C. Adding Content

    You can add content via the "Add Entry" link on the admin navigation bar, or by configuring the "Mail to Weblog" settings and sending email, or, do what I've started doing, use Live Writer (also the team has a blog). Another way to add content is programmatically if, for example, you are migrating content from another blog (and I'll cover that in a separate post, sharing the code). What you should know is that all blog content (posts and comments) lives in XML files in a folder called "content" under your dasBlog installation.

    D. Theming

    There is a very good guide about themes for dasBlog; there is also a similar guide with screenshots (scroll down to "So how do I create a theme") and the dasBlog macro reference. When you install dasBlog, there are many themes available; each theme is in its own folder (taking its name from the folder) under the themes folder. You may have noticed that you can switch between these via the "Appearance Settings" described above (look for the combobox after the Default Theme label). I created my own theme by copy-pasting an existing theme folder, renaming it and then switching to it as the default. I then opened the folder in Visual Studio and hacked around the HTML in the 3 files (itemTemplate, homeTemplate and dayTemplate). These files have a blogtemplate file extension, which I temporarily renamed to HTML as I was editing them. There is no more advice I can offer here as this is a matter of taste, and the aforementioned links are all I used. Personally, I had salvaged the CSS (and structure) from my previous blog and wanted to make this one match it as closely as possible – I think I have succeeded.

    E. If you run into any issue with dasBlog...

    ...use your favorite search engine to find answers. Many bloggers have been using this engine for a while and have documented issues and workarounds over time. One such example is ScottHa's dasBlog category; another example is therightstuff, where I "borrowed" the idea/macro for the outlook-style on-page navigation. If you don't find what you want through searching, try posting a question to the forums. Comments about this post welcome at the original blog.

    Read the article

  • Load and Web Performance Testing using Visual Studio Ultimate 2010 - Part 3

    - by Tarun Arora
    Welcome back once again. In Part 1 of Load and Web Performance Testing using Visual Studio 2010, I talked about why performance testing the application is important, the test tools available in Visual Studio Ultimate 2010, and various test rig topologies. In Part 2 of Load and Web Performance Testing using Visual Studio 2010, I discussed the details of web performance and load tests, as well as why it's important to follow a goal-based pattern while performance testing your application. In Part 3 I'll be discussing test result analysis, test result drill-through, test report generation, test run comparison, the ASP.NET Profiler and some closing thoughts.

    Test Results – I see some creepy worms!

    In Part 2 we put together a web performance test and a load test; let's run the load test to see how the Web site responds to the load simulation. While the load test is running you will be able to see close-to-real-time analysis in the Load Test Analyser window. You can use the Load Test Analyser to conduct load test analysis in three ways:

    1. Monitor a running load test – A condensed set of the performance counter data is maintained in memory. To prevent the results memory requirements from growing unbounded, up to 200 samples for each performance counter are maintained. This includes 100 evenly spaced samples that span the current elapsed time of the run and the most recent 100 samples.

    2. After the load test run is completed – The test controller spools all collected performance counter data to a database while the test is running. Additional data, such as timing details and error details, is loaded into the database when the test completes. The performance data for a completed test is loaded from the database and analysed by the Load Test Analyser. The summary view provides key results in a format that is compact and easy to read. You can also print the load test summary, which is generated after the test has completed or been stopped.

    3. Analyse the load test results of a previously run load test – We'll see this in the section where I discuss comparison between two test runs.

    The performance counters can be plotted on the graphs. You also have the option to highlight a selected part of the test and view details, and to drill down to the user activity chart, where you can hover over to see more details of the test run.

    Generate Report => Test Run Comparisons

    The level of reports you can generate using the Load Test Analyser is astonishing. You have the option to create Excel reports and conduct side-by-side analysis of two test results, or to track trend analysis. The tool also allows you to export the graph data either to MS Excel or to a CSV file. You can view the ASP.NET Profiler report to conduct further analysis as well. View Data and Diagnostic Attachments opens the Choose Diagnostic Data Adapter Attachment dialog box to select an adapter to analyse the result type. For example, you can select an IntelliTrace adapter, click OK and open the IntelliTrace summary for the test agent that was used in the load test.

    Compare results

    This creates a set of reports that compares the data from two load test results using tables and bar charts. I have taken these screenshots from the MSDN documentation; I would highly recommend exploring the wealth of knowledge available on MSDN.
    Leaving Thoughts

    While load testing the application with an excessive load for a longer duration of time, I managed to bring the IIS to its knees by piling up a huge queue of requests waiting to be processed. This clearly means that the IIS had run out of threads, as all the threads were busy processing existing requests. One easy way of fixing this is by increasing the default number of allocated threads, but that might escalate the problem; the better suggestion is to try and drill down to the actual root cause. Whenever the garbage collection runs it stops processing any pages, so all requests that come in during that period are queued up; but realistically the garbage collection completes in a fraction of a second. To understand this better, let's look at the .NET heap. It is divided into the large heap and the small heap; anything greater than 85 kB in size will be allocated to the large object heap. The large object heap is non-compacting, and remember, large objects are expensive to move around, so if you are allocating something in the large object heap, make sure that you really need it! The small object heap, on the other hand, is divided into generations, so all objects that are supposed to be short-lived should live in Gen-0, and the long-living objects eventually move to Gen-2 as garbage collection goes through. As you can see in the picture below, all < 85 kB objects are first assigned to Gen-0. When Gen-0 fills up and a new object comes in and finds Gen-0 full, the garbage collection process is started: the process checks for all the dead objects, marks them as valid candidates for deletion to free up memory, and promotes all the remaining objects in Gen-0 to Gen-1. So in the future, whenever you clean up Gen-1 you have to clean up Gen-0 as well. When you fill up Gen-0 again, all of Gen-1's dead objects are collected and the rest are moved to Gen-2, and Gen-0's objects are moved to Gen-1 to free up Gen-0; but by this time your garbage collection process has started to take much more time than it usually takes. Now, as I mentioned earlier, when garbage collection is being run, all page requests that come in during that period are queued up. This may explain why page requests are getting queued up; apart from this, it could also be the case that you are waiting for a long-running database process to complete.

    Let's explore the heap a bit more… What is really a case of crisis is when objects live long enough to make it to Gen-2 and then die; this is definitely a high-cost operation. But sometimes you need objects in memory. For example, when you cache data you hold on to the objects because you need to use them right across the user session, which is acceptable. But if you wanted to see what extreme caching can do to your server, then write a simple application that chucks a lot of data into the cache and run a load test over it for about 10-15 minutes, forcing a lot of data into memory and causing the heap to run out of memory. If you get to such a state where you start running out of memory, the IIS, as a mode of recovery, restarts the worker process. It is a great way to free up all your memory in the heap, but it also clears the cache. The problem with this is, if the customer had 10 items in their shopping basket and that data was stored in the application cache, the user's basket will now be empty, forcing them either to get frustrated and go to a competitor website or, if the customer is really patient, give it another try!
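    As an aside, these generation mechanics can be watched from code. A minimal console sketch, relying on the documented 85 KB large-object threshold and on the large object heap being reported as generation 2:

        using System;

        class GenerationDemo
        {
            static void Main()
            {
                var big = new byte[90 * 1024];               // > 85 KB: goes to the large object heap
                Console.WriteLine(GC.GetGeneration(big));    // 2 - the LOH is collected with Gen-2

                var small = new object();
                Console.WriteLine(GC.GetGeneration(small));  // 0 - small objects start in Gen-0

                GC.Collect();                                // survive one collection...
                Console.WriteLine(GC.GetGeneration(small));  // 1 - ...and get promoted to Gen-1
            }
        }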
    How can you address this? Well, there are two ways:

    1. Workaround – An x86 processor only allows a maximum of 4 GB of RAM, which means the machine effectively has around 3.4 GB of RAM available; the OS needs about 1.5 GB of RAM to run efficiently, and the IIS and the .NET framework also need their share of memory, leaving you a heap of around 800 MB to play with. Because Team builds by default build your application in 'Compile as Any CPU' mode, the application is built such that it will run in x86 mode on an x86 processor and in x64 mode on an x64 processor. The problem with this is that not all applications are really x64 compatible, especially if you are using COM objects or external libraries. So, as a quick win, if you compile your application in x86 mode, by changing the 'compile as any' selection to 'compile as x86' in the team build, you will be able to run your application on an x64 machine in x86 mode (WOW – by running Windows on Windows), and what that means is you could use 8 GB+ worth of RAM. If you take away everything else, your application will roughly get a heap size of at least 4 GB to play with, which is immense. If you need a heap size of more than 4 GB, you have either built software for NASA or there is something fundamentally wrong in your application.

    2. Solution – Now that you have put a workaround in place, the IIS will not restart the worker process as regularly, which means you can take a breather and start working to get to the root cause of the memory leak. But this begs a question: "How do I identify possible memory leaks in my application?" Well, I won't say that there is one single tool that can tell you where the memory leak is, but trust me, 'Performance Profiling' is a great starting point; it definitely gets you started in the right direction. Let's have a look at how.

    Performance Wizard – Start the Performance Wizard and select Instrumentation; this lets you measure function call counts and timings. Before running the performance session, right-click the performance session settings and choose Properties from the context menu to bring up the Performance Session properties page and, as shown in the screenshot below, check the checkboxes in the group '.NET memory profiling collection', namely 'Collect .NET object allocation information' and 'Also collect the .NET Object lifetime information'.

    Now if you fire off the profiling session on your pages, you will notice that the results allow you to view 'Object Lifetime', which shows you the number of objects that made it to Gen-0, Gen-1, Gen-2, the large heap, etc. Another great feature of the profiler is that if your application has > 5% of cases where objects die right after making it to the Gen-2 storage, a threshold alert is generated to warn you. Since you also have the option to view the most expensive methods, and by capturing the IntelliTrace data, you can drill in to narrow down to the line of code that is the root cause of the problem. Well, now we have seen how crucial memory management is, and how easy Visual Studio Ultimate 2010 makes it for us to identify and reproduce the problem with the best-of-breed tools in the product.

    Caching

    One of the main ways to improve performance is caching, which basically means you tell the web server that instead of going to the database for each request, you keep the data in the webserver, and when the user asks for it you serve it from the webserver itself. BUT that can have consequences!
    Let's look at some code; trust me, caching code is not very intuitive. I define a cache key for almost all searches made through the common search page and cache the results. The approach works fine: the first time I get the data from the database and the second time the data is served from the cache – a significant performance improvement – EXCEPT when two users try to do the same operation and run into each other. But it is easy to handle this by adding a lock, as you can see in the snippet below. So, as long as a user comes in and finds that the cache is empty, the user takes the lock and starts to populate the cache: no more concurrency issues. But let's say you are processing 10 requests per second. By the time I have taken the lock to get the results from the database, 9 other users came in and found that the cache key is null, so after I have come out and populated the cache they will still go in to get the results again. The application will still be faster, because the next set of 10 users, and so on, would continue to get data from the cache. BUT if we added another null check after taking the lock, and before the actual call to the db, then the 9 users who followed me would not make the extra trip to the database at all, and that would really increase the performance. But didn't I say the code won't be very intuitive? Maybe you should leave a comment: you don't want another developer to come in and wonder why you are checking the cache key for null twice!!!

    The downside of caching is that you are storing the data outside of the database, and the data could be wrong, because updates applied to the database would make the data cached at the web server out of sync. So, how do you invalidate the cache? Well, if you only had one way of updating the data, let's say only one entry point for data updates, you could write some logic to say that every time new data is entered, set the cache object to null. But this approach will not work as soon as you have several ways of feeding data to the system, or your system is scaled out across a farm of web servers. The perfect solution to this is micro-caching, which means you cache the query for a set time duration and invalidate the cache after that set duration. The advantage is that every time the user queries for that data within the time span for which you have cached the results, there are no calls made to the database and the data is served right from the server, which makes the response immensely quick. Now, figuring out the appropriate time span for which you micro-cache the query results really depends on the application. Let's say your website gets 10 requests per second: if you retain the cached results for even 1 minute, you will have immense performance gains, reducing hits to the database for searching by 90%. Ever wondered why, when you go to e-bookers.com or xpedia.com or yatra.com to book a flight, and you click on the book button because the fare seems too exciting, you get an error message telling you that the fare is not valid any more? Yes, exactly => that is a cache failure! These travel sites and price-compare engines are not going to hit the database every time you hit the compare button; instead the results are served from the cache, because the query results are micro-cached. It's a perfect trade-off: by micro-caching the results the site gains 100% performance benefits, but every once in a while annoys a customer because the fare has expired.
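    The snippets referred to above are screenshots in the original post, so here is a minimal sketch of the pattern being described: a lock plus a second null check, with an absolute expiry for the micro-caching variant. The cache key, lock object, Result type and QueryDatabase call are hypothetical placeholders:

        // Double-checked cache access with a 60-second micro-cache expiry.
        private static readonly object CacheLock = new object();

        public static List<Result> GetSearchResults(string cacheKey)
        {
            var results = HttpRuntime.Cache[cacheKey] as List<Result>;
            if (results == null)
            {
                lock (CacheLock)
                {
                    // Second check: the 9 users queued behind the first one find
                    // the cache already populated and skip the database trip.
                    results = HttpRuntime.Cache[cacheKey] as List<Result>;
                    if (results == null)
                    {
                        results = QueryDatabase();          // hypothetical data-access call
                        HttpRuntime.Cache.Insert(cacheKey, results, null,
                            DateTime.UtcNow.AddSeconds(60), // micro-cache: expire after 60 seconds
                            System.Web.Caching.Cache.NoSlidingExpiration);
                    }
                }
            }
            return results;
        }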
    But the trade-off works in favour of these sites: they are still able to process 30+ page requests per second and cater to the site traffic, maybe losing one customer every once in a while to a competitor who is also using a similar caching technique. And what are the odds that the user will not come back to their site sooner or later?

    Recap

    Resources: Below are some key resources you might like to review. I would highly recommend the documentation, walkthroughs and videos available on MSDN. You can always make use of Fiddler to debug Web Performance Tests. Some community test extensions and plug-ins available on CodePlex might also be of interest to you.

    The Road Ahead: Thank you for taking the time out to read this blog post; you may also want to read Part I and Part II if you haven't so far. If you enjoyed the post, remember to subscribe to http://feeds.feedburner.com/TarunArora. Questions/feedback/suggestions? Please leave a comment. Next, in 'Load Testing in the Cloud', I'll be working on exploring the possibilities of running Test Controllers/Agents in the Cloud. See you on the other side! Thank you!

    Read the article

  • Serving east/west coasts with Geoipdns and MaxMind GeoLite data

    - by netvope
    I want to serve east (west) coast visitors with my Virginia (California) server. To do so, I plan to use Geoipdns and the IP-to-location mappings from MaxMind. MaxMind provide two datasets for free: GeoLite Country and GeoLite City. However, neither of them has east/west coast regions defined. A possible solution is to write a script to combine all the IP ranges for the east/west coast cities in GeoLite City, but that sounds a little bit stupid. What is the best practice in doing this? Any suggestions or alternatives?

    Read the article

  • iText PdfPTables, document.add with writeSelectedRows

    - by J2SE31
    I am currently modifying an existing system that generates reports with iText and Java. The report template is as follows: Header1 (PdfPTable), Header2 (PdfPTable), Body (PdfPTable). I am currently using writeSelectedRows to display Header1 and Header2, but document.add is used to display the Body. The problem is that the system is set up to write the headers AFTER the body has already been displayed on the screen, so I am displaying my headers on top of my body content. My question is: how do I add my body table (using document.add) and have it display about halfway down the page (or at any predetermined vertical position)? That way I would have sufficient room to display my headers above the body table. Note: I believe the body table is using document.add to facilitate automatic paging if the body content is too large.

    Read the article

  • linq to xml query returning a list of all child elements

    - by Xience
    I have got an XML document which looks something like this:

        <Root>
          <Info>....</Info>
          <Info>....</Info>
          <response>....</response>
          <warning>....</warning>
          <Info>....</Info>
        </Root>

    How can I write a LINQ to XML query so that it returns an IEnumerable containing each child element, in this case all five child elements of Root, so that I can iterate over them? The order of the child elements is not definite, and neither is the number of times they may appear. Thanks in advance
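    For what it's worth, a minimal sketch of how this is commonly done: XContainer.Elements() called without a name argument yields every child element in document order, whatever its tag (the file name below is a hypothetical placeholder):

        using System;
        using System.Xml.Linq;

        XDocument doc = XDocument.Load("response.xml");   // hypothetical source

        // Elements() without a name returns all child elements of <Root>,
        // in document order, regardless of tag name or repetition count.
        foreach (XElement child in doc.Root.Elements())
        {
            Console.WriteLine("{0}: {1}", child.Name, child.Value);
        }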

    Read the article

  • 'System.Windows.Forms.AxHost+InvalidActiveXStateException' in WPF

    - by Gagan
    I am trying to make a WPF application using the Visio controls. I have a dll named AxInterop.Microsoft.Office.Interop.VisOcx and I reference it to instantiate an object of the type AxDrawingControl(). Now, as soon as I write this code:

        AxDrawingControl objDrawControl = new AxDrawingControl();

    and check the attributes of objDrawControl in the debugger, I see this message: "Exception of type 'System.Windows.Forms.AxHost+InvalidActiveXStateException' was thrown." Please help me to get this thing resolved! Please!
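    A common cause of InvalidActiveXStateException is touching an AxHost-based control before it has been sited and had its window handle created; in WPF that means placing it inside a WindowsFormsHost first. A sketch under that assumption (the Grid name is a hypothetical placeholder, and WindowsFormsIntegration.dll must be referenced):

        // Site the WinForms-wrapped ActiveX control in a WindowsFormsHost so its
        // underlying ActiveX instance is created before its members are used.
        var visioControl = new AxDrawingControl();
        var host = new System.Windows.Forms.Integration.WindowsFormsHost();
        host.Child = visioControl;        // parenting lets the control create its handle
        LayoutRoot.Children.Add(host);    // 'LayoutRoot' is a hypothetical Grid in the XAML

        // Only after the control has been created is it safe to read its
        // properties; before that, AxHost throws InvalidActiveXStateException.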

    Read the article
