Search Results

Search found 43847 results on 1754 pages for 'command line arguments'.


  • Compare two text files using LINQ?

    - by bala3569
    I have four text files in one folder and a pattern.txt to compare them against. pattern.txt contains insert, update, delete and drop. I need to compare each of the four text files against these patterns, and if a pattern matches any line in a file I have to write that line, with its line number, to a log file. I read the files using LINQ, and I need to do the comparison in LINQ and write the result to a text file with line numbers. Here is my code: var foldercontent = Directory.GetFiles(pathA) .Select(filename => File.ReadAllText(filename)) .Aggregate(new StringBuilder(), (sb, s) => sb.Append(s).Append(Environment.NewLine), sb => sb.ToString()); var pattern = File.ReadAllLines(pathB).Aggregate(new StringBuilder(), (sb, s) => sb.Append(s).Append(Environment.NewLine), sb => sb.ToString()); using (var dest = File.AppendText(Path.Combine(_logFolderPath, "log.txt"))) { //dest.WriteLine("LineNo : " + counter.ToString() + " : " + "" + line); } EDIT: I have already compared two text files in plain C#, but I need this in LINQ: while ((line = file.ReadLine()) != null) { if (line.IndexOf(line2, StringComparison.CurrentCultureIgnoreCase) != -1) { dest.WriteLine("LineNo : " + counter.ToString() + " : " + " " + line.TrimStart()); } counter++; } file.BaseStream.Seek(0, SeekOrigin.Begin); counter = 1;
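
    A hedged sketch of how that comparison might look as a single LINQ query (this is not the poster's code; pathA, pathB and _logFolderPath are the names used in the question, and File.ReadLines/File.AppendAllLines assume .NET 4):

        // Sketch only: check every line of every file in pathA against the
        // patterns in pathB and log each match with file name and line number.
        // Requires: using System; using System.IO; using System.Linq;
        var patterns = File.ReadAllLines(pathB)
                           .Where(p => !string.IsNullOrWhiteSpace(p))
                           .ToList();

        var matches =
            from file in Directory.GetFiles(pathA)
            from entry in File.ReadLines(file).Select((text, index) => new { text, index })
            where patterns.Any(p => entry.text.IndexOf(p, StringComparison.CurrentCultureIgnoreCase) >= 0)
            select Path.GetFileName(file) + " LineNo : " + (entry.index + 1) + " : " + entry.text.TrimStart();

        File.AppendAllLines(Path.Combine(_logFolderPath, "log.txt"), matches);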

    Read the article

  • How to use Linux csplit to chop up a massive XML file?

    - by Fred
    Hi everyone, I have a gigantic (4GB) XML file that I am currently breaking into chunks with the Linux "split" utility (every 25,000 lines, not by bytes). This usually works great (I end up with about 50 files), except some of the data descriptions have line breaks, so frequently the chunk files do not have the proper closing tags and my parser chokes halfway through processing. Example file (note: normally each "listing" XML node is supposed to be on its own line): <?xml version="1.0" encoding="UTF-8"?> <listings> <listing><date>2009-09-22</date><desc>This is a description WITHOUT line breaks and works fine with split</desc><more_tags>stuff</more_tags></listing> <listing><date>2009-09-22</date><desc>This is a really annoying description field WITH line breaks that screw the split function</desc><more_tags>stuff</more_tags></listing> </listings> Then sometimes my split ends up like <?xml version="1.0" encoding="UTF-8"?> <listings> <listing><date>2009-09-22</date><desc>This is a description WITHOUT line breaks and works fine with split</desc><more_tags>stuff</more_tags></listing> <listing><date>2009-09-22</date><desc>This is a really annoying description field WITH line breaks ... EOF So I have been reading about "csplit" and it sounds like it might solve this issue, but I can't seem to get the regular expression right. Basically I want the same output of about 50 files. Something like: csplit -k myfile.xml '/</listing>/' 25000 {50} Any help would be great. Thanks!

    Read the article

  • Java reflection appropriateness

    - by jsn
    This may be a fairly subjective question, but maybe not. My application contains a bunch of forms that are displayed to the user at different times. Each form is a class of its own. Typically the user clicks a button, which launches a new form. I have a convenience function that builds these buttons, you call it like this: buildButton( "button text", new SelectionAdapter() { @Override public void widgetSelected( SelectionEvent e ) { showForm( new TasksForm( args... ) ); } } ); I do this dozens of times, and it's really cumbersome having to make a SelectionAdapter every time. Really all I need for the button to know is what class to instantiate when it's clicked and what arguments to give the constructor, so I built a function that I call like this instead: buildButton( "button text", TasksForm.class, args... ); Where args is an arbitrary list of objects that you could use to instantiate TasksForm normally. It uses reflection to get a constructor from the class, match the argument list, and build an instance when it needs to. Most of the time I don't have to pass any arguments to the constructor at all. The downside is obviously that if I'm passing a bad set of arguments, it can't detect that at compilation time, so if it fails, a dialog is displayed at runtime. But it won't normally fail, and it'll be easy to debug if it does. I think this is much cleaner because I come from languages where the use of function and class literals is pretty common. But if you're a normal Java programmer, would seeing this freak you out, or would you appreciate not having to scan a zillion SelectionAdapters?
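
    The question is about Java, but for illustration only, here is a minimal sketch of the same idea in C#: a button factory that defers construction via reflection. Button, IForm, ShowForm and TasksForm are hypothetical stand-ins, not names from the question's codebase.

        // Sketch only (not the poster's code): store the target form type and the
        // constructor arguments, and build the form only when the button is clicked.
        // Requires: using System; using System.Windows.Forms; (or equivalent UI toolkit)
        Button BuildButton(string text, Type formType, params object[] args)
        {
            var button = new Button { Text = text };
            button.Click += (sender, e) =>
            {
                // Fails at runtime (not compile time) if args do not match a constructor.
                var form = (IForm)Activator.CreateInstance(formType, args);
                ShowForm(form);   // hypothetical helper
            };
            return button;
        }

        // Usage: BuildButton("button text", typeof(TasksForm), someArg);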

    Read the article

  • C# regex problem when comparing two files

    - by Mike
    Hi everyone, what I'm trying to do is open a huge list of files (about 40k records) and match them against the lines of a file that contains 2 million records, and if a line from file A matches a line in file B, write out that line. File A contains a bunch of file names without extensions and file B contains full file paths including extensions. I'm using this but I can't get it to work: string alphaFilePath = (@"C:\Documents and Settings\g\Desktop\Arrp\Find\natst_ready.txt"); List<string> alphaFileContent = new List<string>(); using (FileStream fs = new FileStream(alphaFilePath, FileMode.Open)) using (StreamReader rdr = new StreamReader(fs)) { while (!rdr.EndOfStream) { alphaFileContent.Add(rdr.ReadLine()); } } string betaFilePath = @"C:\Documents and Settings\g\Desktop\Arryup\Find\eble.txt"; StringBuilder sb = new StringBuilder(); using (FileStream fs = new FileStream(betaFilePath, FileMode.Open)) using (StreamReader rdr = new StreamReader(fs)) { while (!rdr.EndOfStream) { string betaFileLine = rdr.ReadLine(); string matchup = Regex.Match(alphaFileContent, @"(\\)(\\)(\\)(\\)(\\)(\\)(\\)(\\)(.*)(\.)").Groups[9].Value; if (alphaFileContent.Equals(matchup)) { File.AppendAllText(@"C:\array_tech.txt", betaFileLine); } } } This doesn't work because the alphaFileContent is a single line only, and I'm having a hard time figuring out how to get my regex to work on the file that contains all the file paths (betaFilePath). Here is a sample of the beta file path: C:\arres_i\Grn\Ora\SEC\DBZ_EX1\Nes\001\DZO-EX00001.txt Here is the line I'm trying to compare from my alpha file: DZO-EX00001
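
    A hedged sketch of one way this could be done without any regex at all (not the poster's code): take the file name without its extension from each path in the beta file and test it against a set built from the alpha file. The paths are the ones given in the question; File.ReadLines assumes .NET 4 (File.ReadAllLines works as well, at the cost of loading the whole beta file).

        // Sketch only: look up each beta path's extensionless file name in a
        // case-insensitive set of the names listed in the alpha file.
        // Requires: using System; using System.Collections.Generic; using System.IO;
        var wanted = new HashSet<string>(
            File.ReadAllLines(alphaFilePath),
            StringComparer.OrdinalIgnoreCase);

        using (var writer = new StreamWriter(@"C:\array_tech.txt", true /* append */))
        {
            foreach (var line in File.ReadLines(betaFilePath))
            {
                // e.g. "C:\arres_i\...\DZO-EX00001.txt" -> "DZO-EX00001"
                var name = Path.GetFileNameWithoutExtension(line);
                if (wanted.Contains(name))
                    writer.WriteLine(line);
            }
        }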

    Read the article

  • C++ design question, container of instances and pointers

    - by Tom
    Hi all, I'm wondering something. I have a class Polygon, which composes a vector of Line (another class): class Polygon { std::vector<Line> lines; public: const_iterator begin() const; const_iterator end() const; } On the other hand, I have a function that calculates a vector of pointers to lines, and based on those lines, should return a pointer to a Polygon. Polygon* foo(Polygon& p){ std::vector<Line> lines = bar (p.begin(),p.end()); return new Polygon(lines); } Here's the question: I can always add a Polygon (vector Is there a better way than dereferencing each element of the vector and assigning it to the existing vector container? //for line in vector<Line*> v //vcopy is an instance of vector<Line> vcopy.push_back(*(v.at(i)) I think not, but I don't really like that approach. Hopefully I will be able to convince the author of the class to change it, but I can't base my coding right now on that fact (and I'm scared of a performance hit). Thanks in advance.

    Read the article

  • How can I theme a custom form? (Drupal 6.x)

    - by Andrew
    Drupal 6.x. I have a custom form constructor inside my custom module which is invoked through an AJAX request. I'm attempting to theme this form with a template file residing in my theme directory. To that end, I've registered my theme inside the template.php file which resides in my theme folder. Here's how this file looks – function my_theme() { return array( 'searchdb' => array( 'arguments' => array('form' => NULL), 'template' => 'searchform', ) ); } And the following is an excerpt of the module code – function test_menu() { $my_form['searchdb'] = array( 'title' => 'Search db', 'page callback' => 'get_form', 'page arguments' => array(0), 'access arguments' => array('access content'), 'type' => MENU_CALLBACK, ); return $my_form; } function get_form($formtype){ switch($formtype){ case 'searchdb' : echo drupal_get_form('searchdb'); break; } } function searchdb(){ $form['customer_name'] = array( '#type' => 'textfield', '#title' => t('Customer Name'), '#size' => 50, '#attributes' => array('class' => 'name-textbox'), ); return $form; } As you can imagine, this is not working at all. Just to test whether my theme is even registered, I've also tested with a theme function, but it's not called. I've checked the template file name and form ID (through the outputted HTML source) and everything seems OK. I would be glad if anyone could point me in the right direction.

    Read the article

  • jQuery + Prototype question

    - by mikeyhill
    I recently inherited a site which is botched in all sorts of ways. I'm more of a PHP guy, and initially the JS was working just fine. I made no changes to the JavaScript or any of the include files, but after making a few content edits I'm getting errors from Firebug: a.dispatchEvent is not a function emptyFunction()protot...ects.js (line 2) emptyFunction()protot...ects.js (line 2) fireContentLoadedEvent()protot...ects.js (line 2) [Break on this error] var Prototype={Version:'1.6.0.2',Brows...pe,Enumerable);Element.addMethods(); protot...ects.js (line 2) this.m_eTarget.setStyle is not a function [Break on this error] this.m_eTarget.setStyle( { position: 'relative', overflow:'hidden'} ); protot...ects.js (line 43) uncaught exception: [Exception... "Component returned failure code: 0x80070057 (NS_ERROR_ILLEGAL_VALUE)" nsresult: "0x80070057 (NS_ERROR_ILLEGAL_VALUE)" location: "JS frame :: js/prototype_effects.js :: anonymous :: line 2" data: no] Googling around, I found several posts saying that jQuery and Prototype sometimes don't play well together and that rearranging the scripts could fix this issue; however, since I didn't touch these sections, I'm not sure where I even need to begin to debug. The previous developer incorporated a head.inc file which loads up Prototype and then Scriptaculous, and many of the pages are in a sub-template loading up jQuery for functions like Lightbox. The site is temporarily housed at http://dawn.mikeyhill.com Any help is appreciated.

    Read the article

  • How to solve a weird SWIG Python/C++ interfacing type error

    - by user2981648
    I want to use SWIG to expose a simple C++ function to Python and use the "scipy.integrate.quadrature" function to calculate the integral, but Python 2.7 reports a type error. Do you guys know what is going on here? Thanks a lot. Furthermore, "scipy.integrate.quad" runs smoothly, so is there something special about the "scipy.integrate.quadrature" function? The code is the following: File "testfunctions.h": #ifndef TESTFUNCTIONS_H #define TESTFUNCTIONS_H double test_square(double x); #endif File "testfunctions.cpp": #include "testfunctions.h" double test_square(double x) { return x * x; } File "swig_test.i" : /* File : swig_test.i */ %module swig_test %{ #include "testfunctions.h" %} /* Let's just grab the original header file here */ %include "testfunctions.h" File "test.py": import scipy.integrate import _swig_test print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.) Error info: UMD has deleted: _swig_test Traceback (most recent call last): File "<stdin>", line 1, in <module> File "C:\Python27\lib\site-packages\spyderlib\widgets\externalshell\sitecustomize.py", line 523, in runfile execfile(filename, namespace) File "D:\data\haitaliu\Desktop\Projects\swig_test\Release\test.py", line 4, in <module> print scipy.integrate.quadrature(_swig_test.test_square, 0., 1.) File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 161, in quadrature newval = fixed_quad(vfunc, a, b, (), n)[0] File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 61, in fixed_quad return (b-a)/2.0*sum(w*func(y,*args),0), None File "C:\Python27\lib\site-packages\scipy\integrate\quadrature.py", line 90, in vfunc return func(x, *args) TypeError: in method 'test_square', argument 1 of type 'double'

    Read the article

  • Why doesn't `stdin.read()` read entire buffer?

    - by Shookie
    I've got the following code: def get_input(self): """ Reads command from stdin, returns its JSON form """ json_string = sys.stdin.read() print("json string is: "+json_string) json_data =json.loads(json_string) return json_data It reads a json string that was sent to it from another process. The json is read from stdin. For some reason I get the following output: json string is: <Some json here> json string is: Traceback (most recent call last): File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 63, in <module> manager.accept_commands() File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 49, in accept_commands json_data = self.get_input() File "/Users/Matan/Documents/workspace/ProjectSH/addonmanager/addon_manager.py", line 42, in get_input json_data =json.loads(json_string) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/__init__.py", line 338, in loads return _default_decoder.decode(s) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 365, in decode obj, end = self.raw_decode(s, idx=_w(s, 0).end()) File "/System/Library/Frameworks/Python.framework/Versions/2.7/lib/python2.7/json/decoder.py", line 383, in raw_decode raise ValueError("No JSON object could be decoded") So for some reason it reads an empty string from stdin instead of reading only the json. I've checked, and the code that writes to this process's stdin writes to it only once. What's wrong here?

    Read the article

  • Sorting a very large text file in Java

    - by Alice
    Hi, I have a large text file I need to sort in Java. The format is: word [tab] frequency [new line] The algorithm for sorting is: Read some of the file, filtering for purely alphabetic words. Once you have X number of alphabetic words, call Collections.sort and write the result to a file. Repeat until you have finished reading the file. Start reading two sorted files, comparing line by line for the word with the higher frequency, and writing at the same time to a new file so as not to load much into memory. Repeat until all files are merged into one large file. Right now I've divided the large file into smaller ones (sorted by descending frequency) with 10,000 lines each. I know I need to somehow merge these files back together, but I'm not sure how to go about this. I've created a LinkedList to keep track of all the files created. The algorithm says to compare each line in the two files, except I've tried a case where, say, file1 = 8,6,5,3,1 and file2 = 9,8,8,8,8. Then if I compare them line by line I would get file3 = 9,8,8,6,8,5,8,3,8,1, which is incorrectly sorted (they should be in decreasing order). I think I'm misunderstanding some part of the algorithm. If someone could point out what I should do instead, I'd greatly appreciate it. Thanks. edit: Yes, this is an assignment. We aren't allowed to increase memory, unfortunately :(
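
    The step the algorithm describes is a merge, not a zip: after writing a line from one file, advance only that file and compare its next line against the line still held from the other file. The assignment is in Java, but a rough illustration of that loop (my sketch, shown here in C# with hypothetical file names and a hypothetical Frequency helper) looks like this:

        // Illustration only (not Java, and not the poster's code): merge two files
        // that are each already sorted by decreasing frequency.
        // Line format assumed: word \t frequency
        using System;
        using System.IO;

        class MergeTwoSortedFiles
        {
            // Hypothetical helper: frequency is the tab-separated second column.
            static int Frequency(string line)
            {
                return int.Parse(line.Split('\t')[1]);
            }

            static void Main()
            {
                using (var a = new StreamReader("sorted1.txt"))     // hypothetical names
                using (var b = new StreamReader("sorted2.txt"))
                using (var output = new StreamWriter("merged.txt"))
                {
                    string lineA = a.ReadLine();
                    string lineB = b.ReadLine();
                    while (lineA != null || lineB != null)
                    {
                        // Take from A when B is exhausted, or when A's frequency is higher.
                        if (lineB == null || (lineA != null && Frequency(lineA) >= Frequency(lineB)))
                        {
                            output.WriteLine(lineA);
                            lineA = a.ReadLine();   // advance only the file we took from
                        }
                        else
                        {
                            output.WriteLine(lineB);
                            lineB = b.ReadLine();
                        }
                    }
                }
            }
        }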

    Read the article

  • Vim: How do I tell where a function is defined?

    - by sixtyfootersdude
    I just installed MacVim yesterday and I installed vim-latex today. One of the menu items is calling a broken function (TeX-Suite -> view). When I click on the menu item it makes this call: :silent! call Tex_ViewLatex() Question: Where can I find that function? Is there some way to figure out where it is defined? Just for curiosity's sake I removed the silent part and ran this: :call Tex_ViewLatex() Which produces: Error detected while processing function Tex_ViewLaTeX: line 34: E121: Undefined variable: s:viewer E116: Invalid arguments for function strlen(s:viewer) E15: Invalid expression: strlen(s:viewer) line 39: E121: Undefined variable: appOpt E15: Invalid expression: 'open '.appOpt.s:viewer.' $*.'.s:target line 79: E121: Undefined variable: execString E116: Invalid arguments for function substitute(execString, '\V$*', mainfname, 'g' ) E15: Invalid expression: substitute(execString, '\V$*', mainfname, 'g') line 80: E121: Undefined variable: execString E116: Invalid arguments for function Tex_Debug line 82: E121: Undefined variable: execString E15: Invalid expression: 'silent! !'.execString Press ENTER or type command to continue I suspect that if I could see the source of the function I could figure out which inputs are bad or what it is looking for. Thanks.

    Read the article

  • Must JsUnit Cases Reside Under the Same Directory as JsUnit?

    - by chernevik
    I have installed JsUnit and a test case as follows: /home/chernevik/Programming/JavaScript/jsunit /home/chernevik/Programming/JavaScript/jsunit/testRunner.html /home/chernevik/Programming/JavaScript/jsunit/myTests/lineTestAbs.html /home/chernevik/Programming/JavaScript/lineTestAbs.html When I open the test runner in a browser as a file, and test lineTestAbs.html from the jsunit/myTests directory, it passes. When I test the same file from the JavaScript directory, the test runner times out, asking if the file exists or is a test page. Questions: Am I doing something wrong here, or is this the expected behavior? Is it possible to put test cases in a different directory structure, and if so what is the proper path reference to to JsUnitCore.js? Would JsUnit behave differently if the files were retrieved from an HTTP server? <html> <head> <title>Test Page line(m, x, b)</title> <script language="JavaScript" src="/home/chernevik/Programming/JavaScript/jsunit/app/jsUnitCore.js"></script> <script language="JavaScript"> function line(m, x, b) { return m*x + b; } function testCalculationIsValid() { assertEquals("zero intercept", 10, line(5, 2, 0)); assertEquals("zero slope", 5, line(0, 2, 5)); assertEquals("at x = 10", 25, line(2, 10, 5)); } </script> </head> <body> This pages tests line(m, x, b). </body> </html>

    Read the article

  • What happens when you stop the VS debugger?

    - by mare
    If I have a line like this: ContentRepository.Update(existing); which goes into the datastore repository to update some object, and I have a try..catch block in this Update function like this: string file = XmlProvider.DataStorePhysicalPath + o.GetType().Name + Path.DirectorySeparatorChar + o.Slug + ".xml"; DataContractSerializer dcs = new DataContractSerializer(typeof (BaseContentObject)); using ( XmlDictionaryWriter myWriter = XmlDictionaryWriter.CreateTextWriter(new FileStream(file, FileMode.Truncate, FileAccess.Write), Encoding.UTF8)) { try { dcs.WriteObject(myWriter, o); myWriter.Close(); } catch (Exception) { // if anything goes wrong, delete the created file if (File.Exists(file)) File.Delete(file); if(myWriter.WriteState!=WriteState.Closed) myWriter.Close(); } } then why would Visual Studio go on to call Update() if I click "Stop" in the debugging session on the above line? For instance, I came to that line by stepping line by line with F10, and now I'm on that line (colored yellow) and I press Stop. Apparently what happens is that VS goes on to execute the Update() method, somehow decides something has gone wrong, and drops into the "catch" and deletes the file, which is wrong, because I want my catch to run when there is a true exception, not when I debug a program and force it to stop.
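
    A hedged sketch of one way to keep that cleanup from firing on anything other than the failures it was written for: catch specific exception types instead of everything. The exception types shown are my assumption about what the serializer and file I/O might throw, not something stated in the question.

        // Sketch only (shortened from the question's code): narrow the catch so
        // that only expected failures trigger the file delete.
        // Requires: using System.IO; using System.Runtime.Serialization;
        try
        {
            dcs.WriteObject(myWriter, o);
            myWriter.Close();
        }
        catch (IOException)            // the file could not be written
        {
            if (File.Exists(file)) File.Delete(file);
            throw;                     // assumption: the caller still wants to know
        }
        catch (SerializationException) // the object could not be serialized
        {
            if (File.Exists(file)) File.Delete(file);
            throw;
        }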

    Read the article

  • .NET file writing and string splitting issues

    - by sagar
    I have a requirement where a file should be split using a given character. The default splitting options are CRLF and LF; in these two cases I am splitting the line by \r\n and \r respectively. I also have a requirement that any size of file should be processed (processing is basically inserting a given string into the file at a given position). For this I am reading the file in chunks of 1024 bytes and then applying the string.Split() method. The Split() method gives options for ignoring empty entries or None. I have to add the line-break characters back to each line; for this I am using a binary writer and writing the byte array to the new file. Issues: 1) When the line break is CRLF and the split option is None, white spaces are also added to the split array. When the second option (ignore empty entries) is given, CRLF works properly. 2) But the ignore option creates other problems: as I am reading the file byte by byte, I can't ignore a white space. 3) When the line-break character is something other than the default (e.g. '|'), a null value is prepended to the resulting line. Can anybody give a solution to these issues?
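
    One hedged sketch of keeping the break characters while splitting (not the poster's code): Regex.Split includes the text of any capturing group in the result array, so the delimiter does not have to be re-added by hand. The delimiter values and sample text below are only examples.

        // Sketch only: split on CRLF, LF or a custom character such as '|'
        // while keeping the delimiter, by putting it in a capturing group.
        using System;
        using System.Text.RegularExpressions;

        class SplitDemo
        {
            static void Main()
            {
                string chunk = "first line\r\nsecond line|third line";

                // Capturing group => each matched delimiter is returned as its own element.
                string[] parts = Regex.Split(chunk, @"(\r\n|\n|\|)");

                foreach (string part in parts)
                {
                    if (part.Length == 0) continue;   // skip empty entries
                    Console.WriteLine(part == "\r\n" || part == "\n" || part == "|"
                        ? "[delimiter]"
                        : part);
                }
            }
        }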

    Read the article

  • Ordering a set of lines so that they follow one from the other

    - by george
    Line#  Lat.            Lon.
    1a     1573313.042320  6180142.720910
    ..     ..              ..
    1z     1569171.442602  6184932.867930
    3a     1569171.764930  6184934.045650
    ..     ..              ..
    3z     1570412.815667  6190358.086690
    5a     1570605.667770  6190253.392920
    ..     ..              ..
    5z     1570373.562963  6190464.146120
    4a     1573503.842910  6189595.286870
    ..     ..              ..
    4z     1570690.065390  6190218.190575

    Each pair of rows above (a..z) gives the first and last coordinate pair of a number of points which together define a line. The lines are not listed in sequence because I can't tell what the correct sequence is just by looking at the coordinates (only by looking at the lines on a map). Hence my question: how can I find out programmatically (in Python) what the correct sequence is, so I can join the lines into one long line, keeping in mind the following problem: a 'z' point (the last point in a line) may well be the first point if the line is described as proceeding in the opposite direction to the other lines, e.g. one line may go from left to right, another from right to left (or top to bottom and vice versa). Thank you in advance...

    Read the article

  • SQL SERVER – Finding Shortest Distance between Two Shapes using Spatial Data Classes – Ramsetu or Adam’s Bridge

    - by pinaldave
    Recently I was reading an excellent blog post by Lenni Lobel on spatial databases. He has written about the very interesting ShortestLineTo function in the spatial data classes. I really love this new feature for finding the shortest distance between two shapes in SQL Server. Following is an example, the same one Lenni discusses in his blog article. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -26, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) GO When you run this script, SQL Server finds the shortest distance between the two shapes and draws the line. We are using STBuffer so we can see the connecting line clearly. Now let us modify one of the objects and see how the connecting shortest line behaves. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -22, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) GO Now once again let us modify the script and see how ShortestLineTo works. DECLARE @Shape1 geometry = 'POLYGON ((-20 -30, -3 -30, 14 -28, 20 -40, -20 -30))' DECLARE @Shape2 geometry = 'POLYGON ((-18 -20, 0 -10, 4 -12, 10 -20, 2 -18, -18 -20))' SELECT @Shape1 UNION ALL SELECT @Shape2 UNION ALL SELECT @Shape1.ShortestLineTo(@Shape2).STBuffer(.25) SELECT @Shape1.STDistance(@Shape2) GO You can see that as the objects change, the shortest lines move to the appropriate place. I think that even though this is a very small feature, it is really cool to know. While I was working on this example, I suddenly thought about the distance between Sri Lanka and India. The distance is very short; in fact, it is less than 30 km by sea. I decided to map India and Sri Lanka using the spatial data classes. To my surprise, the plotted shortest line is the same as Adam's Bridge, or Ramsetu. Adam's Bridge starts as a chain of shoals from the Dhanushkodi tip of India's Pamban Island and ends at Sri Lanka's Mannar Island. Geological evidence suggests that this bridge is a former land connection between India and Sri Lanka. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Function, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Spatial Database, SQL Spatial

    Read the article

  • PowerShell Script to Create PowerShell Profile

    - by Brian Jackett
    Utilizing a PowerShell profile can help any PowerShell user save time getting up and running with their work.  For those unfamiliar a PowerShell profile is a file you can store any PowerShell commands that you want to run when you fire up a PowerShell console (or ISE.)  In my typical profiles (example here) I load assemblies (like SharePoint 2007 DLL), set aliases, set environment variable values (such as max history), and perform other general customizations to make my work easier.  Below is a sample script that will check to see if a PowerShell profile (Console or ISE) exists and create it if not found.  The .ps1 script file version can also be downloaded from my SkyDrive here. Note: if downloading the .ps1 file, be sure you have enabled unsigned scripts to run on your machine as I have not signed mine.   $folderExists = test-path -path $Env:UserProfile\Documents\WindowsPowerShell if($folderExists -eq $false) { new-item -type directory -path $Env:UserProfile\Documents\WindowsPowerShell > $null echo "Containing folder for profile created at: $Env:UserProfile\Documents\WindowsPowerShell" }   $profileExists = test-path -path $profile if($profileExists -eq $false) { new-item -type file -path $profile > $null echo "Profile file created at: $profile" }     A few things to note while going through the above script. $Env:UserProfile represents the personal user folder (c:\documents and settings…. on older OSes like XP and c:\Users… on Win 7) so it adapts to whichever OS you are running but was tested against Windows 7 and Windows Server 2008 R2. “ > $null” sends the command to a null stream.  Essentially this is equivalent to DOS scripting of “@ECHO OFF” by suppressing echoing the command just run, but only for the specific command it is appended to.  I haven’t yet found a better way to accomplish command suppression, but this is definitely not required for the script to work. $profile represent a standard variable to the file path of the profile file.  It is dynamic based on whether you are running PowerShell Console or ISE.   Conclusion     In less than two weeks (Apr. 10th to be exact) I’ll be heading down to SharePoint Saturday Charlotte (SPSCLT) to give two presentations on using PowerShell with SharePoint.  Since I’ll be prepping a lot of material for PowerShell I thought it only appropriate to pass along this nice little script I recently created.  If you’ve never used a PowerShell profile this is a great chance to start using one.  If you’ve been using a profile before, perhaps you learned a trick or two to add to your toolbox.  For those of you in the Charlotte, NC area sign up for the SharePoint Saturday and see some great content and community with great folks.         -Frog Out

    Read the article

  • How to Specify AssemblyKeyFile Attribute in .NET Assembly and Issues

    How do you specify a strong name key file in an assembly? Answer: You can specify the .snk file information using the following line: [assembly: AssemblyKeyFile(@"c:\Key2.snk")] Where do you specify the strong name key file (.snk file)? Answer: You have two options for specifying the AssemblyKeyFile information. 1. In a class 2. In AssemblyInfo.cs [assembly: AssemblyKeyFile(@"c:\Key2.snk")] 1. In a class, you must specify the above line before defining the namespace of the class and after all the imports or usings. Example: See line 7 in the sample class below. using System;using System.Collections.Generic;using System.Linq;using System.Text;using System.Reflection;[assembly: AssemblyKeyFile(@"c:\Key1.snk")]namespace Csharp3Part1{ class Person { public string GetName() { return "Smith"; } }} 2. In AssemblyInfo.cs. You can also specify the assembly information in AssemblyInfo.cs. Example: See line 16 in the sample AssemblyInfo.cs below. using System.Reflection;using System.Runtime.CompilerServices;using System.Runtime.InteropServices;// General Information about an assembly is controlled through the following// set of attributes. Change these attribute values to modify the information// associated with an assembly.[assembly: AssemblyTitle("Csharp3Part1")][assembly: AssemblyDescription("")][assembly: AssemblyConfiguration("")][assembly: AssemblyCompany("Deloitte")][assembly: AssemblyProduct("Csharp3Part1")][assembly: AssemblyCopyright("Copyright © Deloitte 2009")][assembly: AssemblyTrademark("")][assembly: AssemblyCulture("")][assembly: AssemblyKeyFile(@"c:\Key1.snk")]// Setting ComVisible to false makes the types in this assembly not visible// to COM components. If you need to access a type in this assembly from// COM, set the ComVisible attribute to true on that type.[assembly: ComVisible(false)]// The following GUID is for the ID of the typelib if this project is exposed to COM[assembly: Guid("4350396f-1a5c-4598-a79f-2e1f219654f3")]// Version information for an assembly consists of the following four values://// Major Version// Minor Version// Build Number// Revision//// You can specify all the values or you can default the Build and Revision Numbers// by using the '*' as shown below:// [assembly: AssemblyVersion("1.0.*")][assembly: AssemblyVersion("1.0.0.0")][assembly: AssemblyFileVersion("1.0.0.0")] Issues: You should not specify this in the following ways: 1. In multiple classes. 2. In both a class and AssemblyInfo.cs. If you do it wrong in either of the above ways, the Visual Studio or C#/VB.NET compiler shows the following error: Duplicate 'AssemblyKeyFile' attribute and warning: Use command line option '/keyfile' or appropriate project settings instead of 'AssemblyKeyFile' To avoid this, please specify your key file information only once, either in one class or in the AssemblyInfo.cs file. It is suggested to specify it in the AssemblyInfo.cs file. You might also encounter errors like: Error: type or namespace name 'AssemblyKeyFileAttribute' and 'AssemblyKeyFile' could not be found. Solution: Please find it here.

    Read the article

  • Node.js Adventure - Host Node.js on Windows Azure Worker Role

    - by Shaun
    In my previous post I demonstrated about how to develop and deploy a Node.js application on Windows Azure Web Site (a.k.a. WAWS). WAWS is a new feature in Windows Azure platform. Since it’s low-cost, and it provides IIS and IISNode components so that we can host our Node.js application though Git, FTP and WebMatrix without any configuration and component installation. But sometimes we need to use the Windows Azure Cloud Service (a.k.a. WACS) and host our Node.js on worker role. Below are some benefits of using worker role. - WAWS leverages IIS and IISNode to host Node.js application, which runs in x86 WOW mode. It reduces the performance comparing with x64 in some cases. - WACS worker role does not need IIS, hence there’s no restriction of IIS, such as 8000 concurrent requests limitation. - WACS provides more flexibility and controls to the developers. For example, we can RDP to the virtual machines of our worker role instances. - WACS provides the service configuration features which can be changed when the role is running. - WACS provides more scaling capability than WAWS. In WAWS we can have at most 3 reserved instances per web site while in WACS we can have up to 20 instances in a subscription. - Since when using WACS worker role we starts the node by ourselves in a process, we can control the input, output and error stream. We can also control the version of Node.js.   Run Node.js in Worker Role Node.js can be started by just having its execution file. This means in Windows Azure, we can have a worker role with the “node.exe” and the Node.js source files, then start it in Run method of the worker role entry class. Let’s create a new windows azure project in Visual Studio and add a new worker role. Since we need our worker role execute the “node.exe” with our application code we need to add the “node.exe” into our project. Right click on the worker role project and add an existing item. By default the Node.js will be installed in the “Program Files\nodejs” folder so we can navigate there and add the “node.exe”. Then we need to create the entry code of Node.js. In WAWS the entry file must be named “server.js”, which is because it’s hosted by IIS and IISNode and IISNode only accept “server.js”. But here as we control everything we can choose any files as the entry code. For example, I created a new JavaScript file named “index.js” in project root. Since we created a C# Windows Azure project we cannot create a JavaScript file from the context menu “Add new item”. We have to create a text file, and then rename it to JavaScript extension. After we added these two files we should set their “Copy to Output Directory” property to “Copy Always”, or “Copy if Newer”. Otherwise they will not be involved in the package when deployed. Let’s paste a very simple Node.js code in the “index.js” as below. As you can see I created a web server listening at port 12345. 1: var http = require("http"); 2: var port = 12345; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then we need to start “node.exe” with this file when our worker role was started. This can be done in its Run method. I found the Node.js and entry JavaScript file name, and then create a new process to run it. Our worker role will wait for the process to be exited. 
If everything is OK once our web server was opened the process will be there listening for incoming requests, and should not be terminated. The code in worker role would be like this. 1: public override void Run() 2: { 3: // This is a sample worker implementation. Replace with your logic. 4: Trace.WriteLine("NodejsHost entry point called", "Information"); 5:  6: // retrieve the node.exe and entry node.js source code file name. 7: var node = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot\node.exe"); 8: var js = "index.js"; 9:  10: // prepare the process starting of node.exe 11: var info = new ProcessStartInfo(node, js) 12: { 13: CreateNoWindow = false, 14: ErrorDialog = true, 15: WindowStyle = ProcessWindowStyle.Normal, 16: UseShellExecute = false, 17: WorkingDirectory = Environment.ExpandEnvironmentVariables(@"%RoleRoot%\approot") 18: }; 19: Trace.WriteLine(string.Format("{0} {1}", node, js), "Information"); 20:  21: // start the node.exe with entry code and wait for exit 22: var process = Process.Start(info); 23: process.WaitForExit(); 24: } Then we can run it locally. In the computer emulator UI the worker role started and it executed the Node.js, then Node.js windows appeared. Open the browser to verify the website hosted by our worker role. Next let’s deploy it to azure. But we need some additional steps. First, we need to create an input endpoint. By default there’s no endpoint defined in a worker role. So we will open the role property window in Visual Studio, create a new input TCP endpoint to the port we want our website to use. In this case I will use 80. Even though we created a web server we should add a TCP endpoint of the worker role, since Node.js always listen on TCP instead of HTTP. And then changed the “index.js”, let our web server listen on 80. 1: var http = require("http"); 2: var port = 80; 3:  4: http.createServer(function (req, res) { 5: res.writeHead(200, { "Content-Type": "text/plain" }); 6: res.end("Hello World\n"); 7: }).listen(port); 8:  9: console.log("Server running at port %d", port); Then publish it to Windows Azure. And then in browser we can see our Node.js website was running on WACS worker role. We may encounter an error if we tried to run our Node.js website on 80 port at local emulator. This is because the compute emulator registered 80 and map the 80 endpoint to 81. But our Node.js cannot detect this operation. So when it tried to listen on 80 it will failed since 80 have been used.   Use NPM Modules When we are using WAWS to host Node.js, we can simply install modules we need, and then just publish or upload all files to WAWS. But if we are using WACS worker role, we have to do some extra steps to make the modules work. Assuming that we plan to use “express” in our application. Firstly of all we should download and install this module through NPM command. But after the install finished, they are just in the disk but not included in the worker role project. If we deploy the worker role right now the module will not be packaged and uploaded to azure. Hence we need to add them to the project. On solution explorer window click the “Show all files” button, select the “node_modules” folder and in the context menu select “Include In Project”. But that not enough. We also need to make all files in this module to “Copy always” or “Copy if newer”, so that they can be uploaded to azure with the “node.exe” and “index.js”. This is painful step since there might be many files in a module. 
So I created a small tool which can update a C# project file, make its all items as “Copy always”. The code is very simple. 1: static void Main(string[] args) 2: { 3: if (args.Length < 1) 4: { 5: Console.WriteLine("Usage: copyallalways [project file]"); 6: return; 7: } 8:  9: var proj = args[0]; 10: File.Copy(proj, string.Format("{0}.bak", proj)); 11:  12: var xml = new XmlDocument(); 13: xml.Load(proj); 14: var nsManager = new XmlNamespaceManager(xml.NameTable); 15: nsManager.AddNamespace("pf", "http://schemas.microsoft.com/developer/msbuild/2003"); 16:  17: // add the output setting to copy always 18: var contentNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:Content", nsManager); 19: UpdateNodes(contentNodes, xml, nsManager); 20: var noneNodes = xml.SelectNodes("//pf:Project/pf:ItemGroup/pf:None", nsManager); 21: UpdateNodes(noneNodes, xml, nsManager); 22: xml.Save(proj); 23:  24: // remove the namespace attributes 25: var content = xml.InnerXml.Replace("<CopyToOutputDirectory xmlns=\"\">", "<CopyToOutputDirectory>"); 26: xml.LoadXml(content); 27: xml.Save(proj); 28: } 29:  30: static void UpdateNodes(XmlNodeList nodes, XmlDocument xml, XmlNamespaceManager nsManager) 31: { 32: foreach (XmlNode node in nodes) 33: { 34: var copyToOutputDirectoryNode = node.SelectSingleNode("pf:CopyToOutputDirectory", nsManager); 35: if (copyToOutputDirectoryNode == null) 36: { 37: var n = xml.CreateNode(XmlNodeType.Element, "CopyToOutputDirectory", null); 38: n.InnerText = "Always"; 39: node.AppendChild(n); 40: } 41: else 42: { 43: if (string.Compare(copyToOutputDirectoryNode.InnerText, "Always", true) != 0) 44: { 45: copyToOutputDirectoryNode.InnerText = "Always"; 46: } 47: } 48: } 49: } Please be careful when use this tool. I created only for demo so do not use it directly in a production environment. Unload the worker role project, execute this tool with the worker role project file name as the command line argument, it will set all items as “Copy always”. Then reload this worker role project. Now let’s change the “index.js” to use express. 1: var express = require("express"); 2: var app = express(); 3:  4: var port = 80; 5:  6: app.configure(function () { 7: }); 8:  9: app.get("/", function (req, res) { 10: res.send("Hello Node.js!"); 11: }); 12:  13: app.get("/User/:id", function (req, res) { 14: var id = req.params.id; 15: res.json({ 16: "id": id, 17: "name": "user " + id, 18: "company": "IGT" 19: }); 20: }); 21:  22: app.listen(port); Finally let’s publish it and have a look in browser.   Use Windows Azure SQL Database We can use Windows Azure SQL Database (a.k.a. WACD) from Node.js as well on worker role hosting. Since we can control the version of Node.js, here we can use x64 version of “node-sqlserver” now. This is better than if we host Node.js on WAWS since it only support x86. Just install the “node-sqlserver” module from NPM, copy the “sqlserver.node” from “Build\Release” folder to “Lib” folder. Include them in worker role project and run my tool to make them to “Copy always”. Finally update the “index.js” to use WASD. 
1: var express = require("express"); 2: var sql = require("node-sqlserver"); 3:  4: var connectionString = "Driver={SQL Server Native Client 10.0};Server=tcp:{SERVER NAME}.database.windows.net,1433;Database={DATABASE NAME};Uid={LOGIN}@{SERVER NAME};Pwd={PASSWORD};Encrypt=yes;Connection Timeout=30;"; 5: var port = 80; 6:  7: var app = express(); 8:  9: app.configure(function () { 10: app.use(express.bodyParser()); 11: }); 12:  13: app.get("/", function (req, res) { 14: sql.open(connectionString, function (err, conn) { 15: if (err) { 16: console.log(err); 17: res.send(500, "Cannot open connection."); 18: } 19: else { 20: conn.queryRaw("SELECT * FROM [Resource]", function (err, results) { 21: if (err) { 22: console.log(err); 23: res.send(500, "Cannot retrieve records."); 24: } 25: else { 26: res.json(results); 27: } 28: }); 29: } 30: }); 31: }); 32:  33: app.get("/text/:key/:culture", function (req, res) { 34: sql.open(connectionString, function (err, conn) { 35: if (err) { 36: console.log(err); 37: res.send(500, "Cannot open connection."); 38: } 39: else { 40: var key = req.params.key; 41: var culture = req.params.culture; 42: var command = "SELECT * FROM [Resource] WHERE [Key] = '" + key + "' AND [Culture] = '" + culture + "'"; 43: conn.queryRaw(command, function (err, results) { 44: if (err) { 45: console.log(err); 46: res.send(500, "Cannot retrieve records."); 47: } 48: else { 49: res.json(results); 50: } 51: }); 52: } 53: }); 54: }); 55:  56: app.get("/sproc/:key/:culture", function (req, res) { 57: sql.open(connectionString, function (err, conn) { 58: if (err) { 59: console.log(err); 60: res.send(500, "Cannot open connection."); 61: } 62: else { 63: var key = req.params.key; 64: var culture = req.params.culture; 65: var command = "EXEC GetItem '" + key + "', '" + culture + "'"; 66: conn.queryRaw(command, function (err, results) { 67: if (err) { 68: console.log(err); 69: res.send(500, "Cannot retrieve records."); 70: } 71: else { 72: res.json(results); 73: } 74: }); 75: } 76: }); 77: }); 78:  79: app.post("/new", function (req, res) { 80: var key = req.body.key; 81: var culture = req.body.culture; 82: var val = req.body.val; 83:  84: sql.open(connectionString, function (err, conn) { 85: if (err) { 86: console.log(err); 87: res.send(500, "Cannot open connection."); 88: } 89: else { 90: var command = "INSERT INTO [Resource] VALUES ('" + key + "', '" + culture + "', N'" + val + "')"; 91: conn.queryRaw(command, function (err, results) { 92: if (err) { 93: console.log(err); 94: res.send(500, "Cannot retrieve records."); 95: } 96: else { 97: res.send(200, "Inserted Successful"); 98: } 99: }); 100: } 101: }); 102: }); 103:  104: app.listen(port); Publish to azure and now we can see our Node.js is working with WASD through x64 version “node-sqlserver”.   Summary In this post I demonstrated how to host our Node.js in Windows Azure Cloud Service worker role. By using worker role we can control the version of Node.js, as well as the entry code. And it’s possible to do some pre jobs before the Node.js application started. It also removed the IIS and IISNode limitation. I personally recommended to use worker role as our Node.js hosting. But there are some problem if you use the approach I mentioned here. The first one is, we need to set all JavaScript files and module files as “Copy always” or “Copy if newer” manually. The second one is, in this way we cannot retrieve the cloud service configuration information. 
For example, we defined the endpoint in worker role property but we also specified the listening port in Node.js hardcoded. It should be changed that our Node.js can retrieve the endpoint. But I can tell you it won’t be working here. In the next post I will describe another way to execute the “node.exe” and Node.js application, so that we can get the cloud service configuration in Node.js. I will also demonstrate how to use Windows Azure Storage from Node.js by using the Windows Azure Node.js SDK.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Hyperlinked, externalized source code documentation

    - by Dave Jarvis
    Why do we still embed natural language descriptions of source code (i.e., the reason why a line of code was written) within the source code, rather than as a separate document? Given the expansive real-estate afforded to modern development environments (high-resolution monitors, dual-monitors, etc.), an IDE could provide semi-lock-step panels wherein source code is visually separated from -- but intrinsically linked to -- its corresponding comments. For example, developers could write source code comments in a hyper-linked markup language (linking to additional software requirements), which would simultaneously prevent documentation from cluttering the source code. What shortcomings would inhibit such a software development mechanism? A mock-up to help clarify the question: When the cursor is at a particular line in the source code (shown with a blue background, above), the documentation that corresponds to the line at the cursor is highlighted (i.e., distinguished from the other details). As noted in the question, the documentation would stay in lock-step with the source code as the cursor jumps through the source code. A hot-key could switch between "documentation mode" and "development mode". Potential advantages include: More source code and more documentation on the screen(s) at once Ability to edit documentation independently of source code (regardless of language?) Write documentation and source code in parallel without merge conflicts Real-time hyperlinked documentation with superior text formatting Quasi-real-time machine translation into different natural languages Every line of code can be clearly linked to a task, business requirement, etc. Documentation could automatically timestamp when each line of code was written (metrics) Dynamic inclusion of architecture diagrams, images to explain relations, etc. Single-source documentation (e.g., tag code snippets for user manual inclusion). Note: The documentation window can be collapsed Workflow for viewing or comparing source files would not be affected How the implementation happens is a detail; the documentation could be: kept at the end of the source file; split into two files by convention (filename.c, filename.c.doc); or fully database-driven By hyperlinked documentation, I mean linking to external sources (such as StackOverflow or Wikipedia) and internal documents (i.e., a wiki on a subdomain that could cross-reference business requirements documentation) and other source files (similar to JavaDocs). Related thread: What's with the aversion to documentation in the industry?

    Read the article

  • ASP.NET Dynamic Data Deployment Error

    - by rajbk
    You have an ASP.NET 3.5 dynamic data website that works great on your local box. When you deploy it to your production machine and turn on debug, you get the YSD Server Error in '/MyPath/MyApp' Application. Parser Error Description: An error occurred during the parsing of a resource required to service this request. Please review the following specific parse error details and modify your source file appropriately. Parser Error Message: Unknown server tag 'asp:DynamicDataManager'. Source Error: Line 5:  Line 6:  <asp:Content ID="Content1" ContentPlaceHolderID="ContentPlaceHolder1" Runat="Server"> Line 7:      <asp:DynamicDataManager ID="DynamicDataManager1" runat="server" AutoLoadForeignKeys="true" /> Line 8:  Line 9:      <h2><%= table.DisplayName%></h2> Probable Causes The server does not have .NET 3.5 SP1, which includes ASP.NET Dynamic Data, installed. Download it here. The third tagPrefix shown below is missing from web.config <pages> <controls> <add tagPrefix="asp" namespace="System.Web.UI" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.UI.WebControls" assembly="System.Web.Extensions, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> <add tagPrefix="asp" namespace="System.Web.DynamicData" assembly="System.Web.DynamicData, Version=3.5.0.0, Culture=neutral, PublicKeyToken=31BF3856AD364E35"/> </controls></pages>     Hope that helps!

    Read the article

  • Speed-start your Linux App: Using DB2 and the DB2 Control Center

    This article guides you through setting up and using IBM DB2 7.2 with the Command Line Processor. You'll also learn to use the graphical Control Center, which helps you explore and control your databases, and the graphical Command Center, which helps you generate SQL queries. Other topics covered include Java runtime environment setup, useful Linux utility functions, and bash profile customization.

    Read the article

  • ANTS CLR and Memory Profiler In Depth Review (Part 1 of 2 – CLR Profiler)

    - by ToStringTheory
    One of the things that people might not know about me, is my obsession to make my code as efficient as possible.  Many people might not realize how much of a task or undertaking that this might be, but it is surely a task as monumental as climbing Mount Everest, except this time it is a challenge for the mind…  In trying to make code efficient, there are many different factors that play a part – size of project or solution, tiers, language used, experience and training of the programmer, technologies used, maintainability of the code – the list can go on for quite some time. I spend quite a bit of time when developing trying to determine what is the best way to implement a feature to accomplish the efficiency that I look to achieve.  One program that I have recently come to learn about – Red Gate ANTS Performance (CLR) and Memory profiler gives me tools to accomplish that job more efficiently as well.  In this review, I am going to cover some of the features of the ANTS profiler set by compiling some hideous example code to test against. Notice As a member of the Geeks With Blogs Influencers program, one of the perks is the ability to review products, in exchange for a free license to the program.  I have not let this affect my opinions of the product in any way, and Red Gate nor Geeks With Blogs has tried to influence my opinion regarding this product in any way. Introduction The ANTS Profiler pack provided by Red Gate was something that I had not heard of before receiving an email regarding an offer to review it for a license.  Since I look to make my code efficient, it was a no brainer for me to try it out!  One thing that I have to say took me by surprise is that upon downloading the program and installing it you fill out a form for your usual contact information.  Sure enough within 2 hours, I received an email from a sales representative at Red Gate asking if she could help me to achieve the most out of my trial time so it wouldn’t go to waste.  After replying to her and explaining that I was looking to review its feature set, she put me in contact with someone that setup a demo session to give me a quick rundown of its features via an online meeting.  After having dealt with a massive ordeal with one of my utility companies and their complete lack of customer service, Red Gates friendly and helpful representatives were a breath of fresh air, and something I was thankful for. ANTS CLR Profiler The ANTS CLR profiler is the thing I want to focus on the most in this post, so I am going to dive right in now. Install was simple and took no time at all.  It installed both the profiler for the CLR and Memory, but also visual studio extensions to facilitate the usage of the profilers (click any images for full size images): The Visual Studio menu options (under ANTS menu) Starting the CLR Performance Profiler from the start menu yields this window If you follow the instructions after launching the program from the start menu (Click File > New Profiling Session to start a new project), you are given a dialog with plenty of options for profiling: The New Session dialog.  Lots of options.  One thing I noticed is that the buttons in the lower right were half-covered by the panel of the application.  If I had to guess, I would imagine that this is caused by my DPI settings being set to 125%.  This is a problem I have seen in other applications as well that don’t scale well to different dpi scales. 
The profiler options give you the ability to profile: .NET Executable ASP.NET web application (hosted in IIS) ASP.NET web application (hosted in IIS express) ASP.NET web application (hosted in Cassini Web Development Server) SharePoint web application (hosted in IIS) Silverlight 4+ application Windows Service COM+ server XBAP (local XAML browser application) Attach to an already running .NET 4 process Choosing each option provides a varying set of other variables/options that one can set including options such as application arguments, operating path, record I/O performance performance counters to record (43 counters in all!), etc…  All in all, they give you the ability to profile many different .Net project types, and make it simple to do so.  In most cases of my using this application, I would be using the built in Visual Studio extensions, as they automatically start a new profiling project in ANTS with the options setup, and start your program, however RedGate has made it easy enough to profile outside of Visual Studio as well. On the flip side of this, as someone who lives most of their work life in Visual Studio, one thing I do wish is that instead of opening an entirely separate application/gui to perform profiling after launching, that instead they would provide a Visual Studio panel with the information, and integrate more of the profiling project information into Visual Studio.  So, now that we have an idea of what options that the profiler gives us, its time to test its abilities and features. Horrendous Example Code – Prime Number Generator One of my interests besides development, is Physics and Math – what I went to college for.  I have especially always been interested in prime numbers, as they are something of a mystery…  So, I decided that I would go ahead and to test the abilities of the profiler, I would write a small program, website, and library to generate prime numbers in the quantity that you ask for.  I am going to start off with some terrible code, and show how I would see the profiler being used as a development tool. First off, the IPrimes interface (all code is downloadable at the end of the post): interface IPrimes { IEnumerable<int> GetPrimes(int retrieve); } Simple enough, right?  Anything that implements the interface will (hopefully) provide an IEnumerable of int, with the quantity specified in the parameter argument.  Next, I am going to implement this interface in the most basic way: public class DumbPrimes : IPrimes { public IEnumerable<int> GetPrimes(int retrieve) { //store a list of primes already found var _foundPrimes = new List<int>() { 2, 3 }; //if i ask for 1 or two primes, return what asked for if (retrieve <= _foundPrimes.Count()) return _foundPrimes.Take(retrieve); //the next number to look at int _analyzing = 4; //since I already determined I don't have enough //execute at least once, and until quantity is sufficed do { //assume prime until otherwise determined bool isPrime = true; //start dividing at 2 //divide until number is reached, or determined not prime for (int i = 2; i < _analyzing && isPrime; i++) { //if (i) goes into _analyzing without a remainder, //_analyzing is NOT prime if (_analyzing % i == 0) isPrime = false; } //if it is prime, add to found list if (isPrime) _foundPrimes.Add(_analyzing); //increment number to analyze next _analyzing++; } while (_foundPrimes.Count() < retrieve); return _foundPrimes; } } This is the simplest way to get primes in my opinion.  
It simply checks each number against the straight definition of a prime – is it divisible by anything besides 1 and itself? I have included this code in a base class library in my solution, as I am going to use it to demonstrate a couple of features of ANTS. The class library is consumed by a simple non-MVVM WPF application and a simple MVC4 website. I will not post the WPF code inline here, as it is just an ObservableCollection<int>, a label, two textboxes, and a button.

Starting a new Profiling Session

So, in Visual Studio, I have just finished my first stint developing the GUI and the DumbPrimes IPrimes class, and now I want to check my code's efficiency by profiling it. All I have to do is build the solution (I was surprised that starting a profiling session doesn't do this, but I suppose I can understand it) and then click the ANTS menu, followed by Profile Performance. I am then greeted by the profiler starting up and already monitoring my program live:

You are given a real-time graph at the top and a pane at the bottom with information on how to proceed. I am going to start by asking my program to show me the first 15000 primes:

After the program finally began responding again (I did all the work on the main UI thread – how bad!), I stopped the profiler, which also killed my program's process. One important thing to note is that by default the profiler wants to give you a lot of detail about the run – line hit counts, time per line, percent time per line, and so on – and collecting that detail itself takes a lot of time. Without the profiler attached, my program generates the 15000 primes in 5.18 seconds; with it attached, the same run took 74.5 seconds – roughly a fourteen-fold slowdown. While that may seem like a lot, remember that there is a trade-off: the profiled run is far slower, but it lets me drill down, improve the specific problem areas, and then decrease execution time all around.

Analyzing the Profiling Session

After clicking 'Stop Profiling', the process running my application stopped, the entire execution time was automatically selected by ANTS, and the results were shown below. There are a number of interesting things going on here, so I am going to cover each in a section of its own.

Real Time Performance Counter Bar (top of screen)

At the top of the screen is the real-time performance bar. As your application runs, it constantly updates with the status of the currently selected performance counter. A couple of cool things to note: you can drag a selection around a specific time period to drill the detail views in the lower two panels down to information for only that period. After selecting a time period, you can bookmark it and give it a name, so that it is easy to find later or after the session is reloaded at a later time. You can also zoom in, zoom out, or fit the graph to the space provided – useful for drilling down.

It may be hard to see, but at the top of the processor time graph, below the time ticks and above the red usage graph, there is a green bar. This bar shows at what times a method selected in the 'Call tree' panel was called. It is very cool to be able to click on a method and see exactly when it made an impact.

As I said before, ANTS provides 43 different performance counters you can hook into.
Clicking the arrow next to the Performance tab at the top lets you switch between the different counters you have selected:

Method Call Tree, ADO.NET Database Calls, File IO – Detail Panel

Red Gate really hit the mark here, I think. When you select a section of the run in the graph, the call tree populates with a hierarchical tree of method calls, with information about each method. By default, methods are hidden where the source is not available (framework-type code); however, Red Gate has integrated Reflector into ANTS, so even if you don't have source for something, you can select the method and get the source anyway if you want. Methods are also hidden where their impact is insignificant – methods that account for only about 1% of their calling method's time – in other words, places where your optimization effort should not be focused. Smart!

Source Panel – Detail Panel

The source panel is where you see line-level information about your code, showing the source for the method currently selected in the Method Call Tree. If the code is not available, Reflector takes care of it and shows the code anyway!

As you may notice, there does seem to be a problem with how ANTS determines which line a call is actually completed on. I suspect this may be due to some of the inlining optimizations that the CLR applies when compiling the assembly. In a method with comments, the problem is much more severe:

As you can see here, apparently the most offending code in my base library was a comment – *gasp*! Removing the comments helps quite a bit, but I hope Red Gate improves the positioning logic in their counting algorithm soon:

I did a small test just to demonstrate that the lines are correct without comments. For me it isn't a deal breaker, as I can usually determine the correct placement by looking at the surrounding application code and deciding what makes sense, but it is something that would probably build up some irritation over time.

Feature – Suggest Method for Optimization

A neat feature, for those in need of a pointer, is the menu option under Tools to automatically suggest methods to optimize or improve. Clicking it filters the call tree and stars the methods it thinks are good candidates for optimization. I do wish they had made the stars more visible for those of us who aren't great on sight:

Process Integration

I do think this could have a place in my process. After experimenting with the profiler, I think it would be a great benefit to do some development and testing, and then, once all the bugs are worked out, use the profiler to make sure nothing is hogging more than its fair share. For example, with this program I would have developed it, run it, and tested it – it works, but slowly. After looking at the profiler and seeing the massive amount of time spent in one method, I might go ahead and re-implement IPrimes (I would actually rewrite the offending code, but so that I can distribute both sets of code easily, I'm just going to make another implementation of IPrimes).
Using two pieces of knowledge about prime numbers makes this method MUCH more efficient: every prime greater than 3 falls into one of the two buckets 6k ± 1, and a number is prime if it is not divisible by any of the primes before it:

    public class SmartPrimes : IPrimes
    {
        public IEnumerable<int> GetPrimes(int retrieve)
        {
            //store a list of primes already found
            var _foundPrimes = new List<int>() { 2, 3 };

            //if I ask for one or two primes, return what was asked for
            if (retrieve <= _foundPrimes.Count())
                return _foundPrimes.Take(retrieve);

            //the next value of k to look at
            int _k = 1;

            //since I already determined I don't have enough,
            //execute at least once, and until the quantity is satisfied
            do
            {
                //assume prime until otherwise determined
                bool isPrime = true;
                int potentialPrime;

                //analyze 6k-1
                potentialPrime = 6 * _k - 1;

                //if any of the found primes divides this, it is NOT prime
                //using PLINQ for a quick boost
                isPrime = !_foundPrimes.AsParallel()
                    .Any(prime => potentialPrime % prime == 0);

                //if it is prime, add it to the found list
                if (isPrime)
                    _foundPrimes.Add(potentialPrime);

                if (_foundPrimes.Count() == retrieve)
                    break;

                //analyze 6k+1
                potentialPrime = 6 * _k + 1;

                //if any of the found primes divides this, it is NOT prime
                //using PLINQ for a quick boost
                isPrime = !_foundPrimes.AsParallel()
                    .Any(prime => potentialPrime % prime == 0);

                //if it is prime, add it to the found list
                if (isPrime)
                    _foundPrimes.Add(potentialPrime);

                //increment k to analyze next
                _k++;
            } while (_foundPrimes.Count() < retrieve);

            return _foundPrimes;
        }
    }

There are definitely more things I can do to make this more efficient (one such tweak is sketched in the P.S. at the end of this post), but for the scope of this example I think this is fine (but still hideous)! Profiling it now yields a happy surprise: 27 seconds to generate the 15000 primes with the profiler attached, and only 1.43 seconds without.

One important thing I wanted to call out, though, is the performance graph now: notice anything odd? The %Processor time is above 100%. This is because more than one core is now involved. A better label for the chart, in my mind, would have been %Core time, but to each their own. Another odd thing I noticed was that this time the profiler seemed to be spot on with the line details in the source view for my DumbPrimes class, even with comments. Odd.

Profiling Web Applications

The last thing I wanted to cover, and one that means a lot to me as a web developer, is the amount of work Red Gate put into profiling web applications. In my solution I have a simple MVC4 application with one page and a single input form that outputs prime values just as my WPF app did. Launching the profiler from Visual Studio as before, nothing is really different in the profiler window, though I did receive a UAC prompt, without prior notification, for a Red Gate helper app that integrates with the web server.

After requesting 500, 1000, 2000, and 5000 primes and looking at the profiler session, things are slightly different from before. As you can see, there are four spikes of activity in the processor time graph, but there is also something new in the call tree: ANTS actually groups method calls by GET/POST operation, so it is easier to find out which action or page is causing the biggest problems. Pretty cool in my mind!
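The web code is not shown in this post either, so purely as an illustration, here is a rough sketch of what the controller behind that input form might look like – the controller name, action names, and routing are my assumptions, not the actual downloadable code:

    using System.Web.Mvc;

    public class PrimesController : Controller
    {
        // GET: renders the single page with the input form
        public ActionResult Index()
        {
            return View();
        }

        // POST: generates and returns the requested quantity of primes
        [HttpPost]
        public ActionResult Generate(int quantity)
        {
            // hypothetical wiring – the real solution may construct the generator differently
            IPrimes generator = new SmartPrimes();
            var primes = generator.GetPrimes(quantity);
            return View("Index", primes);
        }
    }

Each POST to an action like this shows up as its own group in the ANTS call tree, which is what makes the request grouping described above so handy.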
Overview

Overall, I think the Red Gate ANTS CLR Profiler has a lot to offer, but I also think it has a long way to go.

3 Biggest Pros:

- The ability to easily drill down from the time graph, to method calls, to source code
- A wide variety of counters to choose from when profiling your application
- Excellent integration/grouping of methods called from web applications by request – BRILLIANT!

3 Biggest Cons:

- The issue with line details in the source view
- Nit pick – Processor time vs. Core time
- Nit pick – lack of full integration with Visual Studio

Ratings

Ease of Use (7/10) – Marked down because of the problems with the line-level details and the extra work that entails, and the lack of better integration with Visual Studio.

Effectiveness (10/10) – I believe the profiler does EXACTLY what it purports to do. Its large variety of performance counters is a definite plus!

Features (9/10) – Besides the real-time performance monitoring and the drill-downs I've shown here, ANTS also has great integration with ADO.NET, with the ability to show the database queries your application runs right in the profiler. That, together with the line-level details, the web request grouping, the Reflector integration, and the various options for customizing your profiling session, makes for a great feature set!

Customer Service (10/10) – My entire experience with Red Gate personnel has been nothing but good. Their people are friendly, helpful, and happy!

UI / UX (8/10) – The interface is very easy to get around, and all of the options are easy to find. With a little bit of poking around, you'll be optimizing Hello World in no time flat!

Overall (8/10) – Overall, I am happy with the Performance Profiler and its features, as well as with the service I received from the Red Gate personnel. I WOULD recommend trying the application to see whether it fits into your process, BUT remember there are still some kinks to hopefully be worked out.

My next post will definitely be shorter (hopefully), but thank you for reading this far, or for skipping ahead! If you do try the product, please drop me a message and let me know what you think – I would love to hear any opinions you have on it.

Code

Feel free to download the code I used above – download via DropBox.
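P.S. For anyone curious about the "more things I can do" comment in the SmartPrimes section, here is one such tweak as a minimal sketch of my own – it is not part of the downloadable code. A composite number always has a prime factor no larger than its square root, so the divisibility check only needs to walk the known primes up to that point:

    using System;
    using System.Collections.Generic;
    using System.Linq;

    public class SqrtPrimes : IPrimes
    {
        public IEnumerable<int> GetPrimes(int retrieve)
        {
            var found = new List<int>() { 2, 3 };
            if (retrieve <= found.Count())
                return found.Take(retrieve);

            int k = 1;
            do
            {
                //same 6k-1 / 6k+1 walk as SmartPrimes
                TryAdd(6 * k - 1, found);
                if (found.Count() < retrieve)
                    TryAdd(6 * k + 1, found);
                k++;
            } while (found.Count() < retrieve);

            return found.Take(retrieve);
        }

        private static void TryAdd(int candidate, List<int> found)
        {
            //a composite must have a prime factor no larger than its square root,
            //so stop checking there instead of walking every known prime
            int limit = (int)Math.Sqrt(candidate);
            bool isPrime = !found.TakeWhile(p => p <= limit)
                                 .Any(p => candidate % p == 0);
            if (isPrime)
                found.Add(candidate);
        }
    }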
