Search Results

Search found 17744 results on 710 pages for 'target mode'.


  • How to implement dynamic binary search for search and insert operations on n elements (C or C++)

    - by iecut
    The idea is to use multiple arrays, each of length 2^k, to store n elements according to the binary representation of n. Each array is sorted, but different arrays are not ordered relative to one another. In this data structure, SEARCH is carried out by a sequence of binary searches, one on each array. INSERT is carried out by a sequence of merges of arrays of the same length until an empty slot is reached.

    More detail: suppose we have a vertical array of length 2^k, and to each node of that array a horizontal array is attached. That is, to the first node of the vertical array a horizontal array of length 2^0 = 1 is connected, to the second node a horizontal array of length 2^1 = 2 is connected, and so on. The first insert is carried out in the first horizontal array; on the second insert the first array becomes empty and the second horizontal array is filled with 2 elements; on the third insert the 1st and 2nd horizontal arrays are both filled, and so on.

    I implemented the normal binary search for search and insert as follows:

        int main() {
            int a[20] = {0};
            int n, i, j, temp;
            int *beg, *end, *mid, target;

            printf(" enter the total integers you want to enter (make it less than 20):\n");
            scanf("%d", &n);
            if (n >= 20) return 0;

            printf(" enter the integer array elements:\n");
            for (i = 0; i < n; i++) {
                scanf("%d", &a[i]);
            }

            /* sort the loaded array (bubble sort) so binary search works */
            for (i = 0; i < n - 1; i++) {
                for (j = 0; j < n - i - 1; j++) {
                    if (a[j + 1] < a[j]) {
                        temp = a[j];
                        a[j] = a[j + 1];
                        a[j + 1] = temp;
                    }
                }
            }

            printf(" the sorted numbers are:");
            for (i = 0; i < n; i++) {
                printf("%d ", a[i]);
            }

            /* point to the beginning and one element past the end of the loaded array */
            beg = &a[0];
            end = &a[n];
            /* mid should point somewhere in the middle of these addresses */
            mid = beg + n / 2;

            printf("\n enter the number to be searched:");
            scanf("%d", &target);

            /* binary search; note the AND in the middle of while() */
            while ((beg <= end) && (*mid != target)) {
                if (target < *mid) {       /* target is in the lower half */
                    end = mid - 1;         /* new end */
                    n = n / 2;
                    mid = beg + n / 2;     /* new middle */
                } else {                   /* target is in the upper half */
                    beg = mid + 1;         /* new beginning */
                    n = n / 2;
                    mid = beg + n / 2;     /* new middle */
                }
            }

            /* did we find the target? */
            if (*mid == target) {
                printf("\n %d found!", target);
            } else {
                printf("\n %d not found!", target);
            }

            getchar(); /* trap enter */
            getchar(); /* wait */
            return 0;
        }

    Could anyone please suggest how to modify this program, or write a new one, to implement the dynamic binary search that works as explained above?
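    For reference, a minimal sketch of the structure the question describes (essentially the Bentley-Saxe "binomial list": level k holds either nothing or a sorted array of exactly 2^k elements, mirroring the binary representation of n) might look like the following. All names here (dbs_t, MAX_LEVELS, and the helper functions) are illustrative, not from the original post:

        #include <stdlib.h>

        #define MAX_LEVELS 20            /* supports up to 2^20 - 1 elements */

        typedef struct {
            int *level[MAX_LEVELS];      /* level[k] is NULL or holds 2^k sorted ints */
        } dbs_t;

        /* Standard binary search within one sorted array of length len. */
        static int search_one(const int *a, int len, int target) {
            int lo = 0, hi = len - 1;
            while (lo <= hi) {
                int mid = lo + (hi - lo) / 2;
                if (a[mid] == target) return 1;
                if (a[mid] < target)  lo = mid + 1;
                else                  hi = mid - 1;
            }
            return 0;
        }

        /* SEARCH: binary-search every occupied level; O(log^2 n) overall. */
        int dbs_search(const dbs_t *s, int target) {
            for (int k = 0; k < MAX_LEVELS; k++)
                if (s->level[k] && search_one(s->level[k], 1 << k, target))
                    return 1;
            return 0;
        }

        /* Merge two sorted arrays of length len each into out (length 2*len). */
        static void merge(const int *a, const int *b, int len, int *out) {
            int i = 0, j = 0, k = 0;
            while (i < len && j < len) out[k++] = (a[i] <= b[j]) ? a[i++] : b[j++];
            while (i < len)            out[k++] = a[i++];
            while (j < len)            out[k++] = b[j++];
        }

        /* INSERT: carry a merged run upward until an empty level is found,
           exactly like adding 1 in binary; amortized O(log n) per insert. */
        void dbs_insert(dbs_t *s, int value) {
            int *carry = malloc(sizeof *carry);
            carry[0] = value;
            int len = 1;
            for (int k = 0; k < MAX_LEVELS; k++, len <<= 1) {
                if (s->level[k] == NULL) {       /* empty slot: stop here */
                    s->level[k] = carry;
                    return;
                }
                int *merged = malloc(2 * len * sizeof *merged);
                merge(s->level[k], carry, len, merged);
                free(s->level[k]);
                free(carry);
                s->level[k] = NULL;              /* like clearing a carried bit */
                carry = merged;
            }
            free(carry);  /* capacity exceeded; a real version would grow MAX_LEVELS */
        }

    A dbs_t is initialized by zeroing it (e.g. memset(&s, 0, sizeof s)). The trade-off is the one the question's scheme aims for: each search costs O(log^2 n) comparisons across at most log n arrays, and each insert does amortized O(log n) work via merges.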

    Read the article

  • jquery remove selected element and append to another

    - by KnockKnockWhosThere
    I'm trying to re-append a "removed option" to the appropriate select option menu. I have three select boxes: "Categories", "Variables", and "Target". "Categories" is a chained select, so when the user selects an option from it, the "Variables" select box is populated with options specific to the selected category. When the user chooses an option from the "Variables" select box, it's appended to the "Target" select box. I have a "remove selected" feature so that if a user "removes" a selected element from the "Target" select box, it's removed from "Target" and put back into the pool of "Variables" options. The problem I'm having is that it appends the option back to the "Variables" items indiscriminately. That is, if the selected category is "Age", the visible "Variables" options all have a class of "age"; but if the removed option is an "income" item, it will still display in the "Age" variables option list. Here's the HTML markup:

        <select multiple="" id="categories" name="categories[]">
            <option class="category" value="income">income</option>
            <option class="category" value="gender">gender</option>
            <option class="category" value="age">age</option>
        </select>
        <select multiple="multiple" id="variables" name="variables[]">
            <option class="income" value="10">$90,000 - $99,999</option>
            <option class="income" value="11">$100,000 - $124,999</option>
            <option class="income" value="12">$125,000 - $149,999</option>
            <option class="income" value="13">Greater than $149,999</option>
            <option class="gender" value="14">Male</option>
            <option class="gender" value="15">Female</option>
            <option class="gender" value="16">Ungendered</option>
            <option class="age" value="17">Ages 18-24</option>
            <option class="age" value="18">Ages 25-34</option>
            <option class="age" value="19">Ages 35-44</option>
        </select>
        <select height="60" multiple="multiple" id="target" name="target[]">
        </select>

    And here's the js:

        /* This determines which options are displayed in the "Variables" select box */
        var cat = $('#categories');
        var el = $('#variables');

        $('#categories option').click(function() {
            var cls = $(this).val();  // "class" is a reserved word, so use another name
            $('#variables option').each(function() {
                if ($(this).hasClass(cls)) {
                    $(this).show();
                } else {
                    $(this).hide();
                }
            });
        });

        /* This adds the option to the target select box if the user clicks "add" */
        $('#add').click(function() {
            return !$('#variables option:selected').appendTo('#target');
        });

        /* This is the remove function in its current form, but it doesn't append correctly */
        $('#remove').click(function() {
            $('#target option:selected').each(function() {
                var cls = $(this).attr('class');
                if ($('#variables option').hasClass(cls)) {
                    $(this).appendTo('#variables');
                    sortList('variables');
                }
            });
        });

    Read the article

  • What happens when you add/remove current site as trusted site?

    - by kasey
    What happens when you add or remove the current site, while logged on, as a trusted site? When users do this on our website and then try to click on a link or close the browser, they get the following JavaScript exception:

        Microsoft JScript runtime error: 'type' is null or not an object

    at the line "var etype = this.type = e.type.toLowerCase();" in the library code below.

        Sys.UI.DomEvent = function Sys$UI$DomEvent(eventObject) {
            /// <summary locid="M:J#Sys.UI.DomEvent.#ctor" />
            /// <param name="eventObject"></param>
            /// <field name="altKey" type="Boolean" locid="F:J#Sys.UI.DomEvent.altKey"></field>
            /// <field name="button" type="Sys.UI.MouseButton" locid="F:J#Sys.UI.DomEvent.button"></field>
            /// <field name="charCode" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.charCode"></field>
            /// <field name="clientX" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.clientX"></field>
            /// <field name="clientY" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.clientY"></field>
            /// <field name="ctrlKey" type="Boolean" locid="F:J#Sys.UI.DomEvent.ctrlKey"></field>
            /// <field name="keyCode" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.keyCode"></field>
            /// <field name="offsetX" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.offsetX"></field>
            /// <field name="offsetY" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.offsetY"></field>
            /// <field name="screenX" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.screenX"></field>
            /// <field name="screenY" type="Number" integer="true" locid="F:J#Sys.UI.DomEvent.screenY"></field>
            /// <field name="shiftKey" type="Boolean" locid="F:J#Sys.UI.DomEvent.shiftKey"></field>
            /// <field name="target" locid="F:J#Sys.UI.DomEvent.target"></field>
            /// <field name="type" type="String" locid="F:J#Sys.UI.DomEvent.type"></field>
            var e = Function._validateParams(arguments, [ {name: "eventObject"} ]);
            if (e) throw e;
            var e = eventObject;
            var etype = this.type = e.type.toLowerCase();
            this.rawEvent = e;
            this.altKey = e.altKey;
            if (typeof(e.button) !== 'undefined') {
                this.button = (typeof(e.which) !== 'undefined') ? e.button :
                    (e.button === 4) ? Sys.UI.MouseButton.middleButton :
                    (e.button === 2) ? Sys.UI.MouseButton.rightButton :
                    Sys.UI.MouseButton.leftButton;
            }
            if (etype === 'keypress') {
                this.charCode = e.charCode || e.keyCode;
            }
            else if (e.keyCode && (e.keyCode === 46)) {
                this.keyCode = 127;
            }
            else {
                this.keyCode = e.keyCode;
            }
            this.clientX = e.clientX;
            this.clientY = e.clientY;
            this.ctrlKey = e.ctrlKey;
            this.target = e.target ? e.target : e.srcElement;
            if (!etype.startsWith('key')) {
                if ((typeof(e.offsetX) !== 'undefined') && (typeof(e.offsetY) !== 'undefined')) {
                    this.offsetX = e.offsetX;
                    this.offsetY = e.offsetY;
                }
                else if (this.target && (this.target.nodeType !== 3) && (typeof(e.clientX) === 'number')) {
                    var loc = Sys.UI.DomElement.getLocation(this.target);
                    var w = Sys.UI.DomElement._getWindow(this.target);
                    this.offsetX = (w.pageXOffset || 0) + e.clientX - loc.x;
                    this.offsetY = (w.pageYOffset || 0) + e.clientY - loc.y;
                }
            }
            this.screenX = e.screenX;
            this.screenY = e.screenY;
            this.shiftKey = e.shiftKey;
        }

    Note: the site does not require trusted privileges to function correctly.

    Read the article

  • Choosing a VS project type (C++)

    - by typoknig
    Hi all, I do not use C++ much (I try to stick to the easier stuff like Java and VB.NET), but the lately I have not had a choice. When I am picking a project type in VS for some C++ source I download, what project type should I pick? I had just been sticking with Win32 Console Applications, but I just downloaded some code (below) that will not work right even when it compiles with out errors. I have tried to use a CLR Console Application and an empty project too, and have changed many variables along the way, but I cannot get this code to work. I noticed that this code does not have "int main()" at its beginning, does that have something to do with it? Anyways, here is the code, got it from here: /* Demo of modified Lucas-Kanade optical flow algorithm. See the printf below */ #ifdef _CH_ #pragma package <opencv> #endif #ifndef _EiC #include "cv.h" #include "highgui.h" #include <stdio.h> #include <ctype.h> #endif #include <windows.h> #define FULL_IMAGE_AS_OUTPUT_FILE #define cvMirror cvFlip //IplImage *image = 0, *grey = 0, *prev_grey = 0, *pyramid = 0, *prev_pyramid = 0, *swap_temp; IplImage **buf = 0; IplImage *image1 = 0; IplImage *imageCopy=0; IplImage *image = 0; int win_size = 10; const int MAX_COUNT = 500; CvPoint2D32f* points[2] = {0,0}, *swap_points; char* status = 0; //int count = 0; //int need_to_init = 0; //int night_mode = 0; int flags = 0; //int add_remove_pt = 0; bool bLButtonDown = false; //bool bstopLoop = false; CvPoint pt, pt1,pt2; //IplImage* img1; FILE* FileDest; char* strImageDir = "E:\\Projects\\TSCreator\\Images"; char* strItemName = "b"; int imageCount=0; int bFirstFace = 1; // flag for first face int mode = 1; // Mode 1 - Haar Traing Sample Creation, 2 - HMM sample creation, Mode = 3 - Both Harr and HMM. //int startImgeNo = 1; bool isEqualRation = false; //Weidth to height ratio is equal //Selected Image data IplImage *selectedImage = 0; int selectedX = 0, selectedY = 0, currentImageNo = 0, selectedWidth = 0, selectedHeight= 0; CvRect selectedROI; void saveFroHarrTraining(IplImage *src, int x, int y, int width, int height, int imageCount); void saveForHMMTraining(IplImage *src, CvRect roi,int imageCount); // Code for draw ROI Cropping Image void on_mouse( int event, int x, int y, int flags, void* param ) { char f[200]; CvRect reg; if( !image ) return; if( event == CV_EVENT_LBUTTONDOWN ) { bLButtonDown = true; pt1.x = x; pt1.y = y; } else if ( event == CV_EVENT_MOUSEMOVE ) //Draw the selected area rectangle { pt2.x = x; pt2.y = y; if(bLButtonDown) { if( !image1 ) { /* allocate all the buffers */ image1 = cvCreateImage( cvGetSize(image), 8, 3 ); image1->origin = image->origin; points[0] = (CvPoint2D32f*)cvAlloc(MAX_COUNT*sizeof(points[0][0])); points[1] = (CvPoint2D32f*)cvAlloc(MAX_COUNT*sizeof(points[0][0])); status = (char*)cvAlloc(MAX_COUNT); flags = 0; } cvCopy( image, image1, 0 ); //Equal Weight-Height Ratio if(isEqualRation) { pt2.y = pt1.y + (pt2.x-pt1.x); } //Max Height and Width is the image width and height if(pt2.x>image->width) { pt2.x = image->width; } if(pt2.y>image->height) { pt2.y = image->height; } CvPoint InnerPt1 = pt1; CvPoint InnerPt2 = pt2; if ( InnerPt1.x > InnerPt2.x) { int tempX = InnerPt1.x; InnerPt1.x = InnerPt2.x; InnerPt2.x = tempX; } if ( pt2.y < InnerPt1.y ) { int tempY = InnerPt1.y; InnerPt1.y = InnerPt2.y; InnerPt2.y = tempY; } InnerPt1.y = image->height - InnerPt1.y; InnerPt2.y = image->height - InnerPt2.y; CvFont font; double hScale=1.0; double vScale=1.0; int lineWidth=1; cvInitFont(&font,CV_FONT_HERSHEY_SIMPLEX|CV_FONT_ITALIC, 
hScale,vScale,0,lineWidth); char size [200]; reg.x = pt1.x; reg.y = image->height - pt2.y; reg.height = abs (pt2.y - pt1.y); reg.width = InnerPt2.x -InnerPt1.x; //print width and heght of the selected reagion sprintf(size, "(%dx%d)",reg.width, reg.height); cvPutText (image1,size,cvPoint(10,10), &font, cvScalar(255,255,0)); cvRectangle(image1, InnerPt1, InnerPt2, CV_RGB(255,0,0), 1); //Mark Selected Reagion selectedImage = image; selectedX = pt1.x; selectedY = pt1.y; selectedWidth = reg.width; selectedHeight = reg.height; selectedROI = reg; //Show the modified image cvShowImage("HMM-Harr Positive Image Creator",image1); } } else if ( event == CV_EVENT_LBUTTONUP ) { bLButtonDown = false; // pt2.x = x; // pt2.y = y; // // if ( pt1.x > pt2.x) // { // int tempX = pt1.x; // pt1.x = pt2.x; // pt2.x = tempX; // } // // if ( pt2.y < pt1.y ) // { // int tempY = pt1.y; // pt1.y = pt2.y; // pt2.y = tempY; // // } // //reg.x = pt1.x; //reg.y = image->height - pt2.y; // //reg.height = abs (pt2.y - pt1.y); ////reg.width = reg.height/3; //reg.width = pt2.x -pt1.x; ////reg.height = (2 * reg.width)/3; #ifdef FULL_IMAGE_AS_OUTPUT_FILE CvRect FullImageRect; FullImageRect.x = 0; FullImageRect.y = 0; FullImageRect.width = image->width; FullImageRect.height = image->height; IplImage *regionFullImage =0; regionFullImage = cvCreateImage(cvSize (FullImageRect.width, FullImageRect.height), image->depth, image->nChannels); image->roi = NULL; //cvSetImageROI (image, FullImageRect); //cvCopy (image, regionFullImage, 0); #else IplImage *region =0; region = cvCreateImage(cvSize (reg.width, reg.height), image1->depth, image1->nChannels); image->roi = NULL; cvSetImageROI (image1, reg); cvCopy (image1, region, 0); #endif //cvNamedWindow("Result", CV_WINDOW_AUTOSIZE); //selectedImage = image; //selectedX = pt1.x; //selectedY = pt1.y; //selectedWidth = reg.width; //selectedHeight = reg.height; ////currentImageNo = startImgeNo; //selectedROI = reg; /*if(mode == 1) { saveFroHarrTraining(image,pt1.x,pt1.y,reg.width,reg.height,startImgeNo); } else if(mode == 2) { saveForHMMTraining(image,reg,startImgeNo); } else if(mode ==3) { saveFroHarrTraining(image,pt1.x,pt1.y,reg.width,reg.height,startImgeNo); saveForHMMTraining(image,reg,startImgeNo); } else { printf("Invalid mode."); } startImgeNo++;*/ } } /* Save popsitive samples for Harr Training. Also add an entry to the PositiveSample.txt with the location of the item of interest. */ void saveFroHarrTraining(IplImage *src, int x, int y, int width, int height, int imageCount) { char f[255] ; sprintf(f,"%s\\%s\\harr_%s%d%d.jpg",strImageDir,strItemName,strItemName,imageCount/10, imageCount%10); cvNamedWindow("Harr", CV_WINDOW_AUTOSIZE); cvShowImage("Harr", src); cvSaveImage(f, src); printf("output%d%d \t ", imageCount/10, imageCount%10); printf("width %d \t", width); printf("height %d \t", height); printf("x1 %d \t", x); printf("y1 %d \t\n", y); char f1[255]; sprintf(f1,"%s\\PositiveSample.txt",strImageDir); FileDest = fopen(f1, "a"); fprintf(FileDest, "%s\\harr_%s%d.jpg 1 %d %d %d %d \n",strItemName,strItemName, imageCount, x, y, width, height); fclose(FileDest); } /* Create Sample Images for HMM recognition algorythm trai ning. 
*/ void saveForHMMTraining(IplImage *src, CvRect roi,int imageCount) { char f[255] ; printf("x=%d, y=%d, w= %d, h= %d\n",roi.x,roi.y,roi.width,roi.height); //Create the file name sprintf(f,"%s\\%s\\hmm_%s%d.pgm",strImageDir,strItemName,strItemName, imageCount); //Create storage for grayscale image IplImage* gray = cvCreateImage(cvSize(roi.width,roi.height), 8, 1); //Create storage for croped reagon IplImage* regionFullImage = cvCreateImage(cvSize(roi.width,roi.height),8,3); //Croped marked region cvSetImageROI(src,roi); cvCopy(src,regionFullImage); cvResetImageROI(src); //Flip croped image - otherwise it will saved upside down cvConvertImage(regionFullImage, regionFullImage, CV_CVTIMG_FLIP); //Convert croped image to gray scale cvCvtColor(regionFullImage,gray, CV_BGR2GRAY); //Show final grayscale image cvNamedWindow("HMM", CV_WINDOW_AUTOSIZE); cvShowImage("HMM", gray); //Save final grayscale image cvSaveImage(f, gray); } int maina( int argc, char** argv ) { CvCapture* capture = 0; //if( argc == 1 || (argc == 2 && strlen(argv[1]) == 1 && isdigit(argv[1][0]))) // capture = cvCaptureFromCAM( argc == 2 ? argv[1][0] - '0' : 0 ); //else if( argc == 2 ) // capture = cvCaptureFromAVI( argv[1] ); char* video; if(argc ==7) { mode = atoi(argv[1]); strImageDir = argv[2]; strItemName = argv[3]; video = argv[4]; currentImageNo = atoi(argv[5]); int a = atoi(argv[6]); if(a==1) { isEqualRation = true; } else { isEqualRation = false; } } else { printf("\nUsage: TSCreator.exe <Mode> <Sample Image Save Path> <Sample Image Save Directory> <Video File Location> <Start Image No> <Is Equal Ratio>\n"); printf("Mode = 1 - Haar Traing Sample Creation. \nMode = 2 - HMM sample creation.\nMode = 3 - Both Harr and HMM\n"); printf("Is Equal Ratio = 0 or 1. 1 - Equal weidth and height, 0 - custom."); printf("Note: You have to create the image save directory in correct path first.\n"); printf("Eg: TSCreator.exe 1 E:\Projects\TSCreator\Images A 11.avi 1 1\n\n"); return 0; } capture = cvCaptureFromAVI(video); if( !capture ) { fprintf(stderr,"Could not initialize capturing...\n"); return -1; } cvNamedWindow("HMM-Harr Positive Image Creator", CV_WINDOW_AUTOSIZE); cvSetMouseCallback("HMM-Harr Positive Image Creator", on_mouse, 0); //cvShowImage("Test", image1); for(;;) { IplImage* frame = 0; int i, k, c; frame = cvQueryFrame( capture ); if( !frame ) break; if( !image ) { /* allocate all the buffers */ image = cvCreateImage( cvGetSize(frame), 8, 3 ); image->origin = frame->origin; //grey = cvCreateImage( cvGetSize(frame), 8, 1 ); //prev_grey = cvCreateImage( cvGetSize(frame), 8, 1 ); //pyramid = cvCreateImage( cvGetSize(frame), 8, 1 ); // prev_pyramid = cvCreateImage( cvGetSize(frame), 8, 1 ); points[0] = (CvPoint2D32f*)cvAlloc(MAX_COUNT*sizeof(points[0][0])); points[1] = (CvPoint2D32f*)cvAlloc(MAX_COUNT*sizeof(points[0][0])); status = (char*)cvAlloc(MAX_COUNT); flags = 0; } cvCopy( frame, image, 0 ); // cvCvtColor( image, grey, CV_BGR2GRAY ); cvShowImage("HMM-Harr Positive Image Creator", image); cvSetMouseCallback("HMM-Harr Positive Image Creator", on_mouse, 0); c = cvWaitKey(0); if((char)c == 's') { //Save selected reagion as training data if(selectedImage) { printf("Selected Reagion Saved\n"); if(mode == 1) { saveFroHarrTraining(selectedImage,selectedX,selectedY,selectedWidth,selectedHeight,currentImageNo); } else if(mode == 2) { saveForHMMTraining(selectedImage,selectedROI,currentImageNo); } else if(mode ==3) { saveFroHarrTraining(selectedImage,selectedX,selectedY,selectedWidth,selectedHeight,currentImageNo); 
saveForHMMTraining(selectedImage,selectedROI,currentImageNo); } else { printf("Invalid mode."); } currentImageNo++; } } } cvReleaseCapture( &capture ); //cvDestroyWindow("HMM-Harr Positive Image Creator"); cvDestroyAllWindows(); return 0; } #ifdef _EiC main(1,"lkdemo.c"); #endif If I put... #include "stdafx.h" int _tmain(int argc, _TCHAR* argv[]) { return 0; } ... before the previous code (and link it to the correct OpenCV .lib files) it compiles without errors, but does nothing at the command line. How do I make it work?
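    A likely cause: the demo's entry point is named maina, not main, so the wizard-generated _tmain above returns immediately without ever running the demo. A minimal sketch of a fix is to forward the generated entry point to it (this assumes a Win32 console project built with the multi-byte character set, so _TCHAR* and char* line up):

        #include "stdafx.h"

        /* Declared in the demo source above; it expects the usual argc/argv. */
        int maina(int argc, char **argv);

        int _tmain(int argc, _TCHAR *argv[])
        {
            /* The cast is only safe in a non-Unicode (MBCS) build; in a Unicode
               build, convert the arguments or rename maina() to main() instead. */
            return maina(argc, (char **)argv);
        }

    Alternatively, simply rename maina to main and drop the wizard stub. Either way, the program still needs the six command-line arguments its usage message describes, plus the OpenCV include and .lib paths configured for the project.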

    Read the article

  • 17 new features in Visual Studio 2010

    - by vik20000in
    Visual Studio 2010 was released to RTM a few days back. This release comes with a large number of improvements on many fronts. In this post I will try to point out some of the major improvements in Visual Studio 2010.

    1) Visual Studio IDE improvements - The Visual Studio IDE has been rewritten in WPF, and the look and feel of the studio has been improved for better readability. The Start Page has been redesigned and is templated so that anyone can change it as they wish.

    2) Multiple monitors - Support for multiple monitors was already there in Visual Studio, but in this edition it has been improved so much that we can now place the document, design and code windows outside the IDE on another monitor.

    3) Zoom in the code editor - Making the editors in WPF has brought significant improvements. The one I like best is the zoom feature: we can now zoom in the code editor with Ctrl + mouse scroll. The zoom feature does not work on the design surface or on windows with icons, such as the Solution Explorer and the toolbox.

    4) Box selection - Another important improvement in Visual Studio 2010 is box selection. We can select a rectangular region by holding down the Alt key while selecting with the mouse. Within the rectangular selection we can insert text, paste the same code on several lines, and so on. This is helpful if you want to convert a number of variables from public to private, for example.

    5) New improved search - One of the best productivity improvements in Visual Studio 2010 is its new search-as-you-type support. This lives in the Navigate To window, which can be brought up by pressing (Ctrl + ,). The Navigate To window also understands camel casing and can search by it when characters are entered in upper case; for example, we can search for AOH to find AddOrderHeader.

    6) Call Hierarchy - This feature is only available in the Visual C# and Visual C++ editors. The Call Hierarchy window displays the calls made to and from (yes, both to and from) a selected method, property or constructor. It also shows implementations of an interface and overrides of virtual or abstract methods. This window is very helpful for understanding code flow and evaluating the effect of making changes. The best part is that it is available at design time, not only at runtime like a debugger.

    7) Highlighting references - One very cool feature of Visual Studio 2010 is that if you select a variable, all uses of that variable are highlighted alongside. This works for everything returned by Find All References: names of classes, object variables, properties and methods. We can also use Ctrl + Shift + Down Arrow or Up Arrow to move through them.

    8) Generate from usage - The Generate From Usage feature lets you use classes and members before you define them. You can generate a stub for any undefined class, constructor, method, property, field, or enum that you want to use but have not yet defined. You can generate new types and members without leaving your current location in code, which minimizes interruption to your workflow.

    9) IntelliSense suggestion mode - IntelliSense now provides two alternatives for statement completion: completion mode and suggestion mode. Use suggestion mode for situations where classes and members are used before they are defined. In suggestion mode, when you type in the editor and then commit the entry, the text you typed is inserted into the code; when you commit an entry in completion mode, the editor inserts the entry that is highlighted in the members list. When an IntelliSense window is open, you can press Ctrl + Alt + Spacebar to toggle between completion mode and suggestion mode.

    10) Application Lifecycle Management - A client application for managing the application lifecycle (version control, work item tracking, build automation, team portal, etc.) is available for free (though not for the Express editions).

    11) Start Page - The Start Page has been redesigned in WPF for new functionality and looks. Tabbed areas are provided for content from different sources, including MSDN. Once you open a project the Start Page closes automatically, the list of recent projects lets you remove projects from the list, and above all the Start Page is customizable enough to be changed to individual requirements.

    12) Extension Manager - Visual Studio 2010 provides good ways to be extended; we can even use MEF to extend most of its features. The new Extension Manager can go to the Visual Studio Gallery and install extensions without even opening a browser.

    13) Code snippets - Visual Studio 2010 now provides code snippets for HTML, JScript and ASP.NET as well.

    14) Improved IntelliSense for JavaScript - JavaScript IntelliSense has been improved vastly (around 2-5 times), and IntelliSense now also shows XML documentation comments as you type.

    15) Web deployment - Web deployment has been vastly improved; we can package and publish a web application in one click. The three major supported deployment scenarios are web packages, one-click deployment, and web configuration transformations.

    16) SharePoint - Visual Studio 2010 also brings a vastly improved development experience for SharePoint. We can create, edit, debug, package, deploy and activate SharePoint projects from within Visual Studio, and deploying a site is as easy as hitting F5.

    17) Azure - Visual Studio 2010 also comes with handy improvements for developing for the Windows Azure environment.

    Vikram

    Read the article

  • Amazon Web Services (AWS) Plug-in for Oracle Enterprise Manager

    - by Anand Akela
    Contributed by Sunil Kunisetty and Daniel Chan

    Introduction and Architecture

    As more and more enterprises deploy some of their non-critical workload on Amazon Web Services (AWS), it's becoming critical to monitor those public AWS resources alongside their on-premise resources. The recently announced Oracle Enterprise Manager Plug-in for Amazon Web Services (AWS) allows you to achieve that goal. The on-premise Oracle Enterprise Manager (EM12c) acts as a single tool to get a comprehensive view of your public AWS resources as well as your private cloud resources. By deploying the plug-in within your Cloud Control environment, you gain the following management features:

    - Monitor EBS, EC2 and RDS instances on Amazon Web Services
    - Gather performance metrics and configuration details for AWS instances
    - Raise alerts and violations based on thresholds set on monitoring
    - Generate reports based on the gathered data

    Users of this plug-in can leverage rich Enterprise Manager features such as system promotion, incident generation based on thresholds, integration with 3rd-party ticketing applications, etc. AWS monitoring via this plug-in is enabled via the Amazon CloudWatch API, and users of this plug-in are responsible for supplying credentials for accessing AWS and the CloudWatch API. This plug-in can only be deployed on an EM12c R2 platform, and the agent version should be at minimum 12c R2.

    [Architecture diagram: EM12c Cloud Control monitoring Amazon Elastic Block Store (EBS), Amazon Elastic Compute Cloud (EC2) and Amazon Relational Database Service (RDS)]

    Here are a few key features:

    - Rich and exhaustive list of metrics
    - Metrics can be gathered from an Agent running outside AWS
    - Critical configuration information
    - Custom home pages with charts and AWS configuration information
    - Incidents generated based on thresholds set on monitoring data

    Discovery and Monitoring

    AWS instances can be added to EM12c either via the EM12c User Interface (UI) or the EM12c Command Line Interface (EMCLI) by providing the AWS credentials (Secret Key and Access Key Id) as well as resource-specific properties as target properties. Here is a quick mapping of target types and properties for each AWS resource:

    AWS Resource Type | Target Type        | Resource-specific properties
    EBS Resource      | Amazon EBS Service | CloudWatch base URI, EC2 base URI, Period, Volume Id, Proxy Server and Port
    EC2 Resource      | Amazon EC2 Service | CloudWatch base URI, EC2 base URI, Period, Instance Id, Proxy Server and Port
    RDS Resource      | Amazon RDS Service | CloudWatch base URI, RDS base URI, Period, Instance Id, Proxy Server and Port

    Proxy server and port are optional and are only needed if the agent is behind a firewall. Here is an emcli example to add an EC2 target. Please read the Installation and Readme guide for more details and step-by-step instructions to deploy the plug-in and add the AWS instances.
    ./emcli add_target \
        -name="<target name>" \
        -type="AmazonEC2Service" \
        -host="<host>" \
        -properties="ProxyHost=<proxy server>;ProxyPort=<proxy port>;EC2_BaseURI=http://ec2.<region>.amazonaws.com;BaseURI=http://monitoring.<region>.amazonaws.com;InstanceId=<EC2 instance Id>;Period=<data point period>" \
        -subseparator=properties="="

    ./emcli set_monitoring_credential \
        -set_name="AWSKeyCredentialSet" \
        -target_name="<target name>" \
        -target_type="AmazonEC2Service" \
        -cred_type="AWSKeyCredential" \
        -attributes="AccessKeyId:<access key id>;SecretKey:<secret key>"

    The emcli utility is found under the ORACLE_HOME of the EM12c install. Once the instance is discovered, the target will show up in the 'All Targets' list under 'Amazon EC2 Service'.

    Once the instances are added, one can navigate to the custom home pages for these resource types. The custom home pages not only include critical metrics, but also vital configuration parameters and incidents raised for these instances. By mapping the configuration parameters as instance properties, we can slice-and-dice and group various AWS instances by leveraging the EM12c Config Search feature. The following configuration properties and metrics are collected for these resource types:

    Resource Type | Configuration Properties | Metrics
    EBS Resource  | Volume Id, Volume Type, Device Name, Size, Availability Zone | Response: Status; Utilization: QueueLength, IdleTime; Volume Statistics: ReadBandwidth, WriteBandwidth, ReadThroughput, WriteThroughput; Operation Statistics: ReadSize, WriteSize, ReadLatency, WriteLatency
    EC2 Resource  | Instance ID, Owner Id, Root Device Type, Instance Type, Availability Zone | Response: Status; CPU Utilization: CPUUtilization; Disk I/O: DiskReadBytes, DiskWriteBytes, DiskReadOps, DiskWriteOps, DiskReadRate, DiskWriteRate, DiskIOThroughput, DiskReadOpsRate, DiskWriteOpsRate, DiskOperationThroughput; Network I/O: NetworkIn, NetworkOut, NetworkInRate, NetworkOutRate, NetworkThroughput
    RDS Resource  | Instance ID, Database Engine Name, Database Engine Version, Database Instance Class, Allocated Storage Size, Availability Zone | Response: Status; Disk I/O: ReadIOPS, WriteIOPS, ReadLatency, WriteLatency, ReadThroughput, WriteThroughput; DB Utilization: BinLogDiskUsage, CPUUtilization, DatabaseConnections, FreeableMemory, ReplicaLag, SwapUsage

    Custom Home Pages

    As mentioned above, we have custom home pages for these target types that include basic configuration information, last-24-hours availability, top metrics and the incidents generated. [Screenshots of the EBS, EC2 and RDS instance home pages omitted]

    Further Reading:
    1) AWS Plugin download
    2) Installation and Read Me
    3) Screenwatch on SlideShare
    4) Extensibility Programmer's Guide
    5) Amazon Web Services

    Read the article

  • Optimizing AES modes on Solaris for Intel Westmere

    - by danx
    Optimizing AES modes on Solaris for Intel Westmere

    Review

    AES is a strong method of symmetric (secret-key) encryption. It is a U.S. FIPS-approved cryptographic algorithm (FIPS 197) that operates on 16-byte blocks. AES has been available since 2001 and is widely used. However, AES by itself has a weakness: identical blocks of plaintext are always encrypted into identical blocks of ciphertext, so AES encryption isn't usually used by itself. This can be attacked with "dictionaries" of common blocks of text, allowing one to more easily discern the content of the unknown cryptotext. This mode of encryption is called "Electronic Code Book" (ECB), because in theory one could keep a "code book" of all known cryptotext and plaintext results to cipher and decipher AES. In practice a complete code book is not practical, even in electronic form, but large dictionaries of common plaintext blocks are still possible. Here's a diagram of encrypting input data using AES ECB mode:

                Block 1                              Block 2
             PlainTextInput                       PlainTextInput
                   |                                    |
                   \/                                   \/
          AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
                   |                                    |
                   \/                                   \/
            CipherTextOutput                     CipherTextOutput
                Block 1                              Block 2

    What's the solution to the same cleartext input producing the same ciphertext output? The solution is to further process the encrypted or decrypted text in such a way that the same text produces different output. This usually involves an Initialization Vector (IV) and an XOR of the decrypted or encrypted text. As an example, I'll illustrate CBC mode encryption:

                Block 1                              Block 2
             PlainTextInput                       PlainTextInput
                   |                                    |
                   \/                                   \/
     IV >------->(XOR)         +---------------------->(XOR)         +---> . . . .
                   |           |                        |            |
                   \/          |                        \/           |
          AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
                   |           |                        |            |
                   \/          |                        \/           |
            CipherTextOutput --+                 CipherTextOutput ---+
                Block 1                              Block 2

    The steps for CBC encryption are:

    - Start with a 16-byte Initialization Vector (IV), chosen randomly.
    - XOR the IV with the first block of input plaintext.
    - Encrypt the result with AES using a user-provided key. The result is the first 16 bytes of output cryptotext.
    - For each following block, use the cryptotext of the previous block (instead of the IV) to XOR with the next input block of plaintext.

    Another mode besides CBC is Counter Mode (CTR). As with CBC mode, it also starts with a 16-byte IV. However, for each subsequent block the IV (counter) is just incremented by one; each counter value is encrypted with AES, and the result is XORed with the plaintext block to produce the ciphertext. Here's an illustration:

                Block 1                              Block 2
              IV                                  IV + 1                   IV + 2 ---> . . . .
               |                                    |
               \/                                   \/
      AESKey-->(AES Encryption)            AESKey-->(AES Encryption)
               |                                    |
    PlainTextInput -->(XOR)              PlainTextInput -->(XOR)
                        |                                    |
                        \/                                   \/
                 CipherTextOutput                     CipherTextOutput
                     Block 1                              Block 2

    Optimization

    Which of these modes can be parallelized?

    - ECB encryption and decryption can be parallelized because each block is processed independently of all the others.
    - CBC encryption can't be parallelized because it depends on the output of the previous block. However, CBC decryption can be parallelized because all the encrypted blocks are known at the beginning.
    - CTR encryption and decryption can be parallelized because the input to each block is known: it's just the IV incremented by one for each subsequent block.

    So, in summary, for ECB, CBC, and CTR modes, encryption and decryption can be parallelized with the exception of CBC encryption. A short sketch of why CTR parallelizes follows.
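    To make the parallelism argument concrete, here is a small illustrative C sketch of CTR mode (not the Solaris implementation; aes_encrypt_block() below is a deliberately fake stand-in for a real AES primitive such as the AES-NI instruction sequence). Because each block's keystream depends only on the IV plus the block index, the iterations in the inner 4-way group are independent of one another:

        #include <stdint.h>
        #include <stddef.h>
        #include <string.h>

        /* Placeholder "cipher" so the sketch is self-contained; a real
           implementation would use AES-NI or a crypto library here. */
        static void aes_encrypt_block(const uint8_t ks[16],
                                      const uint8_t in[16], uint8_t out[16])
        {
            for (int i = 0; i < 16; i++)
                out[i] = in[i] ^ ks[i];        /* NOT real AES: illustration only */
        }

        /* Build counter block number n: the 16-byte IV plus n, big-endian. */
        static void ctr_make(uint8_t ctr[16], const uint8_t iv[16], uint64_t n)
        {
            memcpy(ctr, iv, 16);
            for (int i = 15; i >= 0 && n; i--) {
                n += ctr[i];
                ctr[i] = (uint8_t)n;
                n >>= 8;
            }
        }

        /* CTR-encrypt nblocks 16-byte blocks. The inner 4-way group mirrors
           the 4-block interleaving described below: every block is
           independent, so no iteration waits on the result of another. */
        void ctr_crypt(const uint8_t ks[16], const uint8_t iv[16],
                       const uint8_t *in, uint8_t *out, size_t nblocks)
        {
            for (size_t b = 0; b < nblocks; b += 4) {
                for (size_t j = 0; j < 4 && b + j < nblocks; j++) {
                    uint8_t ctr[16], keystream[16];
                    ctr_make(ctr, iv, b + j);               /* IV + block index  */
                    aes_encrypt_block(ks, ctr, keystream);  /* independent work  */
                    for (int i = 0; i < 16; i++)            /* XOR the keystream */
                        out[(b + j) * 16 + i] = in[(b + j) * 16 + i] ^ keystream[i];
                }
            }
        }

    CTR decryption is the identical operation. Contrast this with CBC encryption, where block i cannot start until ciphertext block i-1 exists.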
    How do we parallelize encryption? By interleaving. Usually when reading and writing data there are pipeline "stalls" (idle processor cycles) that result from waiting for memory to be loaded or stored to or from CPU registers. Since the software is written to encrypt/decrypt the next data block where pipeline stalls usually occur, we can avoid stalls and crypt with fewer cycles. This software processes 4 blocks at a time, which ensures virtually no waiting ("stalling") for reading or writing data in memory.

    Other Optimizations

    Besides interleaving, the other optimizations performed are:

    - Loading the entire key schedule into the 128-bit %xmm registers. This is done once per 4-block group of data (when 4 blocks are present). The entire "key schedule" (the user input key preprocessed for encryption and decryption) is loaded; this takes 11, 13, or 15 registers for AES-128, AES-192, and AES-256, respectively. The input data is loaded into another %xmm register, and the same register contains the output result after encrypting/decrypting.

    - Using AES-NI and related SSE instructions. Besides the aesenc, aesenclast, aesdec, aesdeclast, aeskeygenassist, and aesimc AESNI instructions, Intel has several other instructions that operate on the 128-bit %xmm registers. Some common instructions for encryption are: pxor (exclusive or, very useful), movdqu (load/store a %xmm register from/to memory), pshufb (shuffle bytes, for byte swapping), and pclmulqdq (carry-less multiply, for GCM mode).

    - Combining AES encryption/decryption with CBC or CTR mode processing. Instead of loading the input data twice (once for AES encryption/decryption, and again for mode processing), the input data is loaded once, and both the AES and mode operations occur in the same function.

    Performance

    Everyone likes pretty color charts, so here they are. I ran these on Solaris 11 running on a Piketon Platform system with a 4-core Intel Clarkdale processor @ 3.20GHz; Clarkdale is part of the Westmere processor architecture family. The "before" case is Solaris 11, unmodified. Keep in mind that the "before" case had already been optimized with hand-coded Intel AESNI assembly. The "after" case has combined AES-NI and mode instructions, interleaved 4 blocks at a time.

    For the first table, lower is better (milliseconds). The first table shows the performance improvement using the Solaris encrypt(1) and decrypt(1) CLI commands; I encrypted and decrypted a 1/2 GByte file on /tmp (swap tmpfs). Encryption improved by about 40% and decryption improved by about 80%. AES-128 is slightly faster than AES-256, as expected.

    The second table shows more detailed timings for CBC, CTR, and ECB modes for the 3 AES key sizes and different data lengths. The results shown are the percentage improvement as reported by an internal PKCS#11 microbenchmark (and keep in mind the previous baseline code already had optimized AESNI assembly!). The key size (AES-128, 192, or 256) makes little difference in the relative percentage improvement (although, of course, AES-128 is faster than AES-256). Larger data sizes show better improvement than 128-byte data.

    Availability

    This software is in Solaris 11 FCS. It is available in the 64-bit libcrypto library and the "aes" Solaris kernel module. You must be running hardware that supports AESNI (for example, the Intel Westmere and Sandy Bridge microprocessor architectures). The easiest way to determine if AES-NI is available is with the isainfo(1) command.
    For example:

        $ isainfo -v
        64-bit amd64 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov amd_sysc cx8 tsc fpu
        32-bit i386 applications
                pclmulqdq aes sse4.2 sse4.1 ssse3 popcnt tscp ahf cx16 sse3 sse2
                sse fxsr mmx cmov sep cx8 tsc fpu

    No special configuration or setup is needed to take advantage of this software. The Solaris libraries and kernel automatically determine whether they are running on an AESNI-capable machine and execute the correctly-tuned software for the current microprocessor.

    Summary

    Maximum throughput of AES cipher modes can be achieved by combining AES encryption with modes processing, interleaving encryption of 4 blocks at a time, and using Intel's wide 128-bit %xmm registers and instructions.

    References

    - "Block cipher modes of operation", Wikipedia. Good overview of AES modes (ECB, CBC, CTR, etc.)
    - "Advanced Encryption Standard", Wikipedia
    - "Current Modes", NIST. Describes NIST-approved block cipher modes (ECB, CBC, CFB, OFB, CCM, GCM)

    Read the article

  • JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    - by John-Brown.Evans
    JMS Step 1 - How to Create a Simple JMS Queue in Weblogic Server 11g

    This example shows the steps to create a simple JMS queue in WebLogic Server 11g for testing purposes, for example for use with the two sample programs QueueSend.java and QueueReceive.java, which will be shown in later examples. Additional, detailed information on JMS can be found in the following Oracle documentation:

    Oracle® Fusion Middleware Configuring and Managing JMS for Oracle WebLogic Server 11g Release 1 (10.3.6), Part Number E13738-06
    http://docs.oracle.com/cd/E23943_01/web.1111/e13738/toc.htm

    1. Introduction and Definitions

    A JMS queue in WebLogic Server is associated with a number of additional resources:

    JMS Server - A JMS server acts as a management container for resources within JMS modules. Some of its responsibilities include the maintenance of persistence and the state of messages and subscribers. A JMS server is required in order to create a JMS module.

    JMS Module - A JMS module is a definition which contains JMS resources such as queues and topics. A JMS module is required in order to create a JMS queue.

    Subdeployment - JMS modules are targeted to one or more WLS instances or a cluster. Resources within a JMS module, such as queues and topics, are also targeted to a JMS server or WLS server instances. A subdeployment is a grouping of targets; it is also known as advanced targeting.

    Connection Factory - A connection factory is a resource that enables JMS clients to create connections to JMS destinations.

    JMS Queue - A JMS queue (as opposed to a JMS topic) is a point-to-point destination type. A message is written to a specific queue or received from a specific queue.
    The objects used in this example are:

    Object Name            Type                 JNDI Name
    TestJMSServer          JMS Server           -
    TestJMSModule          JMS Module           -
    TestSubDeployment      Subdeployment        -
    TestConnectionFactory  Connection Factory   jms/TestConnectionFactory
    TestJMSQueue           JMS Queue            jms/TestJMSQueue

    2. Configuration Steps

    The following steps are done in the WebLogic Server Console, beginning with the left-hand navigation menu.

    2.1 Create a JMS Server

    Services > Messaging > JMS Servers
    Select New
    Name: TestJMSServer
    Persistent Store: (none)
    Target: soa_server1 (or choose an available server)
    Finish

    The JMS server should now be visible in the list with Health OK.

    2.2 Create a JMS Module

    Services > Messaging > JMS Modules
    Select New
    Name: TestJMSModule
    Leave the other options empty
    Targets: soa_server1 (or choose the same one as the JMS server)
    Press Next
    Leave "Would you like to add resources to this JMS system module" unchecked and press Finish.

    2.3 Create a SubDeployment

    A subdeployment is not necessary for the JMS queue to work, but it allows you to easily target subcomponents of the JMS module to a single target or group of targets. We will use the subdeployment in this example to target the following connection factory and JMS queue to the JMS server we created earlier.

    Services > Messaging > JMS Modules
    Select TestJMSModule
    Select the Subdeployments tab and New Subdeployment
    Name: TestSubdeployment
    Press Next

    Here you can select the target(s) for the subdeployment. You can choose either Servers (i.e. WebLogic managed servers, such as soa_server1) or JMS Servers such as the JMS server created earlier. As the purpose of our subdeployment in this example is to target a specific JMS server, we will choose the JMS Server option.

    Select the TestJMSServer created earlier
    Press Finish

    2.4 Create a Connection Factory

    Services > Messaging > JMS Modules
    Select TestJMSModule and press New
    Select Connection Factory and Next
    Name: TestConnectionFactory
    JNDI Name: jms/TestConnectionFactory
    Leave the other values at default
    On the Targets page, select the Advanced Targeting button and select TestSubdeployment
    Press Finish

    The connection factory should be listed on the following page with TestSubdeployment and TestJMSServer as the target.

    2.5 Create a JMS Queue

    Services > Messaging > JMS Modules
    Select TestJMSModule and press New
    Select Queue and Next
    Name: TestJMSQueue
    JNDI Name: jms/TestJMSQueue
    Template: None
    Press Next
    Subdeployments: TestSubdeployment
    Finish

    The TestJMSQueue should be listed on the following page with TestSubdeployment and TestJMSServer. Confirm the resources for the TestJMSModule: using the Domain Structure tree, navigate to soa_domain > Services > Messaging > JMS Modules, then select TestJMSModule. You should see the connection factory and queue resources.

    The JMS queue is now complete and can be accessed using the JNDI names jms/TestConnectionFactory and jms/TestJMSQueue. In the following blog post in this series, I will show you how to write a message to this queue, using the WebLogic sample Java program QueueSend.java.

    Read the article

  • Performing a clean database creation using msbuild

    - by Robert May
    So I’m taking a break from writing about other Agile stuff for a post. :)  I’m still going to get back to the other subjects, but this is fun too. Something I’ve done quite a bit of is MSBuild and CI work. I’m experimenting with ways to improve what I’ve done in the past, particularly around database CI.

    Today, I developed a mechanism for starting from scratch with your database. By scratch, I mean blowing away the existing database and creating it again from a single command line call. I’m a firm believer that developers should be able to get to a known clean state at the database level with a single command, and that they should be operating off of their own isolated database to improve productivity. These scripts will help that.

    Here’s how I did it. First, we have to disconnect users. I did so using the help of a script from SQL Server Central. Note that I’m using sqlcmd variable replacement.

        -- kills all the users in a particular database
        -- dlhatheway/3M, 11-Jun-2000
        declare @arg_dbname sysname
        declare @a_spid smallint
        declare @msg varchar(255)
        declare @a_dbid int

        set @arg_dbname = '$(DatabaseName)'

        select @a_dbid = sdb.dbid
        from master..sysdatabases sdb
        where sdb.name = @arg_dbname

        declare db_users insensitive cursor for
        select sp.spid
        from master..sysprocesses sp
        where sp.dbid = @a_dbid

        open db_users

        fetch next from db_users into @a_spid
        while @@fetch_status = 0
        begin
            select @msg = 'kill '+convert(char(5),@a_spid)
            print @msg
            execute (@msg)
            fetch next from db_users into @a_spid
        end

        close db_users
        deallocate db_users
        GO

    Once all users are booted from the database, we can commence with recreating the database. I generated the script that is used to create a database from SQL Server Management Studio, so I’m only going to show the important bits that weren’t generated. There are a bunch of Alter Database statements that aren’t shown.

    First, I had to find the default location of the database files in the install, since they can be in many different locations. I used Method 1 from a TechNet blog and then modified it a bit to do what I needed to do. I ended up using dynamic SQL because, for the life of me, I couldn’t get the "Filename" property to not return an error when I used anything besides a string. I’m dropping the database first, if it exists. Here’s the code:

        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = N'$(DatabaseName)')
        BEGIN
            drop database $(DatabaseName)
        END;
        go

        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        -- Create temp database. Because no options are given, the default data and
        -- log path locations are used
        CREATE DATABASE zzTempDBForDefaultPath;

        DECLARE @Default_Data_Path VARCHAR(512), @Default_Log_Path VARCHAR(512);

        --Get the default data path
        SELECT @Default_Data_Path = (
            SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 0);

        --Get the default Log path
        SELECT @Default_Log_Path = (
            SELECT LEFT(physical_name,LEN(physical_name)-CHARINDEX('\',REVERSE(physical_name))+1)
            FROM sys.master_files mf
            INNER JOIN sys.[databases] d ON mf.[database_id] = d.[database_id]
            WHERE d.[name] = 'zzTempDBForDefaultPath' AND type = 1);

        --Clean up.
        IF EXISTS(SELECT 1 FROM [master].[sys].[databases] WHERE [name] = 'zzTempDBForDefaultPath')
        BEGIN
            DROP DATABASE zzTempDBForDefaultPath
        END;

        DECLARE @SQL nvarchar(max)
        SET @SQL =
            'CREATE DATABASE $(DatabaseName) ON PRIMARY
            ( NAME = N''$(DatabaseName)'',
              FILENAME = N''' + @Default_Data_Path + N'$(DatabaseName)' + '.mdf' + ''',
              SIZE = 2048KB , FILEGROWTH = 1024KB )
            LOG ON
            ( NAME = N''$(DatabaseName)Log'',
              FILENAME = N''' + @Default_Log_Path + N'$(DatabaseName)' + '.ldf' + ''',
              SIZE = 1024KB , FILEGROWTH = 10%) '
        exec (@SQL)
        GO

    And with that, your database is created. You can run these scripts on any server and with any database name. To do that, I created an MSBuild script that looks like this:

        <Project xmlns="http://schemas.microsoft.com/developer/msbuild/2003" ToolsVersion="4.0">
          <PropertyGroup>
            <DatabaseName>MyDatabase</DatabaseName>
            <Server>localhost</Server>
            <SqlCmd>sqlcmd -v DatabaseName=$(DatabaseName) -S $(Server) -i </SqlCmd>
            <ScriptDirectory>.\Scripts</ScriptDirectory>
          </PropertyGroup>
          <Target Name="Rebuild">
            <ItemGroup>
              <ScriptFiles Include="$(ScriptDirectory)\*.sql"/>
            </ItemGroup>
            <Exec Command="$(SqlCmd) &quot;%(ScriptFiles.Identity)&quot;" ContinueOnError="false"/>
          </Target>
        </Project>

    Note that the Scripts directory is underneath the directory where I’m running the msbuild command and is relative to that directory. Note also that the target uses batching to run each script in the Scripts subdirectory, one after the other. Each script is passed to the sqlcmd command line execution using the .Identity property on the item group that is created. This target file is saved in the file "Database.target".

    To make this work, you’ll need msbuild in your path, and then run the following command:

        msbuild database.target /target:Rebuild

    Once you’ve got your virgin database set up, you’d then need a tool like dbdeploy.net to detect that it is a virgin database and build a change script from the accumulated change scripts, and then you’d want another sqlcmd call to update the database with the appropriate scripts. I’m doing that next, so I’ll post a blog update when I’ve got it working.

    Technorati Tags: MSBuild, Agile, CI, Database

    Read the article

  • How to configure Multi-tenant plugin as single-tenant with Spring security plugin as resolver?

    - by Fabien Barbier
    I can create a secure, multi-tenant web app with Grails by:

    - setting up the Spring Security plugin
    - setting up the Multi-tenant plugin (via multi-tenant install and multi-tenant-spring-security)
    - updating Config.groovy:

        tenant {
            mode = "multiTenant"
            resolver.type = "springSecurity"
        }

    - adding Integer userTenantId to the User domain class
    - adding a domain class for tenant Organization
    - associating the tenants with Organization
    - editing BootStrap.groovy

    Everything works fine in multi-tenant mode, but how do I use mode = "singleTenant"? This configuration does not seem to work:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
        }

    Edit: I tried this config:

        tenant {
            mode = "singleTenant"
            resolver.type = "springSecurity"
            datasourceResolver.type = "config"
            dataSourceTenantMap {
                t1 = "jdbc:hsqldb:file:custFoo"
                t2 = "jdbc:hsqldb:file:custBar"
            }
        }

    But I get:

        ERROR errors.GrailsExceptionResolver - Executing action [list] of controller [org.example.TicketController] caused exception: java.lang.StackOverflowError

    and:

        Caused by: java.lang.StackOverflowError
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy:50)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.this$2$getTenantIdFromSpringSecurity(SpringSecurityCurrentTenant.groovy)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant$this$2$getTenantIdFromSpringSecurity.callCurrent(Unknown Source)
            at org.grails.multitenant.springsecurity.SpringSecurityCurrentTenant.get(SpringSecurityCurrentTenant.groovy:41)
            at com.infusion.tenant.spring.TenantBeanContainer.getBean(TenantBeanContainer.java:53)
            at com.infusion.tenant.spring.TenantMethodInterceptor.invoke(TenantMethodInterceptor.java:32)
            at $Proxy14.getConnection(Unknown Source)

    Read the article

  • Developing Schema Compare for Oracle (Part 3): Ghost Objects

    - by Simon Cooper
    In the previous blog post, I covered how we solved the problem of dependencies between objects and between schemas. However, that isn’t the end of the issue. The dependencies algorithm I described works when you’re querying live databases and you can get dependencies for a particular schema direct from the server, and that’s all well and good. To throw a (rather large) spanner in the works, Schema Compare also has the concept of a snapshot, which is a read-only compressed XML representation of a selection of schemas that can be compared in the same way as a live database. This can be useful for keeping historical records or a baseline of a database schema, or comparing a schema on a computer that doesn’t have direct access to the database.

    So, how do snapshots interact with dependencies? Inter-database dependencies don't pose an issue, as we store the dependencies in the snapshot. However, comparing a snapshot to a live database with cross-schema dependencies does cause a problem; what if the live database has a dependency to an object that does not exist in the snapshot? Take a basic example schema, where you’re only populating SchemaA:

    SOURCE:
        CREATE TABLE SchemaA.Table1 (
            Col1 NUMBER REFERENCES SchemaB.Table1(col1));
        CREATE TABLE SchemaB.Table1 (
            Col1 NUMBER PRIMARY KEY);

    TARGET (using snapshot):
        CREATE TABLE SchemaA.Table1 (
            Col1 VARCHAR2(100));
        CREATE TABLE SchemaB.Table1 (
            Col1 VARCHAR2(100));

    In this case, we want to generate a sync script to synchronize SchemaA.Table1 on the database represented by the snapshot. When taking a snapshot, database dependencies are followed, but because you’re not comparing it to anything at the time, the comparison dependencies algorithm described in my last post cannot be used. So, as you only take a snapshot of SchemaA on the target database, SchemaB.Table1 will not be in the snapshot. If this snapshot is then used to compare against the above source schema, SchemaB.Table1 will be included in the source, but the object will not be found in the target snapshot.

    This is the same problem that was solved with comparison dependencies, but here we cannot use the comparison dependencies algorithm, as the snapshot has no information on SchemaB! We've now hit quite a big problem: we’re trying to include SchemaB.Table1 in the target, but we simply do not know the status of this object on the database the snapshot was taken from; whether it exists in the database at all, whether it’s the same as the target, whether it’s different...

    What can we do about this sorry state of affairs? Well, not a lot, it would seem. We can’t query the original database, as it may not be accessible, and we cannot assume any default state as it could be wrong and break the script (and we currently do not have a roll-back mechanism for failed synchronizes). The only way to fix this properly is for the user to go right back to the start and re-create the snapshot, explicitly including the schemas of these 'ghost' objects. So, the only thing we can do is flag up dependent ghost objects in the UI and ask the user what we should do with each one: assume it doesn’t exist, assume it’s the same as the target, or specify a definition for it. Unfortunately, such functionality didn’t make the cut for v1 of Schema Compare (as this is very much an edge case for a non-critical piece of functionality), so we simply flag the ghost objects up in the sync wizard as unsyncable, and let the user sort out what’s going on and edit the sync script as appropriate.
    There are some things we do to alleviate this rather unhappy situation: if a user creates a snapshot from the source or target of a database comparison, we include all the objects registered from the database, not just the ones in the schemas originally selected for comparison. This includes any extra dependent objects registered through the comparison dependencies algorithm. If the user then compares the resulting snapshot against the same database they were comparing against when it was created, the extra dependencies will be included in the snapshot as required, and everything will be good. Fortunately, this problem will come up quite rarely, and only when the user uses snapshots and tries to sync objects with unknown cross-schema dependencies. However, the solution is not an easy one, and it led to some difficult architecture and design decisions within the product. And all this pain follows from the simple decision to allow schema pre-filtering! Next: why adding a column to a table isn't as easy as you would think...

    Read the article

  • How to remove a child (MovieClip) and add it to a new parent (MovieClip)

    - by vineth
    Hi, I need to remove a child (MovieClip) from its parent (MovieClip) while dragging, and add the same child to another MovieClip when it is dropped. This method is used for dragging:

    function pickUp(event:MouseEvent):void {
        event.target.startDrag();
    }

    And this when I drop it:

    function dropIt(event:MouseEvent):void {
        event.target.parent.removeChild(event.target);         // remove from the previous parent clip
        event.target.dropTarget.parent.addChild(event.target); // add to the new MovieClip
    }

    But the clip is not visible or not available after dropping... Help me to overcome this problem.

    Read the article

  • editing a file with vim that has no EOL marker on the last line but has CRLF line endings

    - by rmeador
    I often have to edit script files whose interpreter treats a file with an EOL marker on the last line as an error (i.e. the interpreter treats CRLF as a line separator, a "newline", rather than as a line terminator, a "line ending"). Currently, I open these files in Vim using binary mode (-b on the command line). It autodetects the lack of EOL on the final line and sets the "noeol" option appropriately, which prevents it from writing an EOL on the last line. But because the file has CRLF line endings, I get lots of ^Ms at the end of my lines (binary mode seems to interpret only Unix-style line endings). I can't open the file in text mode either, because the "noeol" option is ignored for non-binary files. This is very annoying, and I always have to remember to manually type the ^M at the end of each line! Is there some way I can force Vim to accept DOS-style line endings in binary mode, or to honour the EOL option in text mode?
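    For what it's worth, a sufficiently recent Vim may remove the need for binary mode entirely: patch 7.4.785 added the 'fixendofline' option, which controls whether Vim appends a missing final EOL on write even in text mode. A sketch, assuming that version or later (the file name is a placeholder):

    :e ++ff=dos script.txt    " read CRLF as DOS line endings, so no ^M clutter
    :set nofixendofline       " preserve the missing EOL on the last line
    :w                        " writes CRLF endings, with no EOL after the final line

    On older Vims without 'fixendofline', people typically stay in binary mode and insert the ^M characters in bulk with a substitution rather than typing them by hand.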

    Read the article

  • Displaying console output?

    - by ClarkeyBoy
    I am currently creating a customer application for a local company. I have a DataGridView linked to the customers table, and I am trying to link it up so that updates, inserts and deletions are handled correctly. I am very new to C#, so I am starting with the basics (about 2 days ago I knew nothing, although I do know VB.NET, Java and several other languages). Anyhow, from what I understand, anything output through Debug.WriteLine should only appear when in debug mode (common sense really), but anything output through Console.WriteLine should appear whether or not I'm in debug mode. However, I have checked the Immediate and Output windows, and nothing is being output when running in normal (non-debug) mode. Does anyone have any idea why this is? Edit: I have event handlers for clicking a cell; it should output CellClicked and set the grid view to invisible when a cell is clicked. The latter works whichever mode I am in, but CellClicked is only output in debug mode. I am using Console.WriteLine("CellClicked").
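    For what it's worth, if the goal is output you can still see when running without the debugger, Trace may be a better fit than Console: in the default Visual Studio project templates the TRACE symbol is defined for both Debug and Release builds, so Trace.WriteLine stays compiled in, and you can point it at a file. A minimal sketch; the log file name is arbitrary:

    using System.Diagnostics;

    static class Logging
    {
        // Call once at startup, e.g. from Main().
        public static void Init()
        {
            Trace.Listeners.Add(new TextWriterTraceListener("gridview.log"));
            Trace.AutoFlush = true;   // flush after every write so nothing is lost
        }
    }

    // then, in the cell click handler:
    // Trace.WriteLine("CellClicked");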

    Read the article

  • Can't get my head around background workers in C#

    - by Connel
    I have written an application that syncs two folders together. The problem with the program is that it stops responding whilst copying files. A quick search of Stack Overflow told me I need to use something called a background worker. I have read a few pages on the net about this but find it really hard to understand, as I'm pretty new to programming. Below is the code for my application - how can I simply put all of the File.Copy(....) calls into their own background worker (if that's even how it works)? Below is the code for the button click event that runs the sub-procedure, and the sub-procedure where I wish to use a background worker on all the File.Copy lines. Button event:

    protected virtual void OnBtnSyncClicked(object sender, System.EventArgs e)
    {
        // set the running boolean to true
        booRunning = true;
        // set the progress bar to 0 and reset the values it uses
        prgProgressBar.Fraction = 0;
        dblCurrentStatus = 0;
        dblFolderSize = 0;
        // test if the user has entered the same folder for both target and destination
        if (fchDestination.CurrentFolder == fchTarget.CurrentFolder)
        {
            MessageDialog msdSame = new MessageDialog(this, DialogFlags.Modal, MessageType.Error,
                ButtonsType.Close, "You cannot sync two folders that are the same");
            msdSame.Title = "Error";
            ResponseType response = (ResponseType) msdSame.Run();
            // if the user clicks the close button or closes the window, destroy the message box
            if (response == ResponseType.Close || response == ResponseType.DeleteEvent)
            {
                msdSame.Destroy();
            }
            return;
        }
        // test if the user has entered a target folder that is an extension of the destination
        // folder, or a destination folder that is an extension of the target folder
        if (fchTarget.CurrentFolder.StartsWith(fchDestination.CurrentFolder) ||
            fchDestination.CurrentFolder.StartsWith(fchTarget.CurrentFolder))
        {
            MessageDialog msdContains = new MessageDialog(this, DialogFlags.Modal, MessageType.Error,
                ButtonsType.Close, "You cannot sync a folder with one of its parent folders");
            msdContains.Title = "Error";
            ResponseType response = (ResponseType) msdContains.Run();
            if (response == ResponseType.Close || response == ResponseType.DeleteEvent)
            {
                msdContains.Destroy();
            }
            return;
        }
        // get the folder sizes of the target and destination folders
        FileSizeOfTarget(fchTarget.CurrentFolder);
        FileSizeOfDestination(fchDestination.CurrentFolder);
        // run the sync procedures
        SyncTarget(fchTarget.CurrentFolder);
        SyncDestination(fchDestination.CurrentFolder);
        // inform the user the process is complete
        prgProgressBar.Text = "Finished";
        booRunning = false;
    }

    Sync sub-procedure:

    protected void SyncTarget(string strCurrentDirectory)
    {
        // all the directories and files in this directory
        string[] staAllDirectories = Directory.GetDirectories(strCurrentDirectory);
        string[] staAllFiles = Directory.GetFiles(strCurrentDirectory);
        // loop over each file in the directory
        foreach (string strFile in staAllFiles)
        {
            // the file's name without its path
            string strFileName = System.IO.Path.GetFileName(strFile);
            // the file's directory relative to the target folder
            string strDirectoryInsideTarget =
                System.IO.Path.GetDirectoryName(strFile).Substring(fchTarget.CurrentFolder.Length);
            // inform the user which file is being copied
            prgProgressBar.Text = "Syncing " + strFile;
            // if the file does not exist in the destination folder, copy it there
            // (the true below means overwrite if the file already exists)
            if (!File.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName))
            {
                File.Copy(strFile, fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName, true);
            }
            // if the file does exist in the destination folder, copy it only when the target copy is newer
            if (File.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName))
            {
                long lngTargetFileDate = File.GetLastWriteTime(strFile).ToFileTime();
                long lngDestinationFileDate = File.GetLastWriteTime(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName).ToFileTime();
                if (lngTargetFileDate > lngDestinationFileDate)
                {
                    File.Copy(strFile, fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget + "/" + strFileName, true);
                }
            }
            // add this file's size to the running total and update the progress bar
            FileInfo FileSize = new FileInfo(strFile);
            dblCurrentStatus = dblCurrentStatus + FileSize.Length;
            double dblPercentage = dblCurrentStatus / dblFolderSize;
            prgProgressBar.Fraction = dblPercentage;
        }
        // loop over each folder in the target folder
        foreach (string strDirectory in staAllDirectories)
        {
            // the directory relative to the target folder
            string strDirectoryInsideTarget = strDirectory.Substring(fchTarget.CurrentFolder.Length);
            // if the directory does not exist inside the destination folder, create it
            if (!Directory.Exists(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget))
            {
                Directory.CreateDirectory(fchDestination.CurrentFolder + "/" + strDirectoryInsideTarget);
            }
            // run the sync on all files in this directory
            SyncTarget(strDirectory);
        }
    }

    Any help will be greatly appreciated, as after this the program will pretty much be finished :D
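    As a rough illustration of the usual shape (this is a generic sketch, not a drop-in for the code above; all names are placeholders, and note that under GTK# you may additionally need Gtk.Application.Invoke for widget updates, since no WinForms-style synchronization context is installed): the copy loop moves into DoWork on a worker thread, and the UI is touched only from the progress and completion events.

    using System.ComponentModel;

    class SyncRunner
    {
        private readonly BackgroundWorker worker = new BackgroundWorker();

        public SyncRunner()
        {
            worker.WorkerReportsProgress = true;
            worker.DoWork += OnDoWork;                   // runs on a thread-pool thread
            worker.ProgressChanged += OnProgressChanged; // marshalled back via the sync context
            worker.RunWorkerCompleted += OnCompleted;
        }

        public void Start()
        {
            worker.RunWorkerAsync();   // returns immediately, so the UI stays responsive
        }

        private void OnDoWork(object sender, DoWorkEventArgs e)
        {
            // The long-running work (the File.Copy loops) lives here.
            // NO UI access in this method.
            for (int percent = 0; percent <= 100; percent += 10)
            {
                // ... copy one batch of files ...
                worker.ReportProgress(percent);
            }
        }

        private void OnProgressChanged(object sender, ProgressChangedEventArgs e)
        {
            // The safe place for progress-bar updates, e.g.:
            // prgProgressBar.Fraction = e.ProgressPercentage / 100.0;
        }

        private void OnCompleted(object sender, RunWorkerCompletedEventArgs e)
        {
            // Runs when DoWork finishes; set the "Finished" text here.
        }
    }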

    Read the article

  • SharePoint 2010 Diagnostic Studio Remote Diag

    - by juanlarios
    I have had some time this week to try out some tools that I have been meaning to try. This week I am trying out the SP2010 Diagnostic Studio. I installed it successfully and tried it on my development environment. I was able to build a report and a snapshot of the environment. I then decided to turn my attention to my employer's intranet environment. This would allow me to analyze it and measure it against benchmarks. I didn't want to install the Diagnostic Studio on the production environment; lucky for me, the Diagnostic Studio can be run remotely, well... kind of.

    Issue

    My development environment is a stand-alone, full installation of SharePoint 2010 Server. It has Office 2010, SQL 2008 Enterprise, a DC... well, you get the point, it's jam-packed! But more importantly, it's a stand-alone, self-contained VM environment. Microsoft has instructions on how to connect remotely with Diagnostic Studio here. The deceiving part of this is that SP2010 DS prompts you for credentials, so I thought I was using the right account to run the reports. I tried all the PowerShell commands in the link above, but I still ended up getting the following errors:

    06/28/2011 12:50:18    Connecting to remote server failed with the following error message: The WinRM client cannot process the request... If the SPN exists, but CredSSP cannot use Kerberos to validate the identity of the target computer and you still want to allow the delegation of the user credentials to the target computer, use gpedit.msc and look at the following policy: Computer Configuration -> Administrative Templates -> System -> Credentials Delegation -> Allow Fresh Credentials with NTLM-only Server Authentication. Verify that it is enabled and configured with an SPN appropriate for the target computer. For example, for a target computer name "myserver.domain.com", the SPN can be one of the following: WSMAN/myserver.domain.com or WSMAN/*.domain.com. Try the request again after these changes. For more information, see the about_Remote_Troubleshooting Help topic.

    06/28/2011 12:54:47    Access to the path '\\<targetserver>\C$\Users\<account logging in>\AppData\Local\Temp' is denied.

    You might also get an error message like this: The WinRM client cannot process the request. A computer policy does not allow the delegation of the user credentials to the target computer.

    Explanation

    After looking at the event logs on the target environment, I noticed several security exceptions. After looking at the specifics around who was denied access, I could see the account being denied: it was the client machine's administrator account. Well, of course that was never going to work!!! After some quick Googling, the last error message above leads you to edit the Local Group Policy on the client server. And although there are instructions from Microsoft around doing this, it really will not work in this scenario. Notice the description and how it only applies to the authentication mentioned?

    Resolution

    I can tell you what I did, but I wish there were a better way; I simply don't know if it's doable any other way. Because my development environment had its own DC, I didn't really want to mess with Kerberos authentication. It would also not be smart to connect that server to the domain, considering it has its own DC. I ended up installing SharePoint 2010 Diagnostic Studio on another Windows 7 dev environment I have, and connected that machine to the domain. I ran all the necessary remote-credential commands mentioned here; those commands add the group policy for you! Once I did this, I was able to authenticate properly and get the reports.

    Conclusion

    You can run SharePoint 2010 Diagnostic Studio remotely, but it requires some specific scenarios. A couple of things I should mention: as far as I understand, SP2010 DS will install agents on your target environment to run tests and retrieve the data. I was a Farm Administrator, and also a server admin on the SharePoint server. I am not 100% sure if you need all those permissions, but that's just what I have on my internal intranet. Ideally, I would like a machine with SharePoint 2010 Diagnostic Studio installed that I can run against client environments. It appears that I will not be able to do that unless I enable Kerberos on my Windows 7 machine. If you have it installed in the way I would like, please let me know; I'll keep trying to get what I'm after. Hope this helps someone out there doing the same.
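    For reference, the remote-credential setup amounts to enabling CredSSP delegation on both ends. A sketch of the PowerShell involved (the domain name is a placeholder, and CredSSP delegation has security implications, so scope it as narrowly as you can):

    # On the client machine running Diagnostic Studio:
    Enable-WSManCredSSP -Role Client -DelegateComputer "*.yourdomain.com" -Force

    # On the target SharePoint server:
    Enable-WSManCredSSP -Role Server -Force

    The client-side command is what adds the credential-delegation group policy entry for you, which is what the "remote credentials commands" above refer to.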

    Read the article

  • Java dynamic proxy questions.

    - by Tony
    1. Does a dynamic proxy instance subclass the target class? The Javadoc says the proxy instance implements "a list of interfaces" and says nothing about subclassing, but through debugging I saw that the proxy instance did inherit the target class's properties. What does "a list of interfaces" mean? Can I exclude the interfaces implemented by the target class?
    2. Can I invoke target-class-specific methods on a proxy instance?
    3. I think a dynamic proxy is a proxy for interface method invocations rather than a proxy for the target class; is that right? (I am deeply influenced by the Hibernate proxy object concept.)
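    For reference, a minimal sketch of how a JDK dynamic proxy is built (the Greeter interface and the handler here are made up for illustration). Note that the generated class extends java.lang.reflect.Proxy and implements only the listed interfaces; it does not subclass the target class, so only interface methods can be invoked through it:

    import java.lang.reflect.InvocationHandler;
    import java.lang.reflect.Proxy;

    interface Greeter { String greet(String name); }

    public class ProxyDemo {
        public static void main(String[] args) {
            Greeter target = name -> "Hello, " + name;   // the real object
            // The handler receives every interface method call made on the proxy.
            InvocationHandler handler = (proxy, method, methodArgs) -> {
                System.out.println("calling " + method.getName());
                return method.invoke(target, methodArgs); // delegate to the target
            };
            Greeter proxy = (Greeter) Proxy.newProxyInstance(
                    Greeter.class.getClassLoader(),
                    new Class<?>[] { Greeter.class },     // the "list of interfaces"
                    handler);
            System.out.println(proxy.greet("world"));
            // proxy instanceof Greeter is true, but the proxy is NOT an instance of
            // the target's concrete class, so target-class-only methods are unreachable.
        }
    }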

    Read the article

  • Access Database Connection

    - by Gerory
    I am using a C#.NET application with an OleDbConnection to open an Access database in exclusive mode. What exactly happens in the back end when we open a database in exclusive mode? It does not increase the size of the database compared with shared-mode connectivity. Does it do a compact or repair when it closes an exclusive-mode connection? We are facing unexpected errors when opening a connection in exclusive mode, and they occur even though we have disposed of all connections properly. Please share your views... if it does a compact/repair when closing the connection, is there a way to wait before opening a new connection?
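    As a point of reference, opening an Access database exclusively over OLE DB is normally just a connection-string setting; there is no documented compact or repair on close. A minimal sketch (the file path is a placeholder, and this assumes the Jet provider):

    using System;
    using System.Data.OleDb;

    class ExclusiveOpenDemo
    {
        static void Main()
        {
            var connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                          @"Data Source=C:\data\app.mdb;" +
                          @"Mode=Share Exclusive;";
            using (var conn = new OleDbConnection(connStr))
            {
                conn.Open();   // fails if anyone else has the file open
                // ... work with the database ...
            }   // Dispose closes the connection and releases the lock
        }
    }

    One possible explanation for errors on an immediate reopen: OLE DB session pooling can hold the underlying file lock for a short time after Dispose, so an immediate exclusive reopen can fail. Adding "OLE DB Services=-4;" to the connection string disables pooling, which may be worth testing.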

    Read the article

  • Business Case for investing time developing Stubs and BizUnit Tests

    - by charlie.mott
    I was recently in a position where I had to justify why effort should be spent developing Stubbed Integration Tests for BizTalk solutions. These tests are usually developed using the BizUnit framework. I assumed that most seasoned BizTalk developers would consider this best practice. Even though Microsoft suggests the use of BizUnit on MSDN, I've not found a single site listing the justifications for investing time writing stubs and BizUnit tests.

    Stubs

    Stubs should be developed to isolate your development team from external dependencies. This is described by Michael Stephenson here. Failing to do this can result in the following problems:

    - In contract-first scenarios, the external system interface will have been defined, but the interface may not have been set up, or even developed yet, for the BizTalk developers to work with.
    - By the time you open the target location to see the data BizTalk has sent, it may have been swept away.
    - If you are relying on the UI of the target system to see the data BizTalk has sent, what do you do if it fails to arrive? It may take time for the data to be processed, or it may be scheduled to be processed later.
    - Learning how to use the source/target systems, and investigating where things go wrong in those systems, will slow down the BizTalk development effort.
    - By the time the data is visible in a UI, it may have undergone further transformations.
    - In larger development teams working together, do you all use the same source and target instances? How do you know which data was created by whose tests? How do you know which event log error messages are whose? Another developer may have "cleaned up" your data.
    - It is harder to write BizUnit tests that clean up the data/logs after each test run.
    - What if your B2B partners' source or target systems cannot support the sort of testing you want to do? They may not even have a development or test instance that you can work with. Their single test instance may be used by the SIT/UAT teams.
    - There may be licensing costs in setting up instances of the external system.

    The stubs I like to use are generic stubs that can accept or return any message type. Usually I need to create one per protocol. They should be driven by BizUnit steps to validate the data received and to select a response message (or error response). Once built, they can be re-used for many integration tests and from project to project.

    I'm not saying that developers should never test against a real instance. Every so often, you still need to connect to real developer or test instances of the source and target endpoints/services. The interface developers may ask you to send them some data to see if everything still works, or you might want some messages sent to BizTalk to gain confidence that everything still works beyond BizTalk.

    Tests

    Automated "Stubbed Integration Tests" are usually built using the BizUnit framework. These facilitate testing of the entire integration process from source stub to target stub. This ensures that all of the BizTalk components are configured together correctly to meet the requirements. More fine-grained unit testing of individual BizTalk components is still encouraged, but BizUnit provides much the easiest way to test some component types (e.g. orchestrations). Using BizUnit with the Behaviour Driven Development approach described by Michael Stephenson delivers the following benefits (source: http://biztalkbddsample.codeplex.com - Video 1):

    - Requirements can be easily defined using Given/When/Then.
    - Requirements are close to the code, so they are easier to manage as features and scenarios.
    - Requirements are defined in the domain language.
    - The feature files can be used as part of the documentation.
    - The documentation is accurate to the build of the code and can be published with a release.
    - The scenarios document the behaviour effectively without being excessive.
    - The scenarios are maintained with the code.
    - There's an abstraction between the intention and implementation of tests, making them easier to understand.
    - The requirements drive the testing.

    These same tests can also be used to drive load testing, as described here.

    If you don't do this...

    If you don't follow the above "Stubbed Integration Tests" approach, the developer will need to trigger the tests manually. This has the following risks:

    - Developers are unlikely to check all the scenarios, and all the expected conditions, each time.
    - After the developer leaves, these manual test steps may be lost. What test scenarios are there? What test messages did they use for each scenario?
    - There is no mechanism to prove adequate test coverage.

    A test team may attempt to automate integration test scenarios in a test environment by triggering tests from a source system UI. If this is a replacement for BizUnit tests, then it carries the following risks:

    - It moves the tests downstream, so problems will be found later in the process.
    - Testers may not check all the expected conditions within the BizTalk infrastructure, such as event logs, suspended messages, etc.
    - These automated tests may also get in the way of manual tests run on those environments.

    Read the article

  • Accessing and modifying a tab opened using window.open in Google Chrome

    - by sonofdelphi
    I used to be able to do this to create an exported HTML page containing some data, but the code no longer works with the latest version of Google Chrome (it works fine with Chrome 5.0.307.11 beta and all other major browsers).

    function createExport(text) {
        var target = window.open();
        target.title = 'Memonaut - Exported View';
        target.document.open();
        target.document.write(text);
        target.document.close();
    }

    Chrome now complains that the domains don't match and disallows the JavaScript calls as unsafe. How can I access and modify the document of a newly opened browser tab in such a scenario?

    Read the article

  • Is ReaderWriterLockSlim.EnterUpgradeableReadLock() essentially the same as Monitor.Enter()?

    - by Neil Barnwell
    So I have a situation where I may have many, many reads and only the occasional write to a resource shared between multiple threads. A long time ago I read about ReaderWriterLock, and I have read about ReaderWriterGate, which attempts to mitigate the issue where many incoming writes trump reads and hurt performance. However, now I've become aware of ReaderWriterLockSlim... From the docs, I believe that there can only be one thread in "upgradeable mode" at any one time. In a situation where the only access I'm using is EnterUpgradeableReadLock() (which is appropriate for my scenario), is there much difference from just sticking with lock(){}? Here's the excerpt: "A thread that tries to enter upgradeable mode blocks if there is already a thread in upgradeable mode, if there are threads waiting to enter write mode, or if there is a single thread in write mode." Or does the recursion policy make any difference to this?
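    Not quite the same, for what it's worth: a thread in upgradeable mode excludes other upgradeable and write threads, but plain readers still get through, which lock(){} cannot allow. A minimal sketch of the pattern (the cache here is hypothetical):

    using System.Collections.Generic;
    using System.Threading;

    class Cache
    {
        private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();
        private readonly Dictionary<string, string> _map = new Dictionary<string, string>();

        public string GetOrAdd(string key)
        {
            _lock.EnterUpgradeableReadLock();   // only one thread at a time here...
            try
            {
                string value;
                if (_map.TryGetValue(key, out value))
                    return value;               // ...but plain readers are NOT blocked
                _lock.EnterWriteLock();         // upgrade only when we must mutate
                try
                {
                    value = "computed:" + key;
                    _map[key] = value;
                    return value;
                }
                finally { _lock.ExitWriteLock(); }
            }
            finally { _lock.ExitUpgradeableReadLock(); }
        }

        public string TryRead(string key)
        {
            _lock.EnterReadLock();              // many threads run here concurrently
            try
            {
                string value;
                return _map.TryGetValue(key, out value) ? value : null;
            }
            finally { _lock.ExitReadLock(); }
        }
    }

    So if all reads went through EnterUpgradeableReadLock() and nothing else, the behaviour would indeed degenerate to something close to a mutual-exclusion lock; the benefit only appears when most readers take the plain read lock.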

    Read the article

  • Passing Data thru NSTimer UserInfo

    - by zorro2b
    I am trying to pass data through the userInfo argument of an NSTimer call. What is the best way to do this? I am trying to use an NSDictionary; this is simple enough when I have Objective-C objects, but what about other data? I want to do something like this, which doesn't work as is:

    - (void) play:(SystemSoundID)sound target:(id)target callbackSelector:(SEL)selector {
        NSLog(@"pause ipod");
        [iPodController pause];
        theSound = sound;
        NSMutableDictionary *cb = [[NSMutableDictionary alloc] init];
        [cb setObject:(id)&sound forKey:@"sound"];
        [cb setObject:target forKey:@"target"];
        [cb setObject:(id)&selector forKey:@"selector"];
        [NSTimer scheduledTimerWithTimeInterval:0 target:self selector:@selector(notifyPause1:) userInfo:(id)cb repeats:NO];
    }

    Read the article

  • Oracle SQL Update query takes days to update

    - by B Senthil Kumar
    I am trying to update a record in the target table based on the record coming in from the source. For instance, if the incoming record is present in the target table, I update it in the target; otherwise I simply insert it. I have over one million records in my source, while my target has 46 million records. The target table is partitioned on a calendar key. I implement this whole logic using Informatica. The Informatica code looks perfectly fine in the session log, but the update takes a very long time (more than 5 days to update one million records). Any suggestions as to what can be done to improve the performance?
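    If the logic can be pushed down to the database, one option worth profiling is a single set-based MERGE instead of row-by-row updates; including the partition key in the join condition lets Oracle prune partitions. A sketch with hypothetical table and column names:

    MERGE INTO target_tbl t
    USING source_tbl s
    ON (t.customer_id = s.customer_id AND t.calendar_key = s.calendar_key)
    WHEN MATCHED THEN UPDATE SET
        t.amount     = s.amount,
        t.updated_on = SYSDATE
    WHEN NOT MATCHED THEN INSERT
        (customer_id, calendar_key, amount, updated_on)
    VALUES
        (s.customer_id, s.calendar_key, s.amount, SYSDATE);

    Also worth checking: an index on the target's join columns (so each update is not a full scan) and Informatica's commit interval, since very small commit intervals can slow bulk updates considerably.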

    Read the article

  • How do you do masked password entry on Windows using character overwriting?

    - by Erick
    Currently I am using this implementation to hide user input during password entry:

    void setEcho(bool enable)
    {
        HANDLE hStdin = GetStdHandle(STD_INPUT_HANDLE);
        DWORD mode;
        GetConsoleMode(hStdin, &mode);
        if (enable) {
            mode |= ENABLE_ECHO_INPUT;
        } else {
            mode &= ~ENABLE_ECHO_INPUT;
        }
        SetConsoleMode(hStdin, mode);
    }

    The user needs some positive feedback that text entry is being registered. What techniques are available in a Win32 environment using C++?
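    One common approach, for what it's worth, is to leave echo off and print the asterisks yourself with _getch() from <conio.h> (MSVC-specific). A minimal sketch; extended keys and edge cases are deliberately glossed over:

    #include <conio.h>
    #include <iostream>
    #include <string>

    // Reads a line without echoing it, printing '*' for each character typed.
    std::string readPassword()
    {
        std::string password;
        for (;;)
        {
            int ch = _getch();                  // reads one key with no echo
            if (ch == '\r' || ch == '\n')
                break;                          // Enter finishes input
            if (ch == 0 || ch == 0xE0)          // function/arrow keys arrive as two
            {                                   // codes; swallow and ignore them
                _getch();
                continue;
            }
            if (ch == '\b')                     // Backspace: erase last char and star
            {
                if (!password.empty())
                {
                    password.pop_back();
                    std::cout << "\b \b";
                }
            }
            else
            {
                password.push_back(static_cast<char>(ch));
                std::cout << '*';
            }
        }
        std::cout << std::endl;
        return password;
    }

    int main()
    {
        std::cout << "Password: ";
        std::string pw = readPassword();
        std::cout << "Read " << pw.size() << " characters\n";
    }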

    Read the article
