Search Results

Search found 17063 results on 683 pages for 'window restore'.


  • Explain the following VS 2010 extension sample code

    - by ealshabaan
    Coders, I am building a VS 2010 extension and I am experimenting with some of the samples that came with the VS 2010 SDK. One of the sample projects is called TextAdornment. In that project there is a strange-looking class declared like this:

        [Export(typeof(IWpfTextViewCreationListener))]
        [ContentType("text")]
        [TextViewRole(PredefinedTextViewRoles.Document)]
        internal sealed class TextAdornment1Factory : IWpfTextViewCreationListener

    While experimenting with this project, I debugged it to see the flow of the program, and I noticed that this class gets hit as soon as I start debugging. Now my question is: what makes this class the first class to get called when VS starts? In other words, why does this class become active and run, as if some code had instantiated an object of its type? Here are the only two files in the sample project.

    TextAdornment1Factory.cs:

        using System.ComponentModel.Composition;
        using Microsoft.VisualStudio.Text.Editor;
        using Microsoft.VisualStudio.Utilities;

        namespace TextAdornment1
        {
            #region Adornment Factory

            /// <summary>
            /// Establishes an adornment layer to place the adornment on and exports
            /// the IWpfTextViewCreationListener that instantiates the adornment on
            /// the event of a text view's creation.
            /// </summary>
            [Export(typeof(IWpfTextViewCreationListener))]
            [ContentType("text")]
            [TextViewRole(PredefinedTextViewRoles.Document)]
            internal sealed class TextAdornment1Factory : IWpfTextViewCreationListener
            {
                /// <summary>
                /// Defines the adornment layer for the adornment. This layer is
                /// ordered after the selection layer in the Z-order.
                /// </summary>
                [Export(typeof(AdornmentLayerDefinition))]
                [Name("TextAdornment1")]
                [Order(After = PredefinedAdornmentLayers.Selection, Before = PredefinedAdornmentLayers.Text)]
                [TextViewRole(PredefinedTextViewRoles.Document)]
                public AdornmentLayerDefinition editorAdornmentLayer = null;

                /// <summary>
                /// Instantiates a TextAdornment1 manager when a textView is created.
                /// </summary>
                /// <param name="textView">The <see cref="IWpfTextView"/> upon which the adornment should be placed</param>
                public void TextViewCreated(IWpfTextView textView)
                {
                    new TextAdornment1(textView);
                }
            }

            #endregion //Adornment Factory
        }

    TextAdornment1.cs:

        using System.Windows;
        using System.Windows.Controls;
        using System.Windows.Media;
        using Microsoft.VisualStudio.Text;
        using Microsoft.VisualStudio.Text.Editor;
        using Microsoft.VisualStudio.Text.Formatting;

        namespace TextAdornment1
        {
            /// <summary>
            /// TextAdornment1 places red boxes behind all the "a"s in the editor window.
            /// </summary>
            public class TextAdornment1
            {
                IAdornmentLayer _layer;
                IWpfTextView _view;
                Brush _brush;
                Pen _pen;
                ITextView textView;

                public TextAdornment1(IWpfTextView view)
                {
                    _view = view;
                    _layer = view.GetAdornmentLayer("TextAdornment1");
                    textView = view;

                    // Listen to any event that changes the layout (text changes, scrolling, etc.)
                    _view.LayoutChanged += OnLayoutChanged;
                    _view.Closed += new System.EventHandler(_view_Closed);
                    //selectedText();

                    // Create the pen and brush to color the box behind the a's
                    Brush brush = new SolidColorBrush(Color.FromArgb(0x20, 0x00, 0x00, 0xff));
                    brush.Freeze();
                    Brush penBrush = new SolidColorBrush(Colors.Red);
                    penBrush.Freeze();
                    Pen pen = new Pen(penBrush, 0.5);
                    pen.Freeze();

                    _brush = brush;
                    _pen = pen;
                }

                void _view_Closed(object sender, System.EventArgs e)
                {
                    MessageBox.Show(textView.Selection.IsEmpty.ToString());
                }

                /// <summary>
                /// On layout change add the adornment to any reformatted lines.
                /// </summary>
                private void OnLayoutChanged(object sender, TextViewLayoutChangedEventArgs e)
                {
                    foreach (ITextViewLine line in e.NewOrReformattedLines)
                    {
                        this.CreateVisuals(line);
                    }
                }

                private void selectedText()
                {
                }

                /// <summary>
                /// Within the given line add the scarlet box behind the a.
                /// </summary>
                private void CreateVisuals(ITextViewLine line)
                {
                    // Grab a reference to the lines in the current TextView
                    IWpfTextViewLineCollection textViewLines = _view.TextViewLines;
                    int start = line.Start;
                    int end = line.End;

                    // Loop through each character, and place a box around any a
                    for (int i = start; i < end; ++i)
                    {
                        if (_view.TextSnapshot[i] == 'a')
                        {
                            SnapshotSpan span = new SnapshotSpan(_view.TextSnapshot, Span.FromBounds(i, i + 1));
                            Geometry g = textViewLines.GetMarkerGeometry(span);
                            if (g != null)
                            {
                                GeometryDrawing drawing = new GeometryDrawing(_brush, _pen, g);
                                drawing.Freeze();
                                DrawingImage drawingImage = new DrawingImage(drawing);
                                drawingImage.Freeze();
                                Image image = new Image();
                                image.Source = drawingImage;

                                // Align the image with the top of the bounds of the text geometry
                                Canvas.SetLeft(image, g.Bounds.Left);
                                Canvas.SetTop(image, g.Bounds.Top);
                                _layer.AddAdornment(AdornmentPositioningBehavior.TextRelative, span, null, image, null);
                            }
                        }
                    }
                }
            }
        }
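    For context: the [Export] attribute comes from System.ComponentModel.Composition, i.e. the Managed Extensibility Framework (MEF), which the using directive at the top of the factory file already pulls in. The VS 2010 editor scans extension assemblies for exports of IWpfTextViewCreationListener and instantiates them itself, which is why the factory runs without any explicit new anywhere in the sample. A minimal self-contained sketch of that discovery mechanism follows; the IGreeting/ConsoleGreeting/Host names are invented stand-ins for IWpfTextViewCreationListener/TextAdornment1Factory/the VS editor, not part of the sample.

        using System;
        using System.ComponentModel.Composition;
        using System.ComponentModel.Composition.Hosting;

        public interface IGreeting { void Say(); }

        // Like [Export(typeof(IWpfTextViewCreationListener))] in the sample:
        // the attribute alone registers the class with MEF.
        [Export(typeof(IGreeting))]
        public class ConsoleGreeting : IGreeting
        {
            public void Say() { Console.WriteLine("Composed by MEF"); }
        }

        class Host
        {
            static void Main()
            {
                // The host scans an assembly for [Export]s; no explicit 'new' anywhere.
                var catalog = new AssemblyCatalog(typeof(Host).Assembly);
                var container = new CompositionContainer(catalog);

                // Asking for the contract instantiates every matching export.
                foreach (var greeting in container.GetExportedValues<IGreeting>())
                    greeting.Say();
            }
        }

    Visual Studio does the equivalent of the Host class above when it creates a text view, which is when TextViewCreated() gets called on every exported listener.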


  • Sending mail from HTML through PHP redirects to the PHP page; how do I stay on the same contact form?

    - by Ershad
    Hi, I have an HTML page with images and other text, and below that I have the contact form. I am new to PHP. I have used PHP to send the mail, but when I click the send button the browser goes to backend.php and I get a blank page. What I need is for it to stay on the same page. I have written some JavaScript to show and hide divs, and with this I show a thank-you message on the same page, but I don't want the browser to visibly go to backend.php. To stay on the same page I have redirected the PHP output to another window. My code is:

    backend.php:

        <?php
        if ($_POST['msg']) {   // the textarea is named 'msg'; there is no 'message' field in the form
            $body  = "Name: ".$_POST['name'];
            $body .= "<br>Email: ".$_POST['email'];
            $body .= "<br>Phone/Mobile: ".$_POST['phone'];   // missing ';' added
            $body .= "<br>Address: ".$_POST['address'];      // missing ';' added
            $body .= "<br>Message: ".$_POST['msg'];          // missing ';' added
            if (mail("[email protected]", "Subjects", $body))
                echo 'true';
            else
                echo 'false';                                // stray ';' removed from inside the string
        }
        ?>

    Form code:

        <form onsubmit="return validateFormOnSubmit(this)" action="backend.php" method="post" target="_blank">
          <table align="left">
            <tbody>
              <tr>
                <td align="left"><label for="name">Enter your name:</label>
                    <label style="color:red;display:none;font-size:18px;" id="namestar">*</label></td>
              </tr>
              <tr><td align="left"><input id="name" name="name" style="width:515px;" size="35" maxlength="50" type="text"></td></tr>
              <tr><td height="12px"/></tr>
              <tr>
                <td align="left"><label for="email">Email Address:</label>
                    <label style="color:red;display:none;font-size:18px;" id="emailstar">*</label></td>
              </tr>
              <tr><td align="left"><input id="email" name="email" style="width:515px;" type="text"></td></tr>
              <tr><td height="12px"/></tr>
              <tr>
                <td align="left"><label for="phone">Phone/Mobile:</label>
                    <label style="color:red;font-size:18px;display:none;" id="phonestar">*</label></td>
              </tr>
              <tr><td align="left"><input id="phone" name="phone" style="width:515px;" type="text"></td></tr>
              <tr><td height="12px"/></tr>
              <tr>
                <td align="left"><label for="address">Your Address:</label>
                    <label style="color:red;font-size:18px;display:none;" id="addressstar">*</label></td>
              </tr>
              <tr><td align="left"><input id="address" name="address" style="width:515px;" type="text"></td></tr>
              <tr><td height="12px"/></tr>
              <tr>
                <td align="left"><label for="msg">Enter Your Comments / Suggestions / Enquiries</label>
                    <label style="color:red;font-size:18px;display:none;" id="messagestar">*</label></td>
              </tr>
              <tr><td align="left"><textarea id="msg" name="msg" style="width:515px;height:100px;"></textarea></td></tr>
              <tr><td height="12px"/></tr>
              <tr>
                <td align="right">
                  <table align="left">
                    <tbody>
                      <tr>
                        <td style="width:160px;"/>
                        <td style="width:250px;"><label id="mandatory" style="display:none;color:red;">* Mandatory fields</label></td>
                        <td><input type="submit" name="send" value="send" /></td>
                        <td><input type="reset" name="reset" value="reset" onclick="resetAll()" /></td>
                      </tr>
                    </tbody>
                  </table>
                </td>
              </tr>
            </tbody>
          </table>
        </form>
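    What the asker describes (sending the mail while staying on the page) is usually done by submitting the form in the background instead of navigating to backend.php. A minimal sketch follows, assuming jQuery is available and an id="contact-form" has been added to the <form> tag; the #thanks and #sendfailed elements are hypothetical divs to show or hide, none of which are in the original markup. With this approach, the target="_blank" attribute on the form is no longer needed.

        <script>
        $(document).ready(function () {
            $('#contact-form').submit(function (e) {
                e.preventDefault();                         // stop the normal jump to backend.php
                $.post('backend.php', $(this).serialize(), function (result) {
                    // backend.php echoes 'true' or 'false'
                    if (result === 'true') {
                        $('#thanks').show();                // hypothetical thank-you div
                    } else {
                        $('#sendfailed').show();            // hypothetical error div
                    }
                });
            });
        });
        </script>

    The existing validateFormOnSubmit(this) check can be called at the top of the submit handler before the $.post() if the validation should still run first.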


  • Files get uploaded just before they get cancelled

    - by user1763986
    Got a little situation here where I am trying to cancel a file's upload. What I have done is state that if the user clicks the "Cancel" button, the iframe is simply removed, so that the request never reaches the page that uploads the file to the server and inserts the data into the database. This works fine if the user clicks the "Cancel" button quickly. The problem I have realised, though, is that if the user clicks "Cancel" very late, the iframe is sometimes not removed in time, meaning the file was uploaded just before the user clicked the "Cancel" button. So my question is: if the file does somehow get uploaded before the user clicks "Cancel", is there a way to delete the data from the database and remove the file from the server? Below is the image upload form:

        <form action="imageupload.php" method="post" enctype="multipart/form-data"
              target="upload_target_image" onsubmit="return imageClickHandler(this);"
              class="imageuploadform">
          <p class="imagef1_upload_process" align="center">
            Loading...<br/>
            <img src="Images/loader.gif" />
          </p>
          <p class="imagef1_upload_form" align="center">
            <br/>
            <span class="imagemsg"></span>
            <label>Image File: <input name="fileImage" type="file" class="fileImage" /></label><br/>
            <br/>
            <label class="imagelbl"><input type="submit" name="submitImageBtn" class="sbtnimage" value="Upload" /></label>
          </p>
          <p class="imagef1_cancel" align="center">
            <input type="reset" name="imageCancel" class="imageCancel" value="Cancel" />
          </p>
          <!-- the broken style value border:0px;solid;#fff; has been fixed below -->
          <iframe class="upload_target_image" name="upload_target_image" src="#"
                  style="width:0px;height:0px;border:0px solid #fff;"></iframe>
        </form>

    Below is the jQuery function which controls the "Cancel" button:

        $(imageuploadform).find(".imageCancel").on("click", function (event) {
            // note: this statement had no effect as originally written
            // ('contentwindow' is a typo; the DOM property is 'contentWindow')
            $('.upload_target_image').get(0).contentWindow;
            $("iframe[name='upload_target_image']").attr("src", "javascript:'<html></html>'");
            return stopImageUpload(2);
        });

    Below is the PHP code where it uploads the file and inserts the data into the database. The form above posts to this PHP page, "imageupload.php":

        <body>
        <?php
        include('connect.php');
        session_start();

        $result = 0;

        // upload the file
        move_uploaded_file($_FILES["fileImage"]["tmp_name"],
                           "ImageFiles/" . $_FILES["fileImage"]["name"]);
        $result = 1;

        // set up the INSERT query to insert the name of the image file into the "Image" table
        $imagesql = "INSERT INTO Image (ImageFile) VALUES (?)";

        // prepare the above SQL statement
        if (!$insert = $mysqli->prepare($imagesql)) {
            // Handle errors with prepare operation here
        }

        // assign the variable first, then bind it (the original bound $img
        // before assigning it, which only worked because bind_param() takes
        // its arguments by reference)
        $img = 'ImageFiles/' . $_FILES['fileImage']['name'];
        $insert->bind_param("s", $img);

        // execute INSERT query
        $insert->execute();
        if ($insert->errno) {
            // Handle query error here
        }
        $insert->close();

        // retrieve the ImageId of the last uploaded file
        $lastID = $mysqli->insert_id;

        // insert into the Image_Question table, using the last retrieved ImageId
        $imagequestionsql = "INSERT INTO Image_Question (ImageId, SessionId, QuestionId) VALUES (?, ?, ?)";

        // prepare the above SQL statement
        if (!$insertimagequestion = $mysqli->prepare($imagequestionsql)) {
            // Handle errors with prepare operation here
            echo "Prepare statement err imagequestion";
        }

        // retrieve the question number
        $qnum = (int)$_POST['numimage'];

        // bind_param() needs variables, not literals: the original passed the
        // string literal 'Exam' directly, which is a fatal error
        $sessionId = 'Exam';
        $insertimagequestion->bind_param("isi", $lastID, $sessionId, $qnum);

        // execute INSERT query
        $insertimagequestion->execute();
        if ($insertimagequestion->errno) {
            // Handle query error here
        }
        $insertimagequestion->close();
        ?>

        <!-- JavaScript which will output the message depending on the status of
             the upload (successful, failed or cancelled) -->
        <script>
        window.top.stopImageUpload(<?php echo $result; ?>, '<?php echo $_FILES['fileImage']['name'] ?>');
        </script>
        </body>

    UPDATE: Below is the PHP code, "cancelimage.php", where I want to delete the cancelled file from the server and delete the record from the database. It is set up but not finished; can somebody finish it off so I can retrieve the name of the file and its id using $_SESSION?

        <?php
        // connect to the database
        include('connect.php');

        /* check connection */
        if (mysqli_connect_errno()) {
            printf("Connect failed: %s\n", mysqli_connect_error());
            die();
        }

        // remove the file from the server
        unlink("ImageFiles/....");   // need to retrieve the file name here, where the .... is

        // DELETE query that removes the cancelled file from both the Image and Image_Question tables
        $imagedeletesql = "DELETE img, img_q
                           FROM Image AS img
                           LEFT JOIN Image_Question AS img_q ON img_q.ImageId = img.ImageId
                           WHERE img.ImageFile = ?";

        // prepare the delete query
        if (!$delete = $mysqli->prepare($imagedeletesql)) {
            // Handle errors with prepare operation here
        }

        // don't pass data directly to bind_param; store it in a variable
        $delete->bind_param("s", $img);

        // execute DELETE query
        $delete->execute();
        if ($delete->errno) {
            // Handle query error here
        }
        $delete->close();
        ?>

    Can you please provide sample code in your answer to make it easier for me? Thank you.


  • Tables wrapping to the next line when width is 100%

    - by jmo
    I'm encountering some weirdness with tables in CSS. The layout is fairly simple: a fixed-width nav bar on the left and the content on the right. When the content includes a table with a width of 100%, the table gets pushed down until it has room to take up the full width of the screen (instead of just the area to the right of the nav bar). If I remove the width: 100% from the table's CSS, it looks fine, but obviously the table doesn't grow to fill the space of the div. The problem is that I want the table to grow and shrink with the window but still stay within the bounds of its div. Thanks. Here's a simple example:

        <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.01 Transitional//EN" "http://www.w3.org/TR/html4/loose.dtd">
        <html>
        <head>
          <title>Test</title>
          <style type="text/css">
            #content {
              padding-right: 20px;
              background: white;
              overflow: hidden;
              margin: 20px;
            }
            #content .column {
              position: relative;
              padding-bottom: 20010px;
              margin-bottom: -20000px;
            }
            #center {
              width: 100%;
              padding-top: 15px;
            }
            body {
              min-width: 700px;
            }
            #left {
              width: 330px;
              padding: 0 10px;
              padding-top: 10px;
              float: left;
            }
            .tableData {
              width: 100%;
            }
          </style>
        </head>
        <body>
          <div id="content">
            <div class="column" id="left">
              <div>
                Some text goes in here<br/>
                some more text<br/>
                some more text<br/>
                some more text<br/>
                some more text<br/>
                some more text<br/>
              </div>
            </div>
            <div class="column" id="center">
              Some text at the top;
              <hr/>
              <table class="tableData">
                <thead>
                  <tr><th>A</th><th>B</th><th>C</th></tr>
                </thead>
                <tbody>
                  <tr>
                    <td>A1 A1 A1 A1</td>
                    <td>B1 B1 B1 B1</td>
                    <td>C1 C1 C1 C1 C</td>
                  </tr>
                  <tr>
                    <td>A2 A2 A2 A2 </td>
                    <td>B2 B2 B2 B2 </td>
                    <td>C2 C2 C2 C2</td>
                  </tr>
                  <tr>
                    <td>A3 A3 A3 A3 A3 </td>
                    <td>B3 B3 B3 B3 B3 </td>
                    <td>C3 C3 C3 C3 C3</td>
                  </tr>
                  <tr>
                    <td>A4 A4 A4 A4 A4</td>
                    <td>B4 B4 B4 B4 B4</td>
                    <td>C4 C4 C4 C4 C4</td>
                  </tr>
                </tbody>
              </table>
            </div>
          </div>
        </body>
        </html>
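    One hedged suggestion, not part of the original post: the non-floated #center column still flows around the floated #left, so its 100%-wide table resolves against the full page width and gets pushed below the float. Making #center establish its own block formatting context, for example with overflow: hidden, keeps it (and its full-width table) in the remaining space beside the nav bar:

        /* A sketch of one possible fix: let #center form its own block
           formatting context so it no longer wraps around the floated #left.
           The table's width:100% then resolves against #center's width,
           not the full page width. */
        #center {
            overflow: hidden;    /* creates the block formatting context */
            padding-top: 15px;   /* unchanged from the original */
            /* width:100% removed: the box then fills the remaining
               horizontal space beside the float by default */
        }

    The trade-off is that overflow: hidden clips anything deliberately overflowing the column, which may interact with the padding-bottom/margin-bottom equal-height trick used on .column.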


  • Why does DEP kill IE when accessing Microsoft FTP?

    - by Sammy
    I start up IE (9.0.8112.16421) with about:blank and I go to ftp://ftp.microsoft.com/. I press Alt, click View and then Open FTP Site in Windows Explorer. At this point IE stops responding and eventually crashes (though the window is still active, sometimes) and I get the usual Windows dialog box saying that the program has stopped working. From this dialog box I click the option to find a solution to the problem, and the progress bar just keeps scrolling without ever giving me a result page, so I have to abort by clicking Cancel. Then I get the balloon pop-up from the system tray saying that DEP has stopped the program from executing. What gives? Why would DEP (part of Microsoft Windows) prevent IE (a Microsoft product) from performing a perfectly legitimate action on Microsoft's own FTP site? The OS is Windows Vista HP SP2, Swedish locale. Screenshots as follows...

    Update: I normally have UAC disabled, but I have discovered that enabling it has an effect on IE when I click the FTP option from the View menu, just as I suspected. I basically tried starting IE in its 32-bit and 64-bit versions, with and without add-ons, and switching UAC on and off, and then going to View and the FTP option (as shown above). Here are the results.

    With UAC off and DEP on:
    - IE 32-bit, normal start, go to ftp://ftp.microsoft.com/, View menu, FTP option: crash
    - IE 32-bit, extoff, go to ftp://ftp.microsoft.com/, View menu, FTP option: crash
    - IE 64-bit, normal start, go to ftp://ftp.microsoft.com/, View menu, FTP option: information & warning message
    - IE 64-bit, extoff, go to ftp://ftp.microsoft.com/, View menu, FTP option: information & warning message

    This is the information and warning message I get if I use IE 64-bit: the first message is an FTP proxy warning. It says that the folder ftp://ftp.microsoft.com/ will be write-protected because the proxy server is not configured to allow full access. It goes on to say that if I want to move, paste, rename or delete files I must use another type of proxy, and that I should contact the system administrator for more information (the usual recommendation when they have no clue what's going on). What the heck is all this about? I don't even use a proxy server, as you can see from the next screenshot (Internet Options, Connections, LAN settings dialog). The second message only states that the FTP site cannot be viewed in (Windows) Explorer. With UAC off, I always get these two messages when running the 64-bit version of IE.

    With UAC on and DEP on:
    - IE 32-bit, normal start, go to ftp://ftp.microsoft.com/, View menu, FTP option: crash
    - IE 32-bit, extoff, go to ftp://ftp.microsoft.com/, View menu, FTP option: security warning message, prompts to allow the action
    - IE 64-bit, normal start, go to ftp://ftp.microsoft.com/, View menu, FTP option: security warning message, prompts to allow the action
    - IE 64-bit, extoff, go to ftp://ftp.microsoft.com/, View menu, FTP option: security warning message, prompts to allow the action

    As you can see from this list, if I have UAC enabled I actually get rid of these messages, and opening the FTP site in Windows Explorer (from IE) actually works (except in the 32-bit version, which still crashes). Here is the security warning message.

    The fact that the 32-bit IE still crashes could be an indicator that this has something to do with one or several add-ons in that version of IE: the 32-bit IE doesn't crash if it's started with the extoff flag. If this affects only the 32-bit IE, then it's only natural that the 64-bit IE doesn't have this problem, because it would not be using any of the add-ons used by the 32-bit version; they are not compatible with 64-bit (although some add-ons work with both 32-bit and 64-bit IE). Figuring out which add-on (if any) is causing this problem is a whole new question... but I seem to be closer to an answer now, and a possible solution. I could of course just add IE (32-bit) to DEP's exclusion list. In fact, I have already tested this, and it lets IE perform the task without hiccups. But I don't really want to disable DEP, or force it on all Windows programs and services except the ones I strictly specify in the exception list. In other words, DEP can't really be completely disabled; you can only switch between its two modes of operation (a sketch of those two modes follows at the end of this post).

    Update 2: This is interesting... I start 32-bit IE, go to ftp://ftp.microsoft.com/ and click View, then Open FTP Site in Windows Explorer. The result is a crash!! Then I start 32-bit IE with the extoff flag to disable add-ons, go to ftp://ftp.microsoft.com/ and click View, then Open FTP Site in Windows Explorer. I get the security warning, as expected with UAC enabled, and it opens in Windows Explorer. Now... I close Windows Explorer, and I close IE. I then start 32-bit IE (normal start, with add-ons), go to ftp://ftp.microsoft.com/ and click View, then Open FTP Site in Windows Explorer. This time it doesn't crash! Instead, I get screenshot number 5 as seen above, the FTP proxy warning message. Now get this... if I click the close button to get rid of this message, Firefox starts up and goes to ftp://ftp.microsoft.com/. The reason this works with 32-bit IE (with add-ons) the second time around is that I am still logged in to the FTP server as anonymous; the login has not timed out yet. The standard login timeout for FTP servers is usually 60 to 120 seconds. I got logged in the first time, when I ran 32-bit IE with the extoff flag (no add-ons), which actually works and connects using Windows Explorer.

    Update 3: The connection to the FTP server has timed out by now, so if I run 32-bit IE (with add-ons) and repeat the steps as before, it crashes, just as expected...

    In conclusion:
    - If I have already been connected to the FTP server via Windows Explorer, and I go to this FTP address in 32-bit IE and pick the FTP option from the View menu to open it in Windows Explorer, I get an FTP proxy server warning and the address then opens in the default web browser (Firefox in my case).
    - If I have not previously been connected to the FTP server via Windows Explorer, and I do the same thing, IE crashes!

    This is just great... It's not that I care much about using Internet Explorer or Windows Explorer to log in to FTP servers; this just shows why IE is not the best browser choice. It reminds me of the time when Microsoft enforced Internet Explorer as the default browser for opening web links and other web resources, regardless of the fact that the user had installed an alternative browser on the system. Even if the user explicitly set the default browser to something other than Internet Explorer in the Windows options, IE would still pop up sometimes, depending on which web resources the user was trying to access. Setting the default browser had no effect. It was hard-coded that IE was the browser of choice, especially when accessing Microsoft product or help pages. The web page would actually say that you were not using IE and that you had to open it in IE to view it. Unfortunately you could not open it manually in a different browser by simply copying and pasting the URL from the address bar, because it would show a different URL, and the original URL would redirect to the "you are using the wrong browser" page, so you would not have time to cut it to the clipboard. Thankfully those days are over. Nowadays Microsoft is forced to distribute IE-free and WMP-free versions of Windows for the EU market. The way it should be! These programs have to be optional, not mandatory.
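    For reference, a hedged sketch of inspecting and switching those two DEP modes from an elevated command prompt (standard bcdedit usage on Vista and 7; the per-program exclusion list itself is maintained under System Properties > Performance Options > Data Execution Prevention):

        rem Show the current DEP policy (look for the "nx" line in the output).
        bcdedit /enum {current}

        rem "OptIn"  = DEP only for essential Windows programs and services.
        rem "OptOut" = DEP for everything except programs on the exclusion list.
        bcdedit /set {current} nx OptIn
        bcdedit /set {current} nx OptOut

        rem A reboot is required for the change to take effect.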


  • Why does Mac OS X Software Update not work when the machine uses Active Directory?

    - by Lyndsey Ferguson
    My company's IT department is mostly a Windows operation, and in order to become more secure they are altering the way the Macintosh computers log in to our internal network so that they use Active Directory like their Windows counterparts. I have been given administrative permission on my Mac and I am able to do most of what I used to be able to do in terms of authorizing software installations. However, there is a problem: the Software Update feature doesn't work. When I try to get the Mac to perform its software updates from the Apple menu, the normal window appears listing what has to be updated; I am able to select what to update and click the Update button, but then nothing happens. It doesn't ask for authentication like it used to, and the computer doesn't perform any download or installation (it does sometimes ask me to agree to license agreements for iTunes). I can download the updates individually and install them without any issues, but the auto-update fails. I'd rather use the Software Update menu item like I used to: it is much more convenient. Any suggestions on how I can fix this?

    EDIT Nov 19th, 2009, 10:09 EST: I have posted this question to the Apple Mac OS X Snow Leopard support forum.

    EDIT Nov 19th, 2009, 12:39 EST: Yes, the Terminal command "sudo softwareupdate --install --all" does work flawlessly. I want to avoid that, as my co-workers are generally not comfortable on the Mac. I also tried Chealion's suggestion to delete "~/Library/Preferences/com.apple.SoftwareUpdate.plist" and "/Library/Preferences/com.apple.SoftwareUpdate.plist"; Software Update still fails. However, I did get diagnostic messages in the Console (below). I've deleted the MS Office package receipts and examined suhelperd (the Software Update helper daemon?); it appears that suhelperd is crashing, and that explains why it doesn't work. I've submitted a bug report to Apple (radar://7408619). Here are the Console diagnostic messages:

        11/19/09 12:36:44 PM com.apple.suhelperd[66829] terminate called after throwing an instance of 'NSException'
        11/19/09 12:36:47 PM com.apple.launchd[1] (com.apple.suhelperd[66829]) Job appears to have crashed: Abort trap
        11/19/09 12:36:48 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:36:48.275 ReportCrash[66830:2703] Saved crash report for suhelperd[66829] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123648_localhost.crash
        11/19/09 12:36:54 PM com.apple.launchd[1] (com.apple.suhelperd) Throttling respawn: Will start in 1 seconds
        11/19/09 12:36:55 PM com.apple.suhelperd[66836] terminate called after throwing an instance of 'NSException'
        11/19/09 12:36:55 PM com.apple.launchd[1] (com.apple.suhelperd[66836]) Job appears to have crashed: Abort trap
        11/19/09 12:36:56 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:36:56.017 ReportCrash[66830:2f03] Saved crash report for suhelperd[66836] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123655_localhost.crash
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_automator.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_automator_workflow.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_autoupdate.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_clipart.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_core.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_dock.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_entourage.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_entourage_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_equationeditor.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_errorreporting.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_excel.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_excel_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_fonts.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_graph.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_helpviewer.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_launch.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_ooxml.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_orgchart.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_powerpoint.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_powerpoint_help_std.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_brazilian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_danish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_dutch.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_english.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_finnish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_french.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_german.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_italian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_japanese.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_norwegian.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_portuguese.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_spanish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_proofing_swedish.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_required.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_silverlight.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_sounds.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_word.pkg
        11/19/09 12:36:58 PM Software Update[66826] PackageKit: *** Missing bundle identifier: /Library/Receipts/Office2008_en_word_help_std.pkg
        11/19/09 12:37:26 PM com.apple.suhelperd[66839] terminate called after throwing an instance of 'NSException'
        11/19/09 12:37:26 PM com.apple.launchd[1] (com.apple.suhelperd[66839]) Job appears to have crashed: Abort trap
        11/19/09 12:37:26 PM com.apple.ReportCrash.Root[66830] 2009-11-19 12:37:26.929 ReportCrash[66830:2b07] Saved crash report for suhelperd[66839] version ??? (???) to /Library/Logs/DiagnosticReports/suhelperd_2009-11-19-123726_localhost.crash

    And here is the suhelperd crash report:

        Process:         suhelperd [66839]
        Path:            /System/Library/PrivateFrameworks/SoftwareUpdate.framework/Versions/A/Resources/suhelperd
        Identifier:      suhelperd
        Version:         ??? (???)
        Code Type:       X86-64 (Native)
        Parent Process:  launchd [1]
        Date/Time:       2009-11-19 12:37:26.473 -0500
        OS Version:      Mac OS X 10.6.2 (10C540)
        Report Version:  6

        Exception Type:  EXC_CRASH (SIGABRT)
        Exception Codes: 0x0000000000000000, 0x0000000000000000
        Crashed Thread:  0  Dispatch queue: com.apple.main-thread

        Application Specific Information:
        abort() called
        *** Terminating app due to uncaught exception 'NSRangeException', reason: '*** -[NSCFArray objectAtIndex:]: index (0) beyond bounds (0)'
        *** Call stack at first throw:
        (
            0   CoreFoundation      0x00007fff859a9444 __exceptionPreprocess + 180
            1   libobjc.A.dylib     0x00007fff8787e0f3 objc_exception_throw + 45
            2   CoreFoundation      0x00007fff859a9267 +[NSException raise:format:arguments:] + 103
            3   CoreFoundation      0x00007fff859a91f4 +[NSException raise:format:] + 148
            4   Foundation          0x00007fff855da080 _NSArrayRaiseBoundException + 122
            5   Foundation          0x00007fff8553cb81 -[NSCFArray objectAtIndex:] + 75
            6   Admin               0x00007fff8107920e +[User(UserPrivate) _userWithInfo:attributes:] + 71
            7   Admin               0x00007fff81080d6b +[User findUserByID:searchParent:] + 404
            8   suhelperd           0x0000000100001274 0x0 + 4294972020
            9   suhelperd           0x0000000100002240 0x0 + 4294976064
            10  suhelperd           0x00000001000053b1 0x0 + 4294988721
            11  suhelperd           0x00000001000044b3 0x0 + 4294984883
            12  suhelperd           0x0000000100004154 0x0 + 4294984020
            13  libSystem.B.dylib   0x00007fff83eb60d8 mach_msg_server + 357
            14  suhelperd           0x00000001000036eb 0x0 + 4294981355
            15  suhelperd           0x0000000100002a1f 0x0 + 4294978079
            16  suhelperd           0x0000000100001080 0x0 + 4294971520
        )

        Thread 0 Crashed:  Dispatch queue: com.apple.main-thread
        0   libSystem.B.dylib            0x00007fff83e86fe6 __kill + 10
        1   libSystem.B.dylib            0x00007fff83f27e32 abort + 83
        2   libstdc++.6.dylib            0x00007fff873cf5d2 __tcf_0 + 0
        3   libobjc.A.dylib              0x00007fff87881d29 _objc_terminate + 100
        4   libstdc++.6.dylib            0x00007fff873cdae1 __cxxabiv1::__terminate(void (*)()) + 11
        5   libstdc++.6.dylib            0x00007fff873cdb16 __cxxabiv1::__unexpected(void (*)()) + 0
        6   libstdc++.6.dylib            0x00007fff873cdbfc __gxx_exception_cleanup(_Unwind_Reason_Code, _Unwind_Exception*) + 0
        7   libobjc.A.dylib              0x00007fff8787e192 object_getIvar + 0
        8   com.apple.CoreFoundation     0x00007fff859a9267 +[NSException raise:format:arguments:] + 103
        9   com.apple.CoreFoundation     0x00007fff859a91f4 +[NSException raise:format:] + 148
        10  com.apple.Foundation         0x00007fff855da080 _NSArrayRaiseBoundException + 122
        11  com.apple.Foundation         0x00007fff8553cb81 -[NSCFArray objectAtIndex:] + 75
        12  com.apple.framework.Admin    0x00007fff8107920e +[User(UserPrivate) _userWithInfo:attributes:] + 71
        13  com.apple.framework.Admin    0x00007fff81080d6b +[User findUserByID:searchParent:] + 404
        14  suhelperd                    0x0000000100001274 0x100000000 + 4724
        15  suhelperd                    0x0000000100002240 0x100000000 + 8768
        16  suhelperd                    0x00000001000053b1 0x100000000 + 21425
        17  suhelperd                    0x00000001000044b3 0x100000000 + 17587
        18  suhelperd                    0x0000000100004154 0x100000000 + 16724
        19  libSystem.B.dylib            0x00007fff83eb60d8 mach_msg_server + 357
        20  suhelperd                    0x00000001000036eb 0x100000000 + 14059
        21  suhelperd                    0x0000000100002a1f 0x100000000 + 10783
        22  suhelperd                    0x0000000100001080 0x100000000 + 4224

        Thread 1:  Dispatch queue: com.apple.libdispatch-manager
        0   libSystem.B.dylib            0x00007fff83e51bba kevent + 10
        1   libSystem.B.dylib            0x00007fff83e53a85 _dispatch_mgr_invoke + 154
        2   libSystem.B.dylib            0x00007fff83e5375c _dispatch_queue_invoke + 185
        3   libSystem.B.dylib            0x00007fff83e53286 _dispatch_worker_thread2 + 244
        4   libSystem.B.dylib            0x00007fff83e52bb8 _pthread_wqthread + 353
        5   libSystem.B.dylib            0x00007fff83e52a55 start_wqthread + 13

        Thread 2:
        0   libSystem.B.dylib            0x00007fff83e529da __workq_kernreturn + 10
        1   libSystem.B.dylib            0x00007fff83e52dec _pthread_wqthread + 917
        2   libSystem.B.dylib            0x00007fff83e52a55 start_wqthread + 13

        Thread 0 crashed with X86 Thread State (64-bit):
          rax: 0x0000000000000000  rbx: 0x00007fff707d7298  rcx: 0x00007fff5fbff868  rdx: 0x0000000000000000
          rdi: 0x0000000000010517  rsi: 0x0000000000000006  rbp: 0x00007fff5fbff880  rsp: 0x00007fff5fbff868
           r8: 0x00007fff707da9e0   r9: 0x0000000000000063  r10: 0x00007fff83e83026  r11: 0x0000000000000202
          r12: 0x00007fff85a2dca1  r13: 0x0000000000000000  r14: 0x00007fff70bea228  r15: 0x00007fff5fbffb10
          rip: 0x00007fff83e86fe6  rfl: 0x0000000000000202  cr2: 0x00007fff70e3afd0
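    Since the Terminal command is confirmed to work but the co-workers are not comfortable with the command line, one hedged workaround (my own suggestion, not something from the original post or from Apple) is to wrap that same command in a double-clickable application saved from Script Editor:

        -- A minimal AppleScript wrapper around the command that is known to work.
        -- Saved as an application from Script Editor, this installs all pending
        -- updates and asks for an administrator password once.
        do shell script "/usr/sbin/softwareupdate --install --all" with administrator privileges
        display dialog "Software Update finished." buttons {"OK"} default button "OK"

    This sidesteps the crashing suhelperd GUI path entirely until Apple fixes the radar, at the cost of not showing the list of available updates first.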


  • Injection of banners in my web browsers, possibly malware

    - by Skadlig
    Recently I have started to suspect that I have some kind of virus on my computer. There are 3 symptoms:

    - Banners are displayed on pages that don't use advertising, for instance when viewing screenshots on Steam. The banner is only displayed after the rest of the page has loaded and seems to be injected into it.
    - The whole page is replaced with a commercial, with an option to skip the commercial.
    - The page is replaced with a search window claiming that the page could not be found.

    I have tried to scan my computer with AntiVir and Ad-Aware but only found a couple of tracking cookies. I have run HijackThis, but since this isn't really my area I haven't been able to discern what shouldn't be there, except the line about ZoneAlarm, since I have uninstalled it. Is there anyone out there who can see anything suspicious in the log file below, or who has suggestions for programs that might be better at finding the virus than AntiVir and Ad-Aware? Here is the whole (long) log:

        Logfile of Trend Micro HijackThis v2.0.2
        Scan saved at 21:44:07, on 2010-04-15
        Platform: Unknown Windows (WinNT 6.01.3504)
        MSIE: Internet Explorer v8.00 (8.00.7600.16385)
        Boot mode: Normal

        Running processes:
        C:\Windows\SysWOW64\HsMgr.exe
        C:\Program Files (x86)\Personal\bin\Personal.exe
        F:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin\ApacheMonitor.exe
        C:\Program Files (x86)\Avira\AntiVir Desktop\avgnt.exe
        F:\Program Files (x86)\PowerISO\PWRISOVM.EXE
        C:\Program Files (x86)\Java\jre6\bin\jusched.exe
        C:\Program Files\ASUS Xonar DX Audio\Customapp\ASUSAUDIOCENTER.EXE
        F:\Program Files (x86)\Voddler\service\VNetManager.exe
        C:\Program Files (x86)\Emotum\Mobile Broadband\Mobile.exe
        F:\Program Files\Logitech\SetPoint\x86\SetPoint32.exe
        C:\Program Files (x86)\Lavasoft\Ad-Aware\AAWTray.exe
        F:\Program Files (x86)\Mozilla Firefox\firefox.exe
        F:\Program Files (x86)\Trend Micro\HijackThis\HijackThis.exe

        R1 - HKCU\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896
        R0 - HKCU\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157
        R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Page_URL = http://go.microsoft.com/fwlink/?LinkId=69157
        R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Default_Search_URL = http://go.microsoft.com/fwlink/?LinkId=54896
        R1 - HKLM\Software\Microsoft\Internet Explorer\Main,Search Page = http://go.microsoft.com/fwlink/?LinkId=54896
        R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Start Page = http://go.microsoft.com/fwlink/?LinkId=69157
        R0 - HKLM\Software\Microsoft\Internet Explorer\Search,SearchAssistant =
        R0 - HKLM\Software\Microsoft\Internet Explorer\Search,CustomizeSearch =
        R0 - HKLM\Software\Microsoft\Internet Explorer\Main,Local Page = C:\Windows\SysWOW64\blank.htm
        R0 - HKCU\Software\Microsoft\Internet Explorer\Toolbar,LinksFolderName =
        F2 - REG:system.ini: UserInit=userinit.exe
        O2 - BHO: AcroIEHelperStub - {18DF081C-E8AD-4283-A596-FA578C2EBDC3} - C:\Program Files (x86)\Common Files\Adobe\Acrobat\ActiveX\AcroIEHelperShim.dll
        O2 - BHO: gwprimawega - {83bb5261-81ec-25ae-4adf-e88936738525} - C:\Windows\SysWow64\aZfJupUw.dll
        O2 - BHO: ZoneAlarm Toolbar Registrar - {8A4A36C2-0535-4D2C-BD3D-496CB7EED6E3} - C:\Program Files\CheckPoint\ZAForceField\WOW64\TrustChecker\bin\TrustCheckerIEPlugin.dll (file missing)
        O2 - BHO: Windows Live inloggningshjälpen - {9030D464-4C02-4ABF-8ECC-5164760863C6} - C:\Program Files (x86)\Common Files\Microsoft Shared\Windows Live\WindowsLiveLogin.dll
        O2 - BHO: Java(tm) Plug-In 2 SSV Helper - {DBC80044-A445-435b-BC74-9C25C1C588A9} - C:\Program Files (x86)\Java\jre6\bin\jp2ssv.dll
        O3 - Toolbar: ZoneAlarm Toolbar - {EE2AC4E5-B0B0-4EC6-88A9-BCA1A32AB107} - C:\Program Files\CheckPoint\ZAForceField\WOW64\TrustChecker\bin\TrustCheckerIEPlugin.dll (file missing)
        O4 - HKLM\..\Run: [avgnt] "C:\Program Files (x86)\Avira\AntiVir Desktop\avgnt.exe" /min
        O4 - HKLM\..\Run: [PWRISOVM.EXE] f:\Program Files (x86)\PowerISO\PWRISOVM.EXE
        O4 - HKLM\..\Run: [SunJavaUpdateSched] "C:\Program Files (x86)\Java\jre6\bin\jusched.exe"
        O4 - HKLM\..\Run: [QuickTime Task] "F:\Program Files (x86)\QuickTime\QTTask.exe" -atboottime
        O4 - HKLM\..\Run: [Adobe Reader Speed Launcher] "C:\Program Files (x86)\Adobe\Reader 9.0\Reader\Reader_sl.exe"
        O4 - HKLM\..\Run: [VoddlerNet Manager] f:\Program Files (x86)\Voddler\service\VNetManager.exe
        O4 - HKCU\..\Run: [Steam] "f:\program files (x86)\steam\steam.exe" -silent
        O4 - HKCU\..\Run: [msnmsgr] "C:\Program Files (x86)\Windows Live\Messenger\msnmsgr.exe" /background
        O4 - Global Startup: BankID Security Application.lnk = C:\Program Files (x86)\Personal\bin\Personal.exe
        O4 - Global Startup: Logitech SetPoint.lnk = ?
        O4 - Global Startup: Monitor Apache Servers.lnk = F:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin\ApacheMonitor.exe
        O13 - Gopher Prefix:
        O16 - DPF: {1E54D648-B804-468d-BC78-4AFFED8E262F} (System Requirements Lab) - http://www.nvidia.com/content/DriverDownload/srl/3.0.0.4/srl_bin/sysreqlab_nvd.cab
        O16 - DPF: {4871A87A-BFDD-4106-8153-FFDE2BAC2967} (DLM Control) - http://dlcdnet.asus.com/pub/ASUS/misc/dlm-activex-2.2.5.0.cab
        O16 - DPF: {D27CDB6E-AE6D-11CF-96B8-444553540000} (Shockwave Flash Object) - http://fpdownload2.macromedia.com/get/shockwave/cabs/flash/swflash.cab
        O17 - HKLM\System\CCS\Services\Tcpip\..\{5F7DB2E1-29C4-4299-A483-B68B19E9F015}: NameServer = 195.54.122.221 195.54.122.211
        O17 - HKLM\System\CS1\Services\Tcpip\..\{5F7DB2E1-29C4-4299-A483-B68B19E9F015}: NameServer = 195.54.122.221 195.54.122.211
        O17 - HKLM\System\CS2\Services\Tcpip\..\{5F7DB2E1-29C4-4299-A483-B68B19E9F015}: NameServer = 195.54.122.221 195.54.122.211
        O23 - Service: @%SystemRoot%\system32\Alg.exe,-112 (ALG) - Unknown owner - C:\Windows\System32\alg.exe (file missing)
        O23 - Service: Avira AntiVir Scheduler (AntiVirSchedulerService) - Avira GmbH - C:\Program Files (x86)\Avira\AntiVir Desktop\sched.exe
        O23 - Service: Avira AntiVir Guard (AntiVirService) - Avira GmbH - C:\Program Files (x86)\Avira\AntiVir Desktop\avguard.exe
        O23 - Service: Apache2.2 - Apache Software Foundation - F:\Program Files (x86)\Apache Software Foundation\Apache2.2\bin\httpd.exe
        O23 - Service: Dragon Age: Origins - Content Updater (DAUpdaterSvc) - BioWare - F:\Program Files (x86)\Dragon Age\bin_ship\DAUpdaterSvc.Service.exe
        O23 - Service: Device Error Recovery Service (dgdersvc) - Devguru Co., Ltd. - C:\Windows\system32\dgdersvc.exe
        O23 - Service: @%SystemRoot%\system32\efssvc.dll,-100 (EFS) - Unknown owner - C:\Windows\System32\lsass.exe (file missing)
        O23 - Service: @%systemroot%\system32\fxsresm.dll,-118 (Fax) - Unknown owner - C:\Windows\system32\fxssvc.exe (file missing)
        O23 - Service: @keyiso.dll,-100 (KeyIso) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
        O23 - Service: SAMSUNG KiesAllShare Service (KiesAllShare) - Unknown owner - F:\Program Files (x86)\Samsung\Kies\WiselinkPro\WiselinkPro.exe
        O23 - Service: Lavasoft Ad-Aware Service - Lavasoft - C:\Program Files (x86)\Lavasoft\Ad-Aware\AAWService.exe
        O23 - Service: Logitech Bluetooth Service (LBTServ) - Logitech, Inc. - C:\Program Files\Common Files\Logishrd\Bluetooth\LBTServ.exe
        O23 - Service: @comres.dll,-2797 (MSDTC) - Unknown owner - C:\Windows\System32\msdtc.exe (file missing)
        O23 - Service: MySQL - Unknown owner - F:\Program.exe (file missing)
        O23 - Service: @%SystemRoot%\System32\netlogon.dll,-102 (Netlogon) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
        O23 - Service: NVIDIA Display Driver Service (nvsvc) - Unknown owner - C:\Windows\system32\nvvsvc.exe (file missing)
        O23 - Service: Sony Ericsson OMSI download service (OMSI download service) - Unknown owner - f:\Program Files (x86)\Sony Ericsson\Sony Ericsson PC Suite\SupServ.exe
        O23 - Service: @%systemroot%\system32\psbase.dll,-300 (ProtectedStorage) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
        O23 - Service: @%systemroot%\system32\Locator.exe,-2 (RpcLocator) - Unknown owner - C:\Windows\system32\locator.exe (file missing)
        O23 - Service: @%SystemRoot%\system32\samsrv.dll,-1 (SamSs) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
        O23 - Service: ServiceLayer - Nokia. - C:\Program Files (x86)\PC Connectivity Solution\ServiceLayer.exe
        O23 - Service: @%SystemRoot%\system32\snmptrap.exe,-3 (SNMPTRAP) - Unknown owner - C:\Windows\System32\snmptrap.exe (file missing)
        O23 - Service: @%systemroot%\system32\spoolsv.exe,-1 (Spooler) - Unknown owner - C:\Windows\System32\spoolsv.exe (file missing)
        O23 - Service: @%SystemRoot%\system32\sppsvc.exe,-101 (sppsvc) - Unknown owner - C:\Windows\system32\sppsvc.exe (file missing)
        O23 - Service: Steam Client Service - Valve Corporation - C:\Program Files (x86)\Common Files\Steam\SteamService.exe
        O23 - Service: @%SystemRoot%\system32\ui0detect.exe,-101 (UI0Detect) - Unknown owner - C:\Windows\system32\UI0Detect.exe (file missing)
        O23 - Service: @%SystemRoot%\system32\vaultsvc.dll,-1003 (VaultSvc) - Unknown owner - C:\Windows\system32\lsass.exe (file missing)
        O23 - Service: @%SystemRoot%\system32\vds.exe,-100 (vds) - Unknown owner - C:\Windows\System32\vds.exe (file missing)
        O23 - Service: VoddlerNet - Voddler - f:\Program Files (x86)\Voddler\service\voddler.exe
        O23 - Service: @%systemroot%\system32\vssvc.exe,-102 (VSS) - Unknown owner - C:\Windows\system32\vssvc.exe (file missing)
        O23 - Service: @%systemroot%\system32\wbengine.exe,-104 (wbengine) - Unknown owner - C:\Windows\system32\wbengine.exe (file missing)
        O23 - Service: @%Systemroot%\system32\wbem\wmiapsrv.exe,-110 (wmiApSrv) - Unknown owner - C:\Windows\system32\wbem\WmiApSrv.exe (file missing)
        O23 - Service: @%PROGRAMFILES%\Windows Media Player\wmpnetwk.exe,-101 (WMPNetworkSvc) - Unknown owner - C:\Program Files (x86)\Windows Media Player\wmpnetwk.exe (file missing)

        --
        End of file - 8958 bytes


  • Windows 7 SP1 not being offered on Windows Update

    - by Ian Boyd
    I have no option to install Windows 7 Service Pack 1 (SP1) on my computer. Why is the option to install Windows 7 SP1 missing from Windows Update? I'm less interested in why the option is missing, and more interested in how to diagnose why the option to install Windows 7 SP1 is being hidden. Following the suggestions in KB2498452 (You do not have the option of downloading Windows 7 SP1 when you use Windows Update to check for updates):

    - Confirm that Windows 7 SP1 is not already installed and that you are not running a prerelease version of Windows 7 SP1. I am not already running SP1, or a pre-release SP1.
    - Check for pending updates. Update 976902 may have to be installed on your computer before Windows 7 SP1 will be offered in Windows Update. I already have 976902 installed.
    - Verify that an incompatible version of SafeCentral is not installed on your computer. Windows 7 SP1 may not appear in Windows Update if certain versions of SafeCentral are installed. SafeCentral is a security program manufactured by SafeCentral, Inc. I do not have SafeCentral installed (I've never heard of such a thing).
    - Check whether you have the Intel integrated graphics driver Igdkmd32.sys or Igdkmd64.sys and whether you upgraded the driver. I do not have an Intel GMA.
    - Make sure that you did not use vLite to customize your Windows 7 installation. I did not use vLite to customize my Windows 7 installation. Again, I've never heard of such a thing.

    Update One: Here's proof that I've checked for updates "today" (3/2/2011), and that I'm not being presented with the option of installing SP1 (I did install an update to Silverlight and a fix for IE9 being hosted in a Direct2D or Direct3D application, so updates themselves do work).

    Update Two: Tried the Windows Update Troubleshooter. Windows 7 Service Pack 1 is still not available.

    Update Three: Here is the tail end of WindowsUpdate.log. It speaks of evaluating application rules ("Found 2 updates and 65 categories in search; evaluated appl. rules of 1324 out of 1832 deployed entities"); these must be the rules that say I'm not allowed to see SP1:

        2011-03-03 09:21:08:091  924 db4 AU Triggering AU detection through DetectNow API
        2011-03-03 09:21:08:091  924 db4 AU Triggering Online detection (interactive)
        2011-03-03 09:21:08:091  924 950 AU #############
        2011-03-03 09:21:08:092  924 950 AU ## START ## AU: Search for updates
        2011-03-03 09:21:08:092  924 950 AU #########
        2011-03-03 09:21:08:093  924 950 AU <<## SUBMITTED ## AU: Search for updates [CallId = {8517376A-B8A3-488B-B4D4-67DFC75788C8}]
        2011-03-03 09:21:08:093  924 ca8 Agent *************
        2011-03-03 09:21:08:093  924 ca8 Agent ** START ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2011-03-03 09:21:08:093  924 ca8 Agent *********
        2011-03-03 09:21:08:093  924 ca8 Agent * Online = Yes; Ignore download priority = No
        2011-03-03 09:21:08:093  924 ca8 Agent * Criteria = "IsInstalled=0 and DeploymentAction='Installation' or IsPresent=1 and DeploymentAction='Uninstallation' or IsInstalled=1 and DeploymentAction='Installation' and RebootRequired=1 or IsInstalled=0 and DeploymentAction='Uninstallation' and RebootRequired=1"
        2011-03-03 09:21:08:093  924 ca8 Agent * ServiceID = {7971F918-A847-4430-9279-4A52D1EFE18D} Third party service
        2011-03-03 09:21:08:093  924 ca8 Agent * Search Scope = {Machine}
        2011-03-03 09:21:08:094  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab:
        2011-03-03 09:21:08:097  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:287  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab:
        2011-03-03 09:21:08:289  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:292  924 ca8 Agent Checking for updated auth cab for service 7971f918-a847-4430-9279-4a52d1efe18d at http://download.windowsupdate.com/v9/microsoftupdate/redir/muauth.cab
        2011-03-03 09:21:08:292  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\AuthCabs\authcab.cab:
        2011-03-03 09:21:08:294  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:354  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\AuthCabs\authcab.cab:
        2011-03-03 09:21:08:356  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:356  924 ca8 Setup Checking for agent SelfUpdate
        2011-03-03 09:21:08:356  924 ca8 Setup Client version: Core: 7.3.7600.16385  Aux: 7.3.7600.16385
        2011-03-03 09:21:08:357  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab:
        2011-03-03 09:21:08:359  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:418  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\9482F4B4-E343-43B6-B170-9A65BC822C77\muv4wuredir.cab:
        2011-03-03 09:21:08:420  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:422  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\SelfUpdate\wuident.cab:
        2011-03-03 09:21:08:424  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:655  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\SelfUpdate\wuident.cab:
        2011-03-03 09:21:08:658  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:659  924 ca8 Setup Skipping SelfUpdate check based on the /SKIP directive in wuident
        2011-03-03 09:21:08:659  924 ca8 Setup SelfUpdate check completed. SelfUpdate is NOT required.
        2011-03-03 09:21:08:808  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\7971F918-A847-4430-9279-4A52D1EFE18D\muv4muredir.cab:
        2011-03-03 09:21:08:810  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:872  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\7971F918-A847-4430-9279-4A52D1EFE18D\muv4muredir.cab:
        2011-03-03 09:21:08:874  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:08:876  924 ca8 PT +++++++++++  PT: Synchronizing server updates  +++++++++++
        2011-03-03 09:21:08:877  924 ca8 PT + ServiceId = {7971F918-A847-4430-9279-4A52D1EFE18D}, Server URL = https://www.update.microsoft.com/v6/ClientWebService/client.asmx
        2011-03-03 09:21:13:958  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\7971F918-A847-4430-9279-4A52D1EFE18D\muv4muredir.cab:
        2011-03-03 09:21:13:960  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:14:083  924 ca8 Misc Validating signature for C:\Windows\SoftwareDistribution\WuRedir\7971F918-A847-4430-9279-4A52D1EFE18D\muv4muredir.cab:
        2011-03-03 09:21:14:085  924 ca8 Misc Microsoft signed: Yes
        2011-03-03 09:21:14:087  924 ca8 PT +++++++++++  PT: Synchronizing extended update info  +++++++++++
        2011-03-03 09:21:14:087  924 ca8 PT + ServiceId = {7971F918-A847-4430-9279-4A52D1EFE18D}, Server URL = https://www.update.microsoft.com/v6/ClientWebService/client.asmx
        2011-03-03 09:21:14:395  924 ca8 Agent * Added update {414642E2-5F20-4AD1-AA5A-773061238B5F}.101 to search result
        2011-03-03 09:21:14:395  924 ca8 Agent * Added update {56D5FC3D-9AC8-44F1-A248-8C397A24D02F}.100 to search result
        2011-03-03 09:21:14:395  924 ca8 Agent * Found 2 updates and 65 categories in search; evaluated appl. rules of 1324 out of 1832 deployed entities
        2011-03-03 09:21:14:396  924 ca8 Agent *********
        2011-03-03 09:21:14:396  924 ca8 Agent ** END ** Agent: Finding updates [CallerId = AutomaticUpdates]
        2011-03-03 09:21:14:396  924 ca8 Agent *************
        2011-03-03 09:21:14:404  924 ce0 AU >>## RESUMED ## AU: Search for updates [CallId = {8517376A-B8A3-488B-B4D4-67DFC75788C8}]
        2011-03-03 09:21:14:404  924 ce0 AU # 2 updates detected
        2011-03-03 09:21:14:404  924 ce0 AU #########
        2011-03-03 09:21:14:404  924 ce0 AU ## END ## AU: Search for updates [CallId = {8517376A-B8A3-488B-B4D4-67DFC75788C8}]
        2011-03-03 09:21:14:404  924 ce0 AU #############
        2011-03-03 09:21:14:404  924 ce0 AU Successfully wrote event for AU health state:0
        2011-03-03 09:21:14:405  924 ce0 AU #############
        2011-03-03 09:21:14:405  924 ce0 AU ## START ## AU: Refresh featured updates info
        2011-03-03 09:21:14:405  924 ce0 AU #########
        2011-03-03 09:21:14:405  924 ce0 AU No featured updates available.
        2011-03-03 09:21:14:405  924 ce0 AU #########
        2011-03-03 09:21:14:405  924 ce0 AU ## END ## AU: Refresh featured updates info
        2011-03-03 09:21:14:405  924 ce0 AU #############
        2011-03-03 09:21:14:405  924 ce0 AU No featured updates notifications to show
        2011-03-03 09:21:14:405  924 ce0 AU AU setting next detection timeout to 2011-03-04 08:03:53
        2011-03-03 09:21:14:405  924 ce0 AU Setting AU scheduled install time to 2011-03-04 08:00:00
        2011-03-03 09:21:14:405  924 ce0 AU Successfully wrote event for AU health state:0
        2011-03-03 09:21:14:406  924 ce0 AU Successfully wrote event for AU health state:0
        2011-03-03 09:21:14:407  924 db4 AU Getting featured update notifications. fIncludeDismissed = true
        2011-03-03 09:21:14:408  924 db4 AU No featured updates available.
        2011-03-03 09:21:19:396  924 ca8 Report REPORT EVENT: {633538B3-030E-4CAD-BE6B-33C6ED65AFF1} 2011-03-03 09:21:14:395-0500 1 147 101 {00000000-0000-0000-0000-000000000000} 0 0 AutomaticUpdates Success Software Synchronization Windows Update Client successfully detected 2 updates.
        2011-03-03 09:21:19:396  924 ca8 Report CWERReporter finishing event handling. (00000000)

    Again: I'm less interested in why the option to install Windows 7 SP1 is missing, and more interested in how to diagnose why it is being hidden. The KB article says that SP1 will not be offered if your machine doesn't meet some secret special criteria. How can I discover what that secret criteria is? I presume it is logged somewhere. Nor am I particularly interested in a direct download link. I want to learn here. I want to be able to diagnose (i.e. in the future) why an update is not being offered. I'm a superuser here. Rather than others coming up with a checklist of things to try, I want to be able to come up with the checklist.
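    One possible way to get closer to those applicability decisions is the extended-tracing switch for the Windows Update Agent documented in KB902093. This is my assumption about what will help here, not something from the question or the KB2498452 article; after importing the keys below and restarting the Windows Update service, WindowsUpdate.log should become considerably more verbose, which may expose which rules each update passed or failed:

        Windows Registry Editor Version 5.00

        ; Hedged sketch: extended Windows Update Agent tracing (per KB902093).
        ; Remove the Trace key again afterwards, or the log grows quickly.

        [HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\WindowsUpdate\Trace]
        "Flags"=dword:00000007
        "Level"=dword:00000004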


  • My computer freezes irregularly

    - by Manhim
My computer started freezing at irregular times about 3 weeks ago. Please note that this question changes with each thing I try.

What happens: My computer freezes and the video stops (no graphical glitches, it just stops). Sound keeps playing for a while (usually 10-30 seconds), then stops too. Sometimes the screen on my G-15 keyboard flickers and I see characters in the wrong places, usually for about 1-2 seconds shortly before the computer freezes. I have to hold the power button for 4 seconds to shut the computer down. I can still hear my hard drives and fans working. Some days it works with no problems at all; other days it keeps freezing every time I restart the computer and I have to leave it for the rest of the day. Sometimes my mouse freezes for a fraction of a second (0.01 to 0.2 seconds), quite randomly, usually just before a full freeze. No errors are reported by the Action Center, unlike when I had driver errors with my last video card on this system. My G-15 LCD screen also freezes, and sometimes it flickers and characters get carried around temporarily under heavy load. Also, most of the time the BIOS hard disk boot order now gets reversed for some reason and I have to set it back and save on every boot (might be unrelated, not sure, but it first started yesterday). Sometimes the BIOS doesn't detect my 750GB hard drive plugged into SATA1.

What I did so far: I have had similar problems in the past and had changed a faulty hard drive, so I tested my software RAID-0 array, found it faulty, and replaced it (I reinstalled Windows 7 with this part). I also tested with my secondary hard drive unplugged. My CPU was running at about 100 degrees Celsius; I removed the dust between the fans and the heatsink and it is now between 45-55. I ran a CPU stress test (Prime95 on all cores) and it didn't freeze. I ran a memory test (memtest86+) for a single pass with no errors. I ran GPU stress tests with ATITool and FurMark and it didn't freeze (no artifacts either). I had trouble with my graphics card when I got it, but I think a driver update fixed that. I checked the voltages in the BIOS setup and they all seemed OK (within about ±0.2). The computer ran without problems under Fedora 15 from an external hard drive (apart from it not loading Gnome 3 and reverting to Gnome 2; I didn't install drivers since I use that drive on multiple computers). I used it to back up my files from the RAID array to my 1TB hard drive for the reinstallation of Windows, so the crashes only happened on Windows. (The external hard drive is plugged directly into a SATA port.) I contacted EVGA (my graphics card vendor) and pointed them at this question; I'm waiting for an answer. I ran sensors on Fedora 15 and got this output: http://pastebin.com/0BHJnAvu. I ran 6 different short CPU stress tests on Fedora 15 (I haven't found a complete stress tester for Linux) and it didn't crash. I changed the thermal paste to Arctic Silver 5 on my CPU and stress-tested it: the temperature was 50 idle, 64 at its highest, and slowly went down to 62 during the test. Stress testing with a temporary graphics card went OK; running the FurMark stress test with my original graphics card froze the machine again (GPU temp 74C, CPU temp 58C, motherboard temp 40C or 45C; I don't know which one it is in SpeedFan). I ran a FurMark stress test and a CPU stress test at the same time; results: http://pastebin.com/2t6PLpdJ. I have now been using my computer without stressing it for about 2 hours and no crashes yet. I also disabled the AMD Cool'n'Quiet function in the BIOS for a more regular power supply to the CPU. When I ran FurMark without Cool'n'Quiet my computer didn't freeze, but I got a "Driver Kernel Error" that recovered (and FurMark crashed), all while running a CPU stress test. The computer eventually froze without me at it, but that time my screen just went to sleep and I couldn't wake it. Using the stability tester in nTune my computer froze again (in the same manner as before). I noticed that SpeedFan gives me a -12V reading of -16.97V and a -5V reading of -8.78V; I wonder if these numbers are reliable and whether they are good or bad. I swapped my G-15 for a basic HP USB keyboard and ran FurMark for about 10 minutes with a CPU stability test running for 30 seconds every 60 seconds, and the computer didn't crash. Some more extended tests without the G-15 froze as usual. I removed the nForce hard disk controller. I disabled command queuing in the NVIDIA nForce SATA Controller for both port 0 and port 1 (because of errors in the logs). I used CPUID HWMonitor; here are the voltages: http://pastebin.com/dfM7p4jV. I changed some settings in the motherboard BIOS: disabled PEG Link Mode, changed AI Tuning to Standard, disabled the 1394 controller, disabled HD Audio, disabled the JMicron RAID controller and disabled SATA RAID.

When it happens: When I play video games (most often). When I play Flash games (second most often). When I'm looking at my desktop background (it rarely happens when I have a window open, but it does, sometimes). When my graphics card and my CPU are stressed. Sometimes when only my graphics card is stressed. It has never happened while stress-testing only the CPU, though sometimes it happens when my CPU is under load.

Specs: Windows 7 x64 Home Premium. Motherboard: M2N-SLI Deluxe. CPU: AMD Phenom 9950 x2 @ 2.6GHz. Memory: Kingston 4x2GB dual channel (fairly basic memory sticks). Hard drives: was 2x250GB (Western Digital Caviar) in RAID-0 + 1TB (WD Caviar Black); I replaced the RAID array with a 750GB (WD Caviar Black) (yes, I removed the array from the RAID configuration). 750W power supply. No overclocking, ever. There were some power-downs 4-5 weeks ago, but the problem didn't start immediately after (I wasn't home, so my computer was shut down). Event logs (warnings, errors and critical errors) for the last 24 hours: http://pastebin.com/Bvvk31T7

My current to-try list: Reinstall the drivers and software one by one and do extensive stress testing between each. Update the BIOS firmware to the most recent stable version. Change my motherboard.

Status updates (keeping only the last 3): (28/06 04pm) More stress testing, and it still passes the tests. (28/06 03pm) Have been stress testing for 10 minutes straight now, 5 of them with both CPU and GPU stressed at the same time. (28/06 03pm) Stress testing right now; so far no problems. A little hope: tests with FurMark and Prime95. Testing bare-bones Windows: 30 minutes of stress, no freeze. Installing an anti-virus and some software, restarting the computer. Testing with the anti-virus and some software (no drivers installed): 30 minutes of stress, no freeze. Installing audio drivers, restarting. Testing with the audio drivers: 30 minutes of stress, no freeze. Installing the latest graphics drivers from EVGA's website (without 3D Vision, since I don't use it), restarting. Testing with the graphics drivers: 30 minutes of stress, no freeze. Configuring Windows to my liking and installing more software.

In this situation, how can I successfully pinpoint the hardware problem (if it is a hardware problem)? I don't really have the budget to just forget about it and replace everything, and I don't really have spare hardware to test-replace the current parts.
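For anyone reproducing the Linux-side tests above, here is a minimal sketch of a combined load-and-log run. It assumes the stress and lm_sensors packages are installed (stress is not named in the question, so that tool choice is an assumption), and the 600-second duration is arbitrary:

# Load all four cores for 10 minutes while logging temperatures once per
# second, so a freeze can be matched against the last recorded reading.
stress --cpu 4 --timeout 600 &
while kill -0 $! 2>/dev/null; do sensors >> /tmp/temps.log; sleep 1; done
tail -n 40 /tmp/temps.log    # the last readings before a crash, if one occurred

Running a GPU load (e.g. FurMark on Windows) alongside this is what separates the failing case from the passing CPU-only case described above.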

    Read the article

  • OS X won't see Windows 7 in network (and vice versa)

    - by meds
I've enabled SMB sharing in OS X Lion and added folders to share. It says 'Windows Sharing: On' with a green circle next to it (in the Sharing window), and that to access the volume I will need to go to \\192.168.0.17. It also says that the OS X machine should be visible as 'macbook' on the network. Both my Windows 7 and OS X machines are connected to the same network, yet when I try to go to \\192.168.0.17, or from the Mac try to go to my Windows system (smb://192.168.0.6), the two OSs don't see each other. Any ideas why?

Attempting to ping the Mac from Windows results in this output in the command prompt:

Pinging 192.168.0.17 with 32 bytes of data: Reply from 192.168.0.6: Destination host unreachable. Request timed out. Request timed out. Request timed out. Ping statistics for 192.168.0.17: Packets: Sent = 4, Received = 1, Lost = 3 (75% loss),

ipconfig in Windows is:

Wireless LAN adapter Wireless Network Connection: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::8918:efd1:b05c:890f%21 IPv4 Address. . . . . . . . . . . : 192.168.0.6 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : 192.168.0.1
Ethernet adapter Bluetooth Network Connection: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Ethernet adapter VMware Network Adapter VMnet1: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::98ab:63fc:3c07:d837%13 IPv4 Address. . . . . . . . . . . : 192.168.74.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . :
Ethernet adapter VMware Network Adapter VMnet8: Connection-specific DNS Suffix . : Link-local IPv6 Address . . . . . : fe80::80ff:c575:7b50:3a10%14 IPv4 Address. . . . . . . . . . . : 192.168.21.1 Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . :
Tunnel adapter isatap.{2E97D0AE-9E18-4072-AC23-1979BA0DCB79}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Tunnel adapter isatap.{E260CE43-E9A7-4DE0-A88E-4EAFF68ACDDB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Tunnel adapter isatap.{A5130812-59CE-4DDF-9C35-9433BCED9831}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Tunnel adapter Teredo Tunneling Pseudo-Interface: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Tunnel adapter isatap.{134BCAE7-CFFF-4A98-8DA0-3708806AABEB}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :
Tunnel adapter isatap.{8D9E3B8F-161C-4ACE-B211-3EDD694416B2}: Media State . . . . . . . . . . . : Media disconnected Connection-specific DNS Suffix . :

In OS X:

lo0: flags=8049<UP,LOOPBACK,RUNNING,MULTICAST> mtu 16384 options=3<RXCSUM,TXCSUM> inet6 fe80::1%lo0 prefixlen 64 scopeid 0x1 inet 127.0.0.1 netmask 0xff000000 inet6 ::1 prefixlen 128
gif0: flags=8010<POINTOPOINT,MULTICAST> mtu 1280
stf0: flags=0<> mtu 1280
en0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 options=2b<RXCSUM,TXCSUM,VLAN_HWTAGGING,TSO4> ether c8:2a:14:01:24:c1 media: autoselect (none) status: inactive
en1: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 1500 ether e0:f8:47:0c:fe:04 inet6 fe80::e2f8:47ff:fe0c:fe04%en1 prefixlen 64 scopeid 0x5 inet 192.168.0.17 netmask 0xffffff00 broadcast 192.168.0.255 media: autoselect status: active
p2p0: flags=8843<UP,BROADCAST,RUNNING,SIMPLEX,MULTICAST> mtu 2304 ether 02:f8:47:0c:fe:04 media: autoselect status: inactive
fw0: flags=8863<UP,BROADCAST,SMART,RUNNING,SIMPLEX,MULTICAST> mtu 4078 lladdr 70:cd:60:ff:fe:d8:f1:32 media: autoselect <full-duplex> status: inactive
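Since the ping already fails, the problem sits below SMB. A quick way to narrow it down from the Mac side (a sketch using the stock BSD tools that ship with OS X; adjust the addresses if the leases change):

ping -c 4 192.168.0.6     # basic IP reachability toward the Windows box
arp -a                    # after the ping: is there even an ARP entry for 192.168.0.6?
nc -z -v 192.168.0.6 445  # does anything answer on the SMB port?

If ping and arp fail in both directions, look at the Windows firewall profile (Home vs. Public) and any wireless client-isolation option on the router before touching SMB settings. Note that the 'Reply from 192.168.0.6: Destination host unreachable' line above is the Windows machine answering for itself, which usually points at a failure at the IP/ARP layer rather than at sharing.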

    Read the article

  • After segment lost TCP connection never recovers

    - by mvladic
    Take a look at following trace taken with Wireshark: http://dl.dropbox.com/u/145579/trace1.pcap or http://dl.dropbox.com/u/145579/trace2.pcap I will repeat here an interesting part (from trace1.pcap): No. Time Source Destination Protocol Length Info 1850 2012-02-09 13:44:32.609 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=581704 Win=65392 Len=0 1851 2012-02-09 13:44:32.610 192.168.4.213 172.22.37.4 COTP 550 DT TPDU (0) [COTP fragment, 509 bytes] 1852 2012-02-09 13:44:32.639 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1853 2012-02-09 13:44:32.639 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1854 2012-02-09 13:44:32.657 192.168.4.213 172.22.37.4 TCP 590 [TCP Previous segment lost] 62479 > iso-tsap [ACK] Seq=583232 Ack=345 Win=65191 Len=536 1855 2012-02-09 13:44:32.657 192.168.4.213 172.22.37.4 TCP 108 [TCP segment of a reassembled PDU] 1856 2012-02-09 13:44:32.657 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1853#1] iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1857 2012-02-09 13:44:32.657 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1853#2] iso-tsap > 62479 [ACK] Seq=345 Ack=582736 Win=65392 Len=0 1858 2012-02-09 13:44:32.675 192.168.4.213 172.22.37.4 COTP 590 [TCP Fast Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1859 2012-02-09 13:44:32.715 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1860 2012-02-09 13:44:32.715 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=583272 Win=65392 Len=0 1861 2012-02-09 13:44:32.796 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) EOT 1862 2012-02-09 13:44:32.945 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1863 2012-02-09 13:44:32.945 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1864 2012-02-09 13:44:32.963 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1865 2012-02-09 13:44:32.963 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1863#1] iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1866 2012-02-09 13:44:32.963 192.168.4.213 172.22.37.4 TCP 576 [TCP segment of a reassembled PDU] 1867 2012-02-09 13:44:32.963 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1863#2] iso-tsap > 62479 [ACK] Seq=345 Ack=583808 Win=65392 Len=0 1868 2012-02-09 13:44:33.235 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1869 2012-02-09 13:44:33.434 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=584344 Win=65392 Len=0 1870 2012-02-09 13:44:33.447 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1871 2012-02-09 13:44:33.447 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1869#1] iso-tsap > 62479 [ACK] Seq=345 Ack=584344 Win=65392 Len=0 1872 2012-02-09 13:44:33.806 192.168.4.213 172.22.37.4 COTP 590 [TCP Retransmission] DT TPDU (0) [COTP fragment, 509 bytes] 1873 2012-02-09 13:44:34.006 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=584880 Win=65392 Len=0 1874 2012-02-09 13:44:34.018 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1875 2012-02-09 13:44:34.018 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1873#1] iso-tsap > 62479 [ACK] Seq=345 Ack=584880 Win=65392 Len=0 1876 2012-02-09 13:44:34.932 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1877 2012-02-09 13:44:35.132 
172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=585416 Win=65392 Len=0 1878 2012-02-09 13:44:35.144 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1879 2012-02-09 13:44:35.144 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1877#1] iso-tsap > 62479 [ACK] Seq=345 Ack=585416 Win=65392 Len=0 1880 2012-02-09 13:44:37.172 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1881 2012-02-09 13:44:37.372 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=585952 Win=65392 Len=0 1882 2012-02-09 13:44:37.385 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1883 2012-02-09 13:44:37.385 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1881#1] iso-tsap > 62479 [ACK] Seq=345 Ack=585952 Win=65392 Len=0 1884 2012-02-09 13:44:41.632 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1885 2012-02-09 13:44:41.832 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=586488 Win=65392 Len=0 1886 2012-02-09 13:44:41.844 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1887 2012-02-09 13:44:41.844 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1885#1] iso-tsap > 62479 [ACK] Seq=345 Ack=586488 Win=65392 Len=0 1888 2012-02-09 13:44:50.554 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1889 2012-02-09 13:44:50.753 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=587024 Win=65392 Len=0 1890 2012-02-09 13:44:50.766 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1891 2012-02-09 13:44:50.766 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1889#1] iso-tsap > 62479 [ACK] Seq=345 Ack=587024 Win=65392 Len=0 1892 2012-02-09 13:45:08.385 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1893 2012-02-09 13:45:08.585 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=587560 Win=65392 Len=0 1894 2012-02-09 13:45:08.598 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1895 2012-02-09 13:45:08.598 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1893#1] iso-tsap > 62479 [ACK] Seq=345 Ack=587560 Win=65392 Len=0 1896 2012-02-09 13:45:44.059 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU] 1897 2012-02-09 13:45:44.259 172.22.37.4 192.168.4.213 TCP 54 iso-tsap > 62479 [ACK] Seq=345 Ack=588096 Win=65392 Len=0 1898 2012-02-09 13:45:44.272 192.168.4.213 172.22.37.4 COTP 590 DT TPDU (0) [COTP fragment, 509 bytes] 1899 2012-02-09 13:45:44.272 172.22.37.4 192.168.4.213 TCP 54 [TCP Dup ACK 1897#1] iso-tsap > 62479 [ACK] Seq=345 Ack=588096 Win=65392 Len=0 1900 2012-02-09 13:46:55.386 192.168.4.213 172.22.37.4 TCP 590 [TCP Retransmission] [TCP segment of a reassembled PDU]

Some background information (not much, unfortunately, as I'm responsible only for the server part): the server (172.22.37.4) is Windows Server 2008 R2 and the client (192.168.4.213) is an Ericsson telephone exchange about which I do not know much. The client sends a file to the server using the FTAM protocol. This problem happens very often. I think either the client or the server is getting the sliding-window protocol wrong: the server sends a dup ACK, the client retransmits the lost packet, but soon afterwards the client sends packets with the wrong sequence number. Again the server sends a dup ACK and the client retransmits the lost packet, this time with a longer retransmission timeout; again the client sends a packet with the wrong sequence number; and so on. The retransmission timeout grows to roughly 4 minutes and the connection never recovers to normal.
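To put numbers on what the excerpt shows, the linked trace files can be summarized from the command line (a sketch; -Y is the display-filter flag in current tshark releases, older ones use -R instead):

tshark -r trace1.pcap -Y "tcp.analysis.retransmission" | wc -l        # total retransmissions
tshark -r trace1.pcap -Y "tcp.analysis.duplicate_ack" | head          # the dup ACK storms visible above
tshark -r trace1.pcap -q -z io,stat,10,"tcp.analysis.retransmission"  # retransmissions per 10-second bucket

The io,stat table makes the growing gaps between retransmissions (0.2 s up to minutes) easy to see, which is the classic signature of a retransmission timeout doubling on every failed attempt.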

    Read the article

  • CentOS Client - Unable to Establish iSCSI connection with multiple interfaces on the initiator

    - by slashdot
So after upgrading to CentOS 6.2, I am seemingly no longer able to log in to my iSCSI targets. I have multiple interfaces on different subnets on the system, and I first thought it had to do with the fact that I may not be binding the correct interfaces, which seems to be the case when looking at netstat, as this is clearly wrong: [root]? netstat -na|grep .90 tcp 0 1 10.10.100.60:42354 10.10.8.90:3260 SYN_SENT tcp 0 1 10.10.100.60:40777 10.10.9.90:3260 SYN_SENT I then went ahead and disabled all but one interface, and as a result netstat appears to be correct, but the login issue remains. I am positive that the target never sees a packet, because I see nothing but SYN_SENT. I know the problem is on my client, because the target is servicing multiple systems, none of which are CentOS 6.2. At this point I am pretty confident that something changed between CentOS 6.0/6.1 and 6.2. So if anyone has any thoughts, or has run into this, I would very much like to hear them. [root]? iscsiadm --mode node --targetname iqn.2011-12.dom.homer:01:lab-centos-servers-00001 --portal 10.10.8.90:3260,2 --interface=sw-iscsi-0 --login Logging in to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260] (multiple) iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals [root]? netstat -rn Kernel IP routing table Destination Gateway Genmask Flags MSS Window irtt Iface 10.10.8.0 0.0.0.0 255.255.255.0 U 0 0 0 eth2.7 10.10.9.0 0.0.0.0 255.255.255.0 U 0 0 0 eth3.7 10.10.100.0 0.0.0.0 255.255.252.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth0 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth1 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth2.7 169.254.0.0 0.0.0.0 255.255.0.0 U 0 0 0 eth3.7 0.0.0.0 10.10.100.1 0.0.0.0 UG 0 0 0 eth0 Output of ip addr show for the two interfaces involved: [root]? for i in 2.7 3.7; do ip addr show eth$i; done 6: eth2.7@eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:8d brd ff:ff:ff:ff:ff:ff inet 10.10.8.60/24 brd 10.10.8.255 scope global eth2.7 inet6 fe80::20c:29ff:fe94:5b8d/64 scope link valid_lft forever preferred_lft forever 7: eth3.7@eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP link/ether 00:0c:29:94:5b:97 brd ff:ff:ff:ff:ff:ff inet 10.10.9.60/24 brd 10.10.9.255 scope global eth3.7 inet6 fe80::20c:29ff:fe94:5b97/64 scope link valid_lft forever preferred_lft forever Update 01/06/2012: This issue is getting even more interesting by the day, it seems. I went back a few weeks and grabbed a snapshot of this system from before the upgrade to 6.2. I spun up a new system from the snapshot and reconfigured the interface info and host keys, as well as the iSCSI initiator and iSCSI interface info, to match the new MACs. I changed nothing else. Then I attempted to connect to my targets: no issues at all. I cannot say that this was unexpected. I then compared the sysctl settings from both systems; there were differences after the upgrade, but nothing seemingly relevant to iSCSI or IP that could contribute to this. I also noticed that two sessions per connection were now enabled by default after the upgrade, but I changed it back to 1 session in /etc/iscsi/iscsid.conf.
On the problematic system we can see that source interface is seemingly wrong, but even when I disable the 10.10.100 interface, problems persist. So, while this may be relevant, I could not validate it for certain. Needless to say, further research is necessary. Something is clearly different between releases. Working system is on 6.1, and non-working is 6.2. ::Working System:: tcp 0 0 10.10.8.210:39566 10.10.8.90:3260 ESTABLISHED tcp 0 0 10.10.9.210:46518 10.10.9.90:3260 ESTABLISHED [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.210 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.210 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.210 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 ::Non-working System:: tcp 0 1 10.10.100.60:44737 10.10.9.90:3260 SYN_SENT tcp 0 1 10.10.100.60:55479 10.10.8.90:3260 SYN_SENT [root]? ip route show 10.10.8.0/24 dev eth2.6 proto kernel scope link src 10.10.8.60 10.10.9.0/24 dev eth3.7 proto kernel scope link src 10.10.9.60 10.10.100.0/22 dev eth0 proto kernel scope link src 10.10.100.60 169.254.0.0/16 dev eth0 scope link metric 1002 169.254.0.0/16 dev eth2.6 scope link metric 1006 169.254.0.0/16 dev eth3.7 scope link metric 1007 default via 10.10.100.1 dev eth0 And the result is still same: [root]? iscsiadm: Could not login to [iface: sw-iscsi-0, target: iqn.2011-12.dom.homer:01:lab-centos-servers-00001, portal: 10.10.8.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not login to [iface: sw-iscsi-1, target: iqn.2011-12.dom.homer:02:lab-centos-servers-00001, portal: 10.10.9.90,3260]. iscsiadm: initiator reported error (8 - connection timed out) iscsiadm: Could not log into all portals Update 01/08/2012: I believe I have been able to figure out the answer to my issue. It is quite obscure and I doubt this will happen to anyone else any time soon. It turns out that setting iface.iscsi_ifacename and iface.hwaddress in the interfaces configuration file is not legal. When one manually adds an iscsi target, such as below, all settings from the interface config file are copied into the node config file, that gets created by the below command. Result is parameters iface.iscsi_ifacename and iface.hwaddress together in the same config file. These parameters are seemingly mutually exclusive, which does not exactly make sense, or there is perhaps an oversight in the codepath. Perhaps I will investigate further. # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0 # iscsiadm -m node --op new -T iqn.2011-12.dom.homer:02:lab-centos-servers-00001 -p 10.10.9.90,3260,2 -I sw-iscsi-1 Notice, below I commented out iface.hwaddress and iface.ipaddress, after which I re-added targets, with same command as above. All works just fine. [root]? 
cat *
# BEGIN RECORD 2.0-872.33.el6
iface.iscsi_ifacename = sw-iscsi-0
iface.net_ifacename = eth2.6
#iface.hwaddress = XX:XX:XX:XX:XX:XX
#iface.ipaddress = 10.10.8.60
iface.transport_name = tcp
iface.vlan_id = 6
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD
# BEGIN RECORD 2.0-872.33.el6
iface.iscsi_ifacename = sw-iscsi-1
iface.net_ifacename = eth3.7
#iface.hwaddress = XX:XX:XX:XX:XX:XX
#iface.ipaddress = 10.10.9.60
iface.transport_name = tcp
iface.vlan_id = 7
iface.vlan_priority = 0
iface.iface_num = 0
iface.mtu = 0
iface.port = 0
# END RECORD

Again, the chances of this happening to someone else are slim to none, so this was probably a waste of time to type up. But if someone does encounter this issue, I hope this post helps.
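For reference, the same binding can be expressed with iscsiadm itself instead of hand-editing the iface files (a sketch using the names from the configs above; it sets only iface.net_ifacename, deliberately leaving iface.hwaddress unset per the finding that the two bindings conflict once copied into the node record):

iscsiadm -m iface -o new -I sw-iscsi-0
iscsiadm -m iface -o update -I sw-iscsi-0 -n iface.net_ifacename -v eth2.6
iscsiadm -m node --op new -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90,3260,2 -I sw-iscsi-0
iscsiadm -m node -T iqn.2011-12.dom.homer:01:lab-centos-servers-00001 -p 10.10.8.90:3260,2 -I sw-iscsi-0 --login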

    Read the article

  • How to Upload a file from client to server using OFBIZ?

    - by SIVAKUMAR.J
Hi all, I'm new to OFBiz, so please forgive any mistakes in my question. I don't know all the OFBiz terminology yet, and my question may not be entirely clear; please try to understand it at my level and answer with concrete examples, since very high-level answers are hard for me to follow.

My problem: I created a project inside the ofbiz/hot-deploy folder named "productionmgntSystem". Inside the folder "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem" I created a .ftl file named "app_details_1.ftl". The following is the code of this file:

<html>
<head>
<meta http-equiv="Content-Type" content="text/html; charset=ISO-8859-1">
<title>Insert title here</title>
<script TYPE="TEXT/JAVASCRIPT" language=""JAVASCRIPT">
function uploadFile() { //alert("Before calling upload.jsp"); window.location='<@ofbizUrl>testing_service1</@ofbizUrl>' }
</script>
</head>
<!-- <form action="<@ofbizUrl>testing_service1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm"> -->
<form action="<@ofbizUrl>logout1</@ofbizUrl>" enctype="multipart/form-data" name="app_details_frm">
<center style="height: 299px; ">
<table border="0" style="height: 177px; width: 788px">
<tr style="height: 115px; "> <td style="width: 103px; "> <td style="width: 413px; "><h1>APPLICATION DETAILS</h1> <td style="width: 55px; "> </tr>
<tr> <td style="width: 125px; ">Application name : </td> <td> <input name="app_name_txt" id="txt_1" value=" " /> </td> </tr>
<tr> <td style="width: 125px; ">Excell sheet &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;: </td> <td> <input type="file" name="filename"/> </td> </tr>
<tr> <td> <!-- <input type="button" name="logout1_cmd" value="Logout" onclick="logout1()"/> --> <input type="submit" name="logout_cmd" value="logout"/> </td> <td> <!-- <input type="submit" name="upload_cmd" value="Submit" /> --> <input type="button" name="upload1_cmd" value="Upload" onclick="uploadFile()"/> </td> </tr>
</table>
</center>
</form>
</html>

The following is present in the file "ofbiz\hot-deploy\productionmgntSystem\webapp\productionmgntSystem\WEB-INF\controller.xml":

...
<request-map uri="testing_service1"> <security https="true" auth="true"/> <event type="java" path="org.ofbiz.productionmgntSystem.web_app_req.WebServices1" invoke="testingService"/> <response name="ok" type="view" value="ok_view"/> <response name="exception" type="view" value="exception_view"/> </request-map>
...
<view-map name="ok_view" type="ftl" page="ok_view.ftl"/> <view-map name="exception_view" type="ftl" page="exception_view.ftl"/>
...
The following are the coding present in the file "ofbiz\hot-deploy\productionmgntSystem\src\org\ofbiz\productionmgntSystem\web_app_req\WebServices1.java" package org.ofbiz.productionmgntSystem.web_app_req; import javax.servlet.http.HttpServletRequest; import javax.servlet.http.HttpServletResponse; import java.io.DataInputStream; import java.io.FileOutputStream; import java.io.IOException; public class WebServices1 { public static String testingService(HttpServletRequest request, HttpServletResponse response) { //int i=0; String result="ok"; System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- Start"); String contentType=request.getContentType(); System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- contentType : "+contentType); String str=new String(); // response.setContentType("text/html"); //PrintWriter writer; if ((contentType != null) && (contentType.indexOf("multipart/form-data") >= 0)) { System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) after if (contentType != null)"); try { // writer=response.getWriter(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try Start"); DataInputStream in = new DataInputStream(request.getInputStream()); int formDataLength = request.getContentLength(); byte dataBytes[] = new byte[formDataLength]; int byteRead = 0; int totalBytesRead = 0; //this loop converting the uploaded file into byte code while (totalBytesRead < formDataLength) { byteRead = in.read(dataBytes, totalBytesRead,formDataLength); totalBytesRead += byteRead; } String file = new String(dataBytes); //for saving the file name String saveFile = file.substring(file.indexOf("filename=\"") + 10); saveFile = saveFile.substring(0, saveFile.indexOf("\n")); saveFile = saveFile.substring(saveFile.lastIndexOf("\\")+ 1,saveFile.indexOf("\"")); int lastIndex = contentType.lastIndexOf("="); String boundary = contentType.substring(lastIndex + 1,contentType.length()); int pos; //extracting the index of file pos = file.indexOf("filename=\""); pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; pos = file.indexOf("\n", pos) + 1; int boundaryLocation = file.indexOf(boundary, pos) - 4; int startPos = ((file.substring(0, pos)).getBytes()).length; int endPos = ((file.substring(0, boundaryLocation)).getBytes()).length; //creating a new file with the same name and writing the content in new file FileOutputStream fileOut = new FileOutputStream("/"+saveFile); fileOut.write(dataBytes, startPos, (endPos - startPos)); fileOut.flush(); fileOut.close(); System.out.println("\n\n\t**********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - try End"); } catch(IOException ioe) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch IOException"); //ioe.printStackTrace(); return("exception"); } catch(Exception ex) { System.out.println("\n\n\t*********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) - Catch Exception"); 
return("exception"); } } else { System.out.println("\n\n\t********************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response) else part"); result="exception"; } System.out.println("\n\n\t*************************************\n\tInside WebServices1.testingService(HttpServletRequest request, HttpServletResponse response)- End"); return(result); } } I want to upload a file to the server.The file is get from user "<input type="file"..> tag in the "app_details_1.ftl" file & it is updated into the server by using the method "testingService(HttpServletRequest request, HttpServletResponse response)" in the class "WebServices1".But the file is not uploaded. Give me a good solution for uploading a file to the server. Thanks & Regards, Sivakumar.J

    Read the article

  • How to run a DHCP service on Windows 7 Home

    - by Joshua Lim
I'm trying to set up a DHCP server on Windows 7 Home. I tried a couple of freeware DHCP servers I found on the Internet, but none seemed to work. What I did: on the Windows 7 machine I installed the DHCP server with the range 192.168.1.12-192.168.1.256, and I set the Gigabit Ethernet adapter to a static IP address of 192.168.1.11 with a subnet mask of 255.255.255.0. An ipconfig showed:

Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Broadcom NetLink (TM) Gigabit Ethernet Physical Address. . . . . . . . . : 8C-73-6E-75-A7-56 DHCP Enabled. . . . . . . . . . . : No Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::196d:b6bb:8f93:2555%12(Preferred) IPv4 Address. . . . . . . . . . . : 192.168.1.11(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.255.0 Default Gateway . . . . . . . . . : DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled

I connected another Windows 7 machine to the "DHCP server" using a crossover cable and set the network adapter on that machine to obtain an IP address automatically. The client fails to acquire an IP address from the DHCP server and shows an autoconfigured IPv4 address instead. Here's the information returned by ipconfig /all on the client machine:

Ethernet adapter Local Area Connection: Connection-specific DNS Suffix . : Description . . . . . . . . . . . : Atheros AR8152/8158 PCI-E Fast Ethernet Controller (NDIS 6.20) Physical Address. . . . . . . . . : 54-04-A6-40-96-4B DHCP Enabled. . . . . . . . . . . : Yes Autoconfiguration Enabled . . . . : Yes Link-local IPv6 Address . . . . . : fe80::4885:4082:5572:5a85%12(Preferred) Autoconfiguration IPv4 Address. . : 169.254.90.133(Preferred) Subnet Mask . . . . . . . . . . . : 255.255.0.0 Default Gateway . . . . . . . . . : DHCPv6 IAID . . . . . . . . . . . : 341050534 DHCPv6 Client DUID. . . . . . . . : 00-01-00-01-17-54-78-12-00-08-CA-46-4C-5A DNS Servers . . . . . . . . . . . : fec0:0:0:ffff::1%1 fec0:0:0:ffff::2%1 fec0:0:0:ffff::3%1 NetBIOS over Tcpip. . . . . . . . : Enabled

The DHCP Client service is running on both machines. I've tried many times but failed, and googling returned no useful information for my scenario. Have I missed a step? Thanks.
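Two quick checks that usually narrow this down (a sketch using built-in Windows tools, run from an elevated command prompt). Note, incidentally, that 192.168.1.256 is not a valid IPv4 address (octets stop at 255), so a pool declared as ending at .256 may itself be rejected by the DHCP software:

rem On the server: is anything actually bound to the DHCP port (UDP 67)?
netstat -an | findstr ":67"
rem On the client: force a fresh DHCPDISCOVER instead of waiting for APIPA
ipconfig /release
ipconfig /renew

If nothing is listening on UDP 67, the freeware server never started (or Windows Firewall is blocking it); if it is listening but the client still falls back to a 169.254.x.x address, firewall rules for UDP 67/68 on the server are the next suspect.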

    Read the article

  • Unable to PPTP through NAT on Cisco 881

    - by MasterRoot24
    I'm trying to connect to a PPTP server which is sat behind a Cisco 881 NAT router. The server is running Ubuntu Server 12.04 and is running Poptop pptpd as the PPTP daemon listening for connections. As discussed in my other question, I'm trying to setup a Cisco 881 router to replace my old Linksys WAG320N. This same server and WAN connection worked fine with the WAG320N with no special configuration, other than allowing 1723 in through the firewall. On the Cisco 881, I'm using the newer ip nat enable or NAT NVI to setup static routes in through the firewall for the services running behind the router. My reason being that I can't run another copy of my live DNS domains internally with local IP addresses in. For the purposes of this question, though, I have rebuilt the router with ip nat inside/outside style NAT'ing, but this issue is still apparent. HTTP/SMTP/IMAP etc. all work ok from both the WAN and LAN interfaces of the router. I'm only having issues with SIP (see other question) and PPTP. My issue is that the GRE doesn't appear to be passing through NAT correctly and one end of the connection is not receiving GRE traffic when it should be, so the server hangs up the connection. Here's an example of /var/log/syslog with debug enabled in /etc/pptpd.conf: Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: MGR: Launching /usr/sbin/pptpctrl to handle client Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: local address = 192.168.1.50 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: remote address = 192.168.1.51 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pppd options file = /etc/ppp/pptpd-options Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection started Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 1) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a START CTRL CONN RPLY packet Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 156 bytes to the client. Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 7) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Set parameters to 100000000 maxbps, 64 window size Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Made a OUT CALL RPLY packet Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Starting call (launching pppd, opening GRE) Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: pty_fd = 6 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: tty_fd = 7 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: I wrote 32 bytes to the client. Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: CTRL: Sent packet to client Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): program binary = /usr/sbin/pppd Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): local address = 192.168.1.50 Dec 11 21:06:30 <HOSTNAME> pptpd[22627]: CTRL (PPPD Launcher): remote address = 192.168.1.51 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Plugin /usr/lib/pptpd/pptpd-logwtmp.so loaded. Dec 11 21:06:30 <HOSTNAME> pppd[22627]: pppd 2.4.5 started by root, uid 0 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Using interface ppp0 Dec 11 21:06:30 <HOSTNAME> pppd[22627]: Connect: ppp0 <--> /dev/pts/3 Dec 11 21:06:30 <HOSTNAME> pptpd[22626]: GRE: Bad checksum from pppd. 
Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Received PPTP Control Message (type: 15) Dec 11 21:06:31 <HOSTNAME> pptpd[22626]: CTRL: Got a SET LINK INFO packet with standard ACCMs Dec 11 21:07:00 <HOSTNAME> pppd[22627]: LCP: timeout sending Config-Requests Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Connection terminated. Dec 11 21:07:00 <HOSTNAME> avahi-daemon[1042]: Withdrawing workstation service for ppp0. Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Modem hangup Dec 11 21:07:00 <HOSTNAME> pppd[22627]: Exit. Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: GRE: read(fd=6,buffer=6075a0,len=8196) from PTY failed: status = -1 error = Input/output error, usually caused by unexpected termination of pppd, check option syntax and pppd logs Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: PTY read or GRE write failed (pty,gre)=(6,7) Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Reaping child PPP[22627] Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Client 82.132.248.216 control connection finished Dec 11 21:07:00 <HOSTNAME> pptpd[22626]: CTRL: Exiting now Dec 11 21:07:00 <HOSTNAME> pptpd[5803]: MGR: Reaped child 22626 As far as Cisco are concerned, all I need is ip nat source static tcp <SERVER LAN IP> 1723 interface FastEthernet4 1723 but of course this doesn't seem to the be helping the GRE traffic through as it should. Trying the connection to the LAN IP of the server from the same LAN as the server (behind the router), the PPTP connection works fine, so I'm confident that the server's config is ok. Furthermore, all I needed on my WAG320N was to open 1723 in the firewall. Here's my current router config: ! ! Last configuration change at 20:20:15 UTC Tue Dec 11 2012 by xxx version 15.2 no service pad service timestamps debug datetime msec service timestamps log datetime msec service password-encryption ! hostname xxx ! boot-start-marker boot-end-marker ! ! enable secret 4 xxxx ! aaa new-model ! ! aaa authentication login local_auth local ! ! ! ! ! aaa session-id common ! memory-size iomem 10 ! crypto pki trustpoint TP-self-signed-xxx enrollment selfsigned subject-name cn=IOS-Self-Signed-Certificate-xxx revocation-check none rsakeypair TP-self-signed-xxx ! ! crypto pki certificate chain TP-self-signed-xxx certificate self-signed 01 xxx quit ip gratuitous-arps ip auth-proxy max-login-attempts 5 ip admission max-login-attempts 5 ! ! ! ! ! ip domain list dmz.xxx.local ip domain list xxx.local ip domain name dmz.xxx.local ip name-server 192.168.1.x ip cef login block-for 3 attempts 3 within 3 no ipv6 cef ! ! multilink bundle-name authenticated license udi pid CISCO881-SEC-K9 sn xxx ! ! username admin privilege 15 secret 4 xxx username joe secret 4 xxx ! ! ! ! ! ip ssh time-out 60 ! ! ! ! ! ! ! ! ! interface FastEthernet0 no ip address ! interface FastEthernet1 no ip address ! interface FastEthernet2 no ip address ! interface FastEthernet3 switchport access vlan 2 no ip address ! interface FastEthernet4 ip address dhcp ip nat enable duplex auto speed auto ! interface Vlan1 ip address 192.168.1.x 255.255.255.0 no ip redirects no ip unreachables no ip proxy-arp ip nat enable ! interface Vlan2 ip address 192.168.0.x 255.255.255.0 ! ip forward-protocol nd ip http server ip http access-class 1 ip http authentication local ip http secure-server ! ! ip nat source list 1 interface FastEthernet4 overload ip nat source list 2 interface FastEthernet4 overload ip nat source static tcp 192.168.1.x 1723 interface FastEthernet4 1723 ! ! access-list 1 permit 192.168.0.0 0.0.0.255 access-list 2 permit 192.168.1.0 0.0.0.255 ! ! ! ! 
control-plane ! ! banner motd Authorized Access only ! line con 0 exec-timeout 15 0 login authentication local_auth line aux 0 exec-timeout 15 0 login authentication local_auth line vty 0 4 access-class 2 in login authentication local_auth length 0 transport input all ! ! end

UPDATE 16/12/2012: The only progress I have been able to make on this issue is that I'm now confident it is caused by the GRE traffic (which is required for the PPTP connection to complete) being blocked. When attempting a connection, show ip nat nvi translations shows that both a TCP translation on 1723 and a GRE translation are set up. I can see GRE-related packets on the LAN that the server is on, so I am led to believe that the server is sending GRE packets; however, running Wireshark on a client PC during a connection attempt shows absolutely no GRE packets. While there are no configuration directives in the config posted above (that I can pinpoint) which would specifically block them, it appears that GRE packets are not being allowed in/out of the router, even though a NAT translation entry is set up to the server's LAN address. Would anyone be able to help me verify that GRE packets are not being blocked by the router, so that this can be ruled out as a possible issue?
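A couple of hedged diagnostics on the router side (the show/debug commands are standard IOS 15.x; the ALG remark is an assumption worth testing, not a verified statement about this feature set):

show ip nat nvi translations | include gre   # is the GRE translation really created on every attempt?
debug ip nat detailed                        # watch translations being built in real time
undebug all                                  # stop the debug afterwards

IOS NAT tracks PPTP's GRE call IDs through a PPTP application-layer gateway that is, as far as I know, documented for classic ip nat inside/outside rather than the NVI style; since the same behaviour was reportedly seen with inside/outside NAT too, the debug output is the quickest way to see whether the client's GRE packets reach the router at all.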

    Read the article

  • Unusually high dentry cache usage

    - by Wolfgang Stengel
Problem: A CentOS machine with kernel 2.6.32 and 128 GB physical RAM ran into trouble a few days ago. The responsible system administrator tells me that the PHP-FPM application was no longer responding to requests in a timely manner due to swapping, and having seen in free that almost no memory was left, he chose to reboot the machine. I know that free memory can be a confusing concept on Linux and a reboot perhaps was the wrong thing to do. However, the administrator blames the PHP application (which I am responsible for) and refuses to investigate further. What I could find out on my own is this:

- Before the restart, the free memory (incl. buffers and cache) was only a couple of hundred MB.
- Before the restart, /proc/meminfo reported a Slab memory usage of around 90 GB (yes, GB).
- After the restart, the free memory was 119 GB, going down to around 100 GB within an hour as the PHP-FPM workers (about 600 of them) came back to life, each of them showing between 30 and 40 MB in the RES column in top (which has been this way for months and is perfectly reasonable given the nature of the PHP application). There is nothing else in the process list that consumes an unusual or noteworthy amount of RAM.
- After the restart, Slab memory was around 300 MB.
- I have been monitoring the system ever since, and most notably the Slab memory is increasing in a straight line at a rate of about 5 GB per day. Free memory as reported by free and /proc/meminfo decreases at the same rate. Slab is currently at 46 GB. According to slabtop most of it is used for dentry entries.

Free memory: free -m total used free shared buffers cached Mem: 129048 76435 52612 0 144 7675 -/+ buffers/cache: 68615 60432 Swap: 8191 0 8191

Meminfo: cat /proc/meminfo MemTotal: 132145324 kB MemFree: 53620068 kB Buffers: 147760 kB Cached: 8239072 kB SwapCached: 0 kB Active: 20300940 kB Inactive: 6512716 kB Active(anon): 18408460 kB Inactive(anon): 24736 kB Active(file): 1892480 kB Inactive(file): 6487980 kB Unevictable: 8608 kB Mlocked: 8608 kB SwapTotal: 8388600 kB SwapFree: 8388600 kB Dirty: 11416 kB Writeback: 0 kB AnonPages: 18436224 kB Mapped: 94536 kB Shmem: 6364 kB Slab: 46240380 kB SReclaimable: 44561644 kB SUnreclaim: 1678736 kB KernelStack: 9336 kB PageTables: 457516 kB NFS_Unstable: 0 kB Bounce: 0 kB WritebackTmp: 0 kB CommitLimit: 72364108 kB Committed_AS: 22305444 kB VmallocTotal: 34359738367 kB VmallocUsed: 480164 kB VmallocChunk: 34290830848 kB HardwareCorrupted: 0 kB AnonHugePages: 12216320 kB HugePages_Total: 2048 HugePages_Free: 2048 HugePages_Rsvd: 0 HugePages_Surp: 0 Hugepagesize: 2048 kB DirectMap4k: 5604 kB DirectMap2M: 2078720 kB DirectMap1G: 132120576 kB

Slabtop: slabtop --once Active / Total Objects (% used) : 225920064 / 226193412 (99.9%) Active / Total Slabs (% used) : 11556364 / 11556415 (100.0%) Active / Total Caches (% used) : 110 / 194 (56.7%) Active / Total Size (% used) : 43278793.73K / 43315465.42K (99.9%) Minimum / Average / Maximum Object : 0.02K / 0.19K / 4096.00K OBJS ACTIVE USE OBJ SIZE SLABS OBJ/SLAB CACHE SIZE NAME 221416340 221416039 3% 0.19K 11070817 20 44283268K dentry 1123443 1122739 99% 0.41K 124827 9 499308K fuse_request 1122320 1122180 99% 0.75K 224464 5 897856K fuse_inode 761539 754272 99% 0.20K 40081 19 160324K vm_area_struct 437858 223259 50% 0.10K 11834 37 47336K buffer_head 353353 347519 98% 0.05K 4589 77 18356K anon_vma_chain 325090 324190 99% 0.06K 5510 59 22040K size-64 146272 145422 99% 0.03K 1306 112 5224K size-32 137625 137614 99% 1.02K 45875 3 183500K
nfs_inode_cache 128800 118407 91% 0.04K 1400 92 5600K anon_vma 59101 46853 79% 0.55K 8443 7 33772K radix_tree_node 52620 52009 98% 0.12K 1754 30 7016K size-128 19359 19253 99% 0.14K 717 27 2868K sysfs_dir_cache 10240 7746 75% 0.19K 512 20 2048K filp

VFS cache pressure: cat /proc/sys/vm/vfs_cache_pressure 125
Swappiness: cat /proc/sys/vm/swappiness 0

I know that unused memory is wasted memory, so this should not necessarily be a bad thing (especially given that 44 GB are shown as SReclaimable). However, the machine apparently experienced problems nonetheless, and I'm afraid the same will happen again in a few days when Slab surpasses 90 GB.

Questions: I have these questions:
1. Am I correct in thinking that Slab memory is always physical RAM, and that this number has already been subtracted from the MemFree value?
2. Is such a high number of dentry entries normal? The PHP application has access to around 1.5 M files, but most of them are archives and are not accessed at all by regular web traffic.
3. What could explain the fact that the number of cached inodes is much lower than the number of cached dentries? Shouldn't they be related somehow?
4. If the system runs into memory trouble, shouldn't the kernel free some of the dentries automatically? What could be a reason that this does not happen?
5. Is there any way to "look into" the dentry cache to see what all this memory is (i.e. which paths are being cached)? Perhaps this points to some kind of memory leak, a symlink loop, or indeed to something the PHP application is doing wrong.
6. The PHP application code as well as all asset files are mounted via the GlusterFS network file system; could that have something to do with it?

Please keep in mind that I cannot investigate as root, only as a regular user, and that the administrator refuses to help. He won't even run the typical echo 2 > /proc/sys/vm/drop_caches test to see whether the Slab memory is indeed reclaimable. Any insights into what could be going on and how to investigate further would be greatly appreciated.
Updates Some further diagnostic information: Mounts: cat /proc/self/mounts rootfs / rootfs rw 0 0 proc /proc proc rw,relatime 0 0 sysfs /sys sysfs rw,relatime 0 0 devtmpfs /dev devtmpfs rw,relatime,size=66063000k,nr_inodes=16515750,mode=755 0 0 devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0 tmpfs /dev/shm tmpfs rw,relatime 0 0 /dev/mapper/sysvg-lv_root / ext4 rw,relatime,barrier=1,data=ordered 0 0 /proc/bus/usb /proc/bus/usb usbfs rw,relatime 0 0 /dev/sda1 /boot ext4 rw,relatime,barrier=1,data=ordered 0 0 tmpfs /phptmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 tmpfs /wsdltmp tmpfs rw,noatime,size=1048576k,nr_inodes=15728640,mode=777 0 0 none /proc/sys/fs/binfmt_misc binfmt_misc rw,relatime 0 0 cgroup /cgroup/cpuset cgroup rw,relatime,cpuset 0 0 cgroup /cgroup/cpu cgroup rw,relatime,cpu 0 0 cgroup /cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0 cgroup /cgroup/memory cgroup rw,relatime,memory 0 0 cgroup /cgroup/devices cgroup rw,relatime,devices 0 0 cgroup /cgroup/freezer cgroup rw,relatime,freezer 0 0 cgroup /cgroup/net_cls cgroup rw,relatime,net_cls 0 0 cgroup /cgroup/blkio cgroup rw,relatime,blkio 0 0 /etc/glusterfs/glusterfs-www.vol /var/www fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 /etc/glusterfs/glusterfs-upload.vol /var/upload fuse.glusterfs rw,relatime,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 0 0 sunrpc /var/lib/nfs/rpc_pipefs rpc_pipefs rw,relatime 0 0 172.17.39.78:/www /data/www nfs rw,relatime,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 0 0 Mount info: cat /proc/self/mountinfo 16 21 0:3 / /proc rw,relatime - proc proc rw 17 21 0:0 / /sys rw,relatime - sysfs sysfs rw 18 21 0:5 / /dev rw,relatime - devtmpfs devtmpfs rw,size=66063000k,nr_inodes=16515750,mode=755 19 18 0:11 / /dev/pts rw,relatime - devpts devpts rw,gid=5,mode=620,ptmxmode=000 20 18 0:16 / /dev/shm rw,relatime - tmpfs tmpfs rw 21 1 253:1 / / rw,relatime - ext4 /dev/mapper/sysvg-lv_root rw,barrier=1,data=ordered 22 16 0:15 / /proc/bus/usb rw,relatime - usbfs /proc/bus/usb rw 23 21 8:1 / /boot rw,relatime - ext4 /dev/sda1 rw,barrier=1,data=ordered 24 21 0:17 / /phptmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 25 21 0:18 / /wsdltmp rw,noatime - tmpfs tmpfs rw,size=1048576k,nr_inodes=15728640,mode=777 26 16 0:19 / /proc/sys/fs/binfmt_misc rw,relatime - binfmt_misc none rw 27 21 0:20 / /cgroup/cpuset rw,relatime - cgroup cgroup rw,cpuset 28 21 0:21 / /cgroup/cpu rw,relatime - cgroup cgroup rw,cpu 29 21 0:22 / /cgroup/cpuacct rw,relatime - cgroup cgroup rw,cpuacct 30 21 0:23 / /cgroup/memory rw,relatime - cgroup cgroup rw,memory 31 21 0:24 / /cgroup/devices rw,relatime - cgroup cgroup rw,devices 32 21 0:25 / /cgroup/freezer rw,relatime - cgroup cgroup rw,freezer 33 21 0:26 / /cgroup/net_cls rw,relatime - cgroup cgroup rw,net_cls 34 21 0:27 / /cgroup/blkio rw,relatime - cgroup cgroup rw,blkio 35 21 0:28 / /var/www rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-www.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 36 21 0:29 / /var/upload rw,relatime - fuse.glusterfs /etc/glusterfs/glusterfs-upload.vol rw,user_id=0,group_id=0,default_permissions,allow_other,max_read=131072 37 21 0:30 / /var/lib/nfs/rpc_pipefs rw,relatime - rpc_pipefs sunrpc rw 39 21 0:31 / /data/www rw,relatime - nfs 
172.17.39.78:/www rw,vers=3,rsize=65536,wsize=65536,namlen=255,hard,proto=tcp,port=38467,timeo=600,retrans=2,sec=sys,mountaddr=172.17.39.78,mountvers=3,mountport=38465,mountproto=tcp,local_lock=none,addr=172.17.39.78 GlusterFS config: cat /etc/glusterfs/glusterfs-www.vol volume remote1 type protocol/client option transport-type tcp option remote-host 172.17.39.71 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote2 type protocol/client option transport-type tcp option remote-host 172.17.39.72 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote3 type protocol/client option transport-type tcp option remote-host 172.17.39.73 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume remote4 type protocol/client option transport-type tcp option remote-host 172.17.39.74 option ping-timeout 10 option transport.socket.nodelay on # undocumented option for speed # http://gluster.org/pipermail/gluster-users/2009-September/003158.html option remote-subvolume /data/www end-volume volume replicate1 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote1 remote2 end-volume volume replicate2 type cluster/replicate option lookup-unhashed off # off will reduce cpu usage, and network option local-volume-name 'hostname' subvolumes remote3 remote4 end-volume volume distribute type cluster/distribute subvolumes replicate1 replicate2 end-volume volume iocache type performance/io-cache option cache-size 8192MB # default is 32MB subvolumes distribute end-volume volume writeback type performance/write-behind option cache-size 1024MB option window-size 1MB subvolumes iocache end-volume ### Add io-threads for parallel requisitions volume iothreads type performance/io-threads option thread-count 64 # default is 16 subvolumes writeback end-volume volume ra type performance/read-ahead option page-size 2MB option page-count 16 option force-atime-update no subvolumes iothreads end-volume
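For whoever does have root on the box, a non-destructive way to test the "is it reclaimable?" theory directly (a sketch; drop_caches=2 frees only dentry and inode caches and is safe once dirty data has been synced):

cat /proc/sys/fs/dentry-state       # in-use vs. unused dentry counts
grep Slab /proc/meminfo             # baseline
sync
echo 2 > /proc/sys/vm/drop_caches   # reclaim dentry and inode caches only
grep Slab /proc/meminfo             # did Slab collapse back down?

If Slab does shrink, raising vm.vfs_cache_pressure well above the current 125 (it accepts values into the thousands) makes the kernel prefer reclaiming dentries over other memory. If it does not shrink, the dentries are pinned in use, which on a FUSE/GlusterFS mount would point at the filesystem client rather than at the VM subsystem.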

    Read the article

  • Linux servers going into Halt when pressing Control-D in putty or exit in the shell

    - by Itai Ganot
Since noon today, a number of our Linux CentOS servers have been going to halt whenever I type exit or use Ctrl-D to close the PuTTY window. Has anyone encountered this weird behavior before? I've checked the alias list on the servers and there is no alias involving the halt command. After a server came back online I checked the history and saw a "logout" command there, but nothing related to halt. At first I thought it happened only from my computer, but later I realized that it happens to everyone who types exit, logout or Ctrl-D. Two of these servers are our main iptables firewalls, so this is critical; your assistance is much appreciated. It looks like this, and it only happens on servers with active iptables:

[root@srv1 bin]# ssh srv2
root@srv2's password:
Last login: Sun Nov 11 17:19:41 2012 from 192.168.12.98
[root@srv2 ~]# vim /etc/crontab
[root@srv2 ~]# exit
logout
Broadcast message from root (pts/1) (Tue Nov 13 10:44:04 2012):
The system is going down for system halt NOW!
Connection to srv2 closed.
[root@srv1 bin]#

In my troubleshooting I came across the strace command, so I opened two bash windows to one of the problematic servers and ran strace -p PID_of_bash. When I typed exit in the first shell, it did go to halt; the strace output is attached. If you can look it over and tell me whether you see anything suspicious, I'd be more than thankful.

RER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGWINCH, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, 8) = 0 socket(PF_NETLINK, SOCK_RAW, 9) = 3 sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(2)=[{"\25\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"exit\0", 5}], msg_controllen=0, msg_flags=0}, 0) = 21 close(3) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "logout\n", 7) = 7 write(2, "There are stopped jobs.\n", 24) = 24 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 pipe([3, 4]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x2b0e45db6fe0) = 23717 setpgid(23717, 23717) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 close(3) = 0 close(4) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [23717]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 wait4(-1, [{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WSTOPPED|WCONTINUED, NULL) = 23717 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 ioctl(255, SNDCTL_TMR_TIMEBASE
or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(255, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395da984, WNOHANG|WSTOPPED|WCONTINUED, NULL) = 0 rt_sigreturn(0x11) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 ioctl(0, TIOCSWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 ioctl(0, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig -icanon -echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [INT QUIT ALRM TSTP TTIN TTOU], [], 8) = 0 rt_sigaction(SIGINT, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGWINCH, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "[root@g2-lga ~]# ", 17) = 17 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "e", 1) = 1 write(2, "e", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "x", 1) = 1 write(2, "x", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "i", 1) = 1 write(2, "i", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "t", 1) = 1 write(2, "t", 1) = 1 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 read(0, "\r", 1) = 1 write(2, "\n", 1) = 1 rt_sigprocmask(SIG_BLOCK, [INT], [], 8) = 0 ioctl(0, SNDCTL_TMR_STOP or TCSETSW, {B38400 opost isig icanon echo ...}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTERM, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGQUIT, {0x1, [], 
SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGALRM, {0x4484f0, [HUP INT ILL TRAP ABRT BUS FPE USR1 SEGV USR2 PIPE ALRM TERM XCPU XFSZ VTALRM SYS], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c450, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTSTP, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTOU, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGTTIN, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x1, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 rt_sigaction(SIGWINCH, {0x448370, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x47c410, [], SA_RESTORER|SA_RESTART, 0x2b0e45a8f2f0}, 8) = 0 socket(PF_NETLINK, SOCK_RAW, 9) = 3 sendmsg(3, {msg_name(12)={sa_family=AF_NETLINK, pid=0, groups=00000000}, msg_iov(2)=[{"\25\0\0\0d\4\1\0\0\0\0\0\0\0\0\0", 16}, {"exit\0", 5}], msg_controllen=0, msg_flags=0}, 0) = 21 close(3) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 write(2, "logout\n", 7) = 7 open("/root/.bash_logout", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0644, st_size=24, ...}) = 0 read(3, "# ~/.bash_logout\n\nclear\n", 24) = 24 close(3) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 rt_sigprocmask(SIG_BLOCK, NULL, [], 8) = 0 stat(".", {st_mode=S_IFDIR|0750, st_size=12288, ...}) = 0 stat("/usr/kerberos/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/kerberos/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/local/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/local/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/bin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/sbin/clear", 0x7fff395da960) = -1 ENOENT (No such file or directory) stat("/usr/bin/clear", {st_mode=S_IFREG|0755, st_size=12712, ...}) = 0 access("/usr/bin/clear", X_OK) = 0 access("/usr/bin/clear", R_OK) = 0 stat("/usr/bin/clear", {st_mode=S_IFREG|0755, st_size=12712, ...}) = 0 access("/usr/bin/clear", X_OK) = 0 access("/usr/bin/clear", R_OK) = 0 rt_sigprocmask(SIG_BLOCK, [INT CHLD], [], 8) = 0 pipe([3, 4]) = 0 clone(child_stack=0, flags=CLONE_CHILD_CLEARTID|CLONE_CHILD_SETTID|SIGCHLD, child_tidptr=0x2b0e45db6fe0) = 23726 setpgid(23726, 23726) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 close(3) = 0 close(4) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [23726]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD], [], 8) = 0 wait4(-1, Broadcast message from root (pts/0) (Wed Nov 14 12:41:44 2012): The system is going down for system halt NOW! 
[{WIFEXITED(s) && WEXITSTATUS(s) == 0}], WSTOPPED|WCONTINUED, NULL) = 23726 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [CHLD], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [CHLD], NULL, 8) = 0 ioctl(255, SNDCTL_TMR_TIMEBASE or TCGETS, {B38400 opost isig icanon echo ...}) = 0 ioctl(255, TIOCGWINSZ, {ws_row=53, ws_col=211, ws_xpixel=0, ws_ypixel=0}) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395da634, WNOHANG|WSTOPPED|WCONTINUED, NULL) = 0 rt_sigreturn(0x11) = 0 open("/etc/bash.bash_logout", O_RDONLY) = -1 ENOENT (No such file or directory) rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 rt_sigaction(SIGINT, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, {0x448700, [], SA_RESTORER, 0x2b0e45a8f2f0}, 8) = 0 stat("/root/.bash_history", {st_mode=S_IFREG|0600, st_size=28900, ...}) = 0 open("/root/.bash_history", O_WRONLY|O_APPEND) = 3 write(3, "cd /etc/profile.d/\nls\nls -alrt\ng"..., 1120) = 1120 close(3) = 0 open("/root/.bash_history", O_RDONLY) = 3 fstat(3, {st_mode=S_IFREG|0600, st_size=30020, ...}) = 0 read(3, "history \nping g1-lga\nping g1-lga"..., 30020) = 30020 close(3) = 0 open("/root/.bash_history", O_WRONLY|O_TRUNC) = 3 write(3, "grep \"216.18\" *\nhistory \nexit\nvi"..., 27609) = 27609 close(3) = 0 kill(4294965658, SIGTERM) = 0 kill(4294965658, SIGCONT) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, [{WIFSIGNALED(s) && WTERMSIG(s) == SIGTERM}], WNOHANG|WSTOPPED|WCONTINUED, NULL) = 1638 wait4(-1, 0x7fff395dac34, WNOHANG|WSTOPPED|WCONTINUED, NULL) = -1 ECHILD (No child processes) rt_sigreturn(0x11) = 0 --- SIGCHLD (Child exited) @ 0 (0) --- wait4(-1, 0x7fff395dac34, WNOHANG|WSTOPPED|WCONTINUED, NULL) = -1 ECHILD (No child processes) rt_sigreturn(0xffffffffffffffff) = 0 rt_sigprocmask(SIG_BLOCK, [CHLD TSTP TTIN TTOU], [], 8) = 0 ioctl(255, TIOCSPGRP, [20458]) = 0 rt_sigprocmask(SIG_SETMASK, [], NULL, 8) = 0 setpgid(0, 20458) = -1 EPERM (Operation not permitted) exit_group(1) = ? Process 20458 detached [root@g2-lga ~]#

    Read the article

  • How can I avoid Windows 8.1 resetting my font size?

    - by Michael Tsang
    I am using Windows 8.1 on my laptop, which has a 15.6" screen with resolution 1366x768. I measured the screen with a ruler and calculated its DPI, which is 101. Therefore, I have set the scaling to 105%. However, when I change to an external monitor, which is a huge one with resolution 1920x1080 and DPI 93, I need to change the scaling to 97%, but when I change the DPI back and forth, my font sizes get reset. I prefer font size 14 on my title bars, message boxes and icons, and font size 13 on my palette titles, menus and tooltips. However, as my laptop screen is too small, in order to make my apps fit on screen I use font size 12 on my title bars, message boxes and icons, and font size 11 on my palette titles, menus and tooltips. I don't know why I can't resize a window to make it larger than my screen in Windows (it is possible in Kubuntu), so some parts of my apps cannot be shown with my preferred font sizes. I have tried changing both the DPI and the font size by using .reg files. Before switching to my laptop screen, I apply the following:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Control Panel\Desktop]
    "LogPixels"=dword:00000065

    [HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics]
    "CaptionFont"=hex:ef,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,bc,02,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "SmCaptionFont"=hex:f0,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,bc,02,00,\
      00,00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "MenuFont"=hex:f0,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,00,\
      00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "StatusFont"=hex:f0,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "MessageFont"=hex:ef,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "IconFont"=hex:ef,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,00,\
      00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "AppliedDPI"=dword:00000065

    Before switching to my external display, I apply this:

    Windows Registry Editor Version 5.00

    [HKEY_CURRENT_USER\Control Panel\Desktop]
    "LogPixels"=dword:0000005d

    [HKEY_CURRENT_USER\Control Panel\Desktop\WindowMetrics]
    "CaptionFont"=hex:ed,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,bc,02,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "SmCaptionFont"=hex:ee,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,bc,02,00,\
      00,00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "MenuFont"=hex:ef,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,00,\
      00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "StatusFont"=hex:ef,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "MessageFont"=hex:ed,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,\
      00,00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "IconFont"=hex:ed,ff,ff,ff,00,00,00,00,00,00,00,00,00,00,00,00,90,01,00,00,00,\
      00,00,01,00,00,05,00,53,00,65,00,67,00,6f,00,65,00,20,00,55,00,49,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,\
      00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00,00
    "AppliedDPI"=dword:0000005d

    I expect that after applying the file, the DPI settings and the font sizes will take effect at the next sign-in. However, on my laptop screen, after I applied the file and signed out and in, the DPI setting changed, but the font sizes were reset to tiny sizes, and I had to apply the same file and sign out and in again to get the correct font sizes. The situation is even worse on my external monitor. After I applied the file and signed out and in, both the DPI setting and the font sizes were reset to their default values, which were 96 DPI (while the physical DPI, resolution divided by physical size, is 93) and font size 9, which is totally unacceptable. How can I write the .reg files such that the settings can be correctly applied with a single sign-in?
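    For anyone scripting this DPI switch rather than double-clicking the .reg files, the same LogPixels value can be written from code. This is my own minimal C# sketch, not part of the original question - the program name and the command-line switch are illustrative, and it only covers the LogPixels value (0x65 = 101 DPI for the laptop panel, 0x5d = 93 DPI for the external monitor, matching the two files above):

    using Microsoft.Win32;

    class DpiSwitcher
    {
        static void Main(string[] args)
        {
            // Pick the DPI matching the .reg files above: 0x5d (93) for the
            // external monitor, 0x65 (101) for the laptop panel.
            int dpi = (args.Length > 0 && args[0] == "external") ? 0x5d : 0x65;

            using (RegistryKey desktop = Registry.CurrentUser.OpenSubKey(
                @"Control Panel\Desktop", writable: true))
            {
                desktop.SetValue("LogPixels", dpi, RegistryValueKind.DWord);
            }
            // The WindowMetrics font entries are serialized LOGFONT structures
            // and would be written the same way with RegistryValueKind.Binary.
        }
    }

    Like the .reg files, this only takes effect at the next sign-in, so it doesn't by itself solve the single-sign-in problem asked about here.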

    Read the article

  • Why is Varnish not caching?

    - by Justin
    I am troubleshooting the setup of Varnish 3.x on my Ubuntu server. I'm running Drupal 7 on two sites set up on the box via name-based vhosts. Before trying to get Varnish to play nice with Drupal, I'm trying to just get Varnish to serve a PNG from cache. Here are the headers I get from a curl -I request of the PNG file:

    HTTP/1.1 200 OK
    Server: Apache/2.2.22 (Ubuntu)
    Last-Modified: Sun, 07 Oct 2012 21:18:59 GMT
    ETag: "a57c2-3850-4cb7ea73db6c0"
    Accept-Ranges: bytes
    Content-Length: 14416
    Cache-Control: max-age=1209600
    Expires: Thu, 25 Oct 2012 22:55:14 GMT
    Content-Type: image/png
    Accept-Ranges: bytes
    Date: Thu, 11 Oct 2012 22:55:14 GMT
    X-Varnish: 1766703058
    Age: 0
    Via: 1.1 varnish
    Connection: keep-alive
    X-Varnish-Cache: MISS

    Here is the Varnish VCL file I'm using (it's a default VCL configuration designed for Drupal):

    # Default backend definition. Set this to point to your content
    # server.
    #
    backend default {
      .host = "127.0.0.1";
      .port = "8080";
    }

    # Respond to incoming requests.
    sub vcl_recv {
      # Use anonymous, cached pages if all backends are down.
      if (!req.backend.healthy) {
        unset req.http.Cookie;
      }
      # Allow the backend to serve up stale content if it is responding slowly.
      set req.grace = 6h;
      # Pipe these paths directly to Apache for streaming.
      #if (req.url ~ "^/admin/content/backup_migrate/export") {
      #  return (pipe);
      #}
      # Do not cache these paths.
      if (req.url ~ "^/status\.php$" ||
          req.url ~ "^/update\.php$" ||
          req.url ~ "^/admin$" ||
          req.url ~ "^/admin/.*$" ||
          req.url ~ "^/flag/.*$" ||
          req.url ~ "^.*/ajax/.*$" ||
          req.url ~ "^.*/ahah/.*$") {
        return (pass);
      }
      # Do not allow outside access to cron.php or install.php.
      #if (req.url ~ "^/(cron|install)\.php$" && !client.ip ~ internal) {
      #  Have Varnish throw the error directly.
      #  error 404 "Page not found.";
      #  Use a custom error page that you've defined in Drupal at the path "404".
      #  set req.url = "/404";
      #}
      # Always cache the following file types for all users. This list of extensions
      # appears twice, once here and again in vcl_fetch so make sure you edit both
      # and keep them equal.
      if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset req.http.Cookie;
      }
      # Remove all cookies that Drupal doesn't need to know about. We explicitly
      # list the ones that Drupal does need, the SESS and NO_CACHE. If, after
      # running this code we find that either of these two cookies remains, we
      # will pass as the page cannot be cached.
      if (req.http.Cookie) {
        # 1. Append a semi-colon to the front of the cookie string.
        # 2. Remove all spaces that appear after semi-colons.
        # 3. Match the cookies we want to keep, adding the space we removed
        #    previously back. (\1) is first matching group in the regsuball.
        # 4. Remove all other cookies, identifying them by the fact that they have
        #    no space after the preceding semi-colon.
        # 5. Remove all spaces and semi-colons from the beginning and end of the
        #    cookie string.
        set req.http.Cookie = ";" + req.http.Cookie;
        set req.http.Cookie = regsuball(req.http.Cookie, "; +", ";");
        set req.http.Cookie = regsuball(req.http.Cookie, ";(SESS[a-z0-9]+|SSESS[a-z0-9]+|NO_CACHE)=", "; \1=");
        set req.http.Cookie = regsuball(req.http.Cookie, ";[^ ][^;]*", "");
        set req.http.Cookie = regsuball(req.http.Cookie, "^[; ]+|[; ]+$", "");
        if (req.http.Cookie == "") {
          # If there are no remaining cookies, remove the cookie header. If there
          # aren't any cookie headers, Varnish's default behavior will be to cache
          # the page.
          unset req.http.Cookie;
        } else {
          # If there are any cookies left (a session or NO_CACHE cookie), do not
          # cache the page. Pass it on to Apache directly.
          return (pass);
        }
      }
    }

    # Set a header to track a cache HIT/MISS.
    sub vcl_deliver {
      if (obj.hits > 0) {
        set resp.http.X-Varnish-Cache = "HIT";
      } else {
        set resp.http.X-Varnish-Cache = "MISS";
      }
    }

    # Code determining what to do when serving items from the Apache servers.
    # beresp == Back-end response from the web server.
    sub vcl_fetch {
      # We need this to cache 404s, 301s, 500s. Otherwise, depending on backend but
      # definitely in Drupal's case these responses are not cacheable by default.
      if (beresp.status == 404 || beresp.status == 301 || beresp.status == 500) {
        set beresp.ttl = 10m;
      }
      # Don't allow static files to set cookies.
      # (?i) denotes case insensitive in PCRE (perl compatible regular expressions).
      # This list of extensions appears twice, once here and again in vcl_recv so
      # make sure you edit both and keep them equal.
      if (req.url ~ "(?i)\.(pdf|asc|dat|txt|doc|xls|ppt|tgz|csv|png|gif|jpeg|jpg|ico|swf|css|js)(\?.*)?$") {
        unset beresp.http.set-cookie;
      }
      # Allow items to be stale if needed.
      set beresp.grace = 6h;
    }

    # In the event of an error, show friendlier messages.
    sub vcl_error {
      # Redirect to some other URL in the case of a homepage failure.
      #if (req.url ~ "^/?$") {
      #  set obj.status = 302;
      #  set obj.http.Location = "http://backup.example.com/";
      #}
      # Otherwise redirect to the homepage, which will likely be in the cache.
      set obj.http.Content-Type = "text/html; charset=utf-8";
      synthetic {"
    <html>
    <head>
      <title>Page Unavailable</title>
      <style>
        body { background: #303030; text-align: center; color: white; }
        #page { border: 1px solid #CCC; width: 500px; margin: 100px auto 0; padding: 30px; background: #323232; }
        a, a:link, a:visited { color: #CCC; }
        .error { color: #222; }
      </style>
    </head>
    <body onload="setTimeout(function() { window.location = '/' }, 5000)">
      <div id="page">
        <h1 class="title">Page Unavailable</h1>
        <p>The page you requested is temporarily unavailable.</p>
        <p>We're redirecting you to the <a href="/">homepage</a> in 5 seconds.</p>
        <div class="error">(Error "} + obj.status + " " + obj.response + {")</div>
      </div>
    </body>
    </html>
    "};
      return (deliver);
    }

    I'm getting a MISS and Age: 0 every time. If I'm understanding correctly, this means the file isn't being returned from Varnish's cache. Is there a problem with my Varnish config?
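    Not part of the original question, but a quick client-side sanity check for this setup is to fetch the same object twice and compare the cache headers; with the vcl_deliver above, the second response should show X-Varnish-Cache: HIT and a non-zero Age. A minimal C# sketch - the URL is a placeholder:

    using System;
    using System.Net.Http;
    using System.Threading.Tasks;

    class VarnishCheck
    {
        static async Task Main()
        {
            var client = new HttpClient();
            for (int i = 1; i <= 2; i++)
            {
                // The second request should be served from cache: HIT, Age > 0.
                using (var resp = await client.GetAsync("http://example.com/misc/feed.png"))
                {
                    resp.Headers.TryGetValues("X-Varnish-Cache", out var cache);
                    Console.WriteLine("Request {0}: X-Varnish-Cache={1}, Age={2}",
                        i,
                        cache == null ? "(missing)" : string.Join(",", cache),
                        resp.Headers.Age);
                }
            }
        }
    }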

    Read the article

  • SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    - by mseika
    SOA PARTNER COMMUNITY NEWSLETTER JULY 2012

    Dear SOA partner community member,

    To provide our community members the best of our knowledge, we want your feedback on our SOA Partner community. Thus we are organizing the SOA Partner Community Survey 2012. We ask you to participate in the survey and give your valuable feedback on various areas of marketing, sales and education. To continue our successful BPM Suite, Oracle is launching together with you the Process Accelerators initiative. It's your opportunity to co-develop and market predefined processes. Oracle Fusion Applications Design Patterns are a great tool to develop your SOA or BPM solution or process accelerators. To promote your SOA & BPM Specialization we continue to offer several benefits. This month we would like to highlight our Specialization Plaques - make sure you request one for your office! Our Fusion Middleware Summer Camps are booked out; if you could not get a seat, you can attend the SOA & BPM track at Virtual Developer Day: Oracle Fusion Development. The Oracle demo systems offer two new demos: Business Driven Development based on BPM Suite & SOA Lifecycle Management.

    Jürgen Kress, Oracle SOA & BPM Partner Adoption EMEA

    NEW CONTENT

    - Community Survey
    - Process Accelerators Kit
    - Plaques SOA & BPM Specialized
    - SOA & BPM at Virtual Developer Day
    - News from our Partners & Community
    - Overview of SOA Diagnostics in 11.1.1.6
    - Business driven development (BDD) demo now available!
    - SOA Lifecycle Management
    - Oracle Fusion applications design patterns
    - Updated material by Oracle
    - Connect and Network: SOA Blogs, SOA on Facebook, SOA on LinkedIn, SOA on Twitter, Mix SOA Forum

    COMMUNITY SURVEY

    Like every year, we would like to get your feedback in our SOA Partner Community Survey 2012. Make sure that you take part to further develop our community and support our planning! It is key for us to get your feedback to prepare for the next fiscal year. Back to top

    PROCESS ACCELERATORS KIT

    Oracle is very interested to co-develop and market with you, our partners, pre-defined processes for BPM Suite. I am very happy to announce a new program called "Oracle BPM Partner Solution Catalog". This program will provide a one-stop shop for our customers looking for Oracle BPM partner solutions available in the market today. The Oracle BPM Solution Catalog will be hosted on our very popular Oracle Technology Network (OTN). To give you an idea of the scale of customer visibility, OTN today receives over 1 million hits per day from our business and developer community. We would like to invite you to list your Oracle BPM 11g solutions available today. In order to participate in this program, you need to do the following:

    1. Fill in the attached slide templates - #3 and #4 - for each Oracle BPM 11g solution you would like to list on OTN. Please add links to whitepapers, videos and references to the specific solution in the template slide. We recommend that you create a landing page on your website for these linked artifacts and just point to it from within the PowerPoint template. This will give you the flexibility to update the information as frequently as needed. If you have the particular solution in production or a reference available, please list them as well.
    2. Send the PowerPoint template slides (one set of slides for each Oracle BPM solution) to [email protected].

    In addition to having the opportunity to list your solutions on OTN for Oracle customers, you will have the chance to advertise your new wins/implementations/solutions in an Oracle-sponsored PM webinar held every quarter. 
    This program is targeted to go live by the end of summer 2012. At this point we are targeting a soft launch at the end of July 2012, so send in your BPM solution information as soon as possible. We would love to have your solution(s) listed in the "Oracle BPM Partner Solution Catalog" at the time of the launch. This will be a live repository, so you can keep adding more solutions as they become available. If you have any questions, please feel free to contact us at [email protected], Product Strategy Director, Oracle BPM, Phone +1 650.506.5486. Thank you, and we look forward to hearing from you. Oracle BPM team

    Attachments: Process Accelerators Overview.pdf, ProcessAcceleratorsDataSheet.pdf, Demos draUPK.zip & trmUPK.zip, BPM Solution repository slides.ppt

    Additional BPM material:

    - BPM Process Development Lifecycle: a document that describes the recommended approach to collaborative process modeling across business and IT tools
    - ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization) by Andrejus Baranovskis
    - BPMN process editor problems in 11.1.1.6 by Mark Nelson
    - BPM - Disable DBMS job to refresh B2B Materialized View by Mark Nelson

    For the complete kit please visit the BPM folder at our SOA Community Workspace (SOA Community membership required). For the complete presentation please visit our SOA Community Workspace (SOA Community membership required). Information is Oracle and Partner confidential! Back to top

    PLAQUES SOA & BPM SPECIALIZED

    We continue to offer you a nice SOA & BPM Specialization plaque with your logo to prove your success. If you are a SOA or BPM Specialized partner and would like to request the plaque, please send Brigitte an e-mail with the following information:

    - Partner name
    - Partner logo (preferably an eps file)
    - Partner status: gold or platinum
    - Your shipping address
    - Your Specialization: SOA or BPM

    We recommend mounting the plaque at your office reception; in addition, you can use the SOA Specialization logos on your website - download the logo: Gold & Platinum, or the BPM logos: Gold & Platinum. Back to top

    SOA & BPM AT VIRTUAL DEVELOPER DAY

    Register now for this FREE hands-on online workshop. Get up to date and learn everything you wanted to know about Oracle ADF & Fusion Development, plus live Q&A chats with Oracle technical staff. Oracle Application Development Framework (ADF) is the standards-based, strategic framework for Oracle Fusion Applications and Oracle Fusion Middleware. Oracle ADF's integration with the Oracle SOA Suite, Oracle WebCenter and Oracle BI creates a complete, productive development platform for your custom applications. Join us at this FREE virtual event and learn the latest in Fusion Development, including:

    - Is Oracle ADF development faster and simpler than Forms, Apex or .Net?
    - Mobile Application Development with ADF Mobile
    - Oracle ADF development with Eclipse
    - Oracle WebCenter Portal and ADF Development
    - Application Lifecycle Management with ADF
    - Building Process Centric Applications with ADF and BPM
    - Oracle Business Intelligence and ADF Integration
    - Live Q&A chats with Oracle technical staff

    Developer lead, manager or architect - this event has something for everyone. Don't miss this opportunity. Tuesday, July 10, 2012: 9:00 a.m. PT - 1:00 p.m. PT / 11:00 a.m. CT - 3:00 p.m. CT / 12:00 p.m. ET - 4:00 p.m. ET / 1:00 p.m. BRT - 5:00 p.m. BRT. Register online now! for this FREE event. 
Agenda: 09:00 am Opening 09:30 am Keynote: Oracle Fusion Development Track1Introduction to Fusion Development Track2What's New in Fusion Development Track3Fusion Development in the Enterprise 10:00 am Is Oracle ADF Development Faster and Simpler than Oracle Forms, APEX or .Net? Mobile Application Development with ADF Mobile Oracle WebCenter Portal and ADF Development 11:00 am Rich Web UI made simple - an ADF Faces Overview Oracle Enterprise Pack for Eclipse - ADF Development Building Process Centric Applications with ADF and BPM 12:00 noon Next Generation Controller for JSF Application Lifecycle Management for ADF Oracle Business Intelligence and ADF Integration *Hands On Lab – WebCenter and ADF Lab w/ JDeveloper - Lab materials will be provided ahead of the event to give you ample time to work through the lab and increase the productivity of the live chat sessions the day of the event. Sessions abstractsRegister online now! for this FREE event Read more on Community Events and post your comment here. Back to top NEWS FROM OUR PARTNERS AND COMMUNITY Send your tweets @soacommunity #soacommunity and follow us at http://twitter.com/soacommunity JDeveloper & ADF?Troubleshooting BPMN process editor problems in 11.1.1.6http://dlvr.it/1p0FfS SOA Community?SOA & BPM @ Virtual Developer Day: Oracle Fusion Development - July 10th 2012https://soacommunity.wordpress.com/2012/07/02/soa-bpm-virtual-developer-day- oracle-fusion-developmentjuly-10th-2012/#soacommunity #soa #bom #education orclateamsoa ?A-Team Blog #ateam: BAM design pointers - In working recently with a large Oracle customer on SOA and BAM, I discove.http://ow.ly/1kYqES SOA CommunitySOA Community Newsletter June 2012http://wp.me/p10C8u-qw SOA CommunityBPMN process editor problems in 11.1.1.6 by Mark Nelsonhttp://redstack.wordpress.com/2012/06/27/ bpmn-process-editor-problems-in-11-1-1-6 #soacommunity #bpm OTNArchBeat ?SOA Learning Library: free short, topic-focused training on Oracle SOA & BPM products | @SOACommunity http://pub.vitrue.com/NE1G Andrejus Baranovskis ?ADF 11g PS5 Application with Customized BPM Worklist Task Flow (MDS Seeded Customization)http://fb.me/1coX4r1X1 SOA CommunitySOA Learning Library provides a comprehensive curriculum for the SOA and BPM product suites https://soacommunity.wordpress.com/2012/06/27/soa-learning-library #soacommunity #soa #bpm OTNArchBeat ?A Universal JMX Client for Weblogic - Part 1: Monitoring BPEL Thread Pools in SOA 11g | Stefan Koserhttp://pub.vitrue.com/mQVZ OTNArchBeat ?BPM - Disable DBMS job to refresh B2B Materialized View | Mark Nelson http://pub.vitrue.com/3PR0Oracle SOA ?Learn how Choice Hotels Implements Innovative Google Maps Solution with #OracleSOA http://bit.ly/MTwIJ3 SOA Communitytop Tweets SOA Partner Community - June 2012 Send your tweets @soacommunity #soacommunity https://soacommunity.wordpress.com/2012/06/25/top-tweets-soa-partner-community-june-2012 Torsten Winterberg#OPITZ is pushing Oracle commitment to the next level: New Specializations done: ADF, BPM, WLS, Exadatahttp://bit.ly/KX1WVS ServiceTechSymposium ?Only 8 more days left until Super Early Bird Registration Discount expires! http://www.servicetechsymposium.com OracleBlogsSOA Management in 3 minutes - Video explainerhttp://ow.ly/1kN5pn SOA Community ?SOA, Cloud & Service Technology Symposium 2012 London - Enter Promo Code: Djmxz370https://soacommunity.wordpress.com/2012/06/22/soa-cloud-service-technology-symposium-2012-london #soasymposium #soacommunity #soa Heidi BuelowGreat course! 
w David Read RT @soacommunity: product management ADF for BPM training 5 seats left https://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity SOA Community ?product management ADF for BPM training 5 seats lefthttps://soacommunity.wordpress.com/2012/06/12/fusion-middleware-summer-campsadvanced-partner-trainings/ #bpm #soacommunity OTNArchBeat ?Oacle Fusion Applications Design Patterns Now Available For Developers | Ultan O'Broinhttp://pub.vitrue.com/UEiF OTNArchBeat ?SOA, Cloud & Service Technology Symposium 2012London - Special Oracle Discounthttp://pub.vitrue.com/8E0J SOA CommunityBecome a facebook fan of soacommunity http://www.facebook.com/soacommunity #soacommunity SOA Community ?SOA Suite HealthCare Integration Architecture https://blogs.oracle.com/SOAForHealthcare/entry/soa_suite_healthcare_integration_architecture #soacommunity #soa Andrejus Baranovskis ?Running Pre-built Virtual Machine for SOA Suite and BPM Suite 11g PS5 on Mac OS X Snow Leopard (10.6http://fb.me/vB8nO0Vg OracleBlogsPrinciples of Service-Oriented Architecture by Douwe P. van den Bos http://ow.ly/1kIcOP OTNArchBeatOracle Public Cloud Architecture | @TylerJewell http://ow.ly/bHAcL The SOA Network ?Business Process Management, Service-Oriented Architecture, and Web 2.0: Business Transformation or.http://bit.ly/LBgREL #ITNews #SOA OracleBlogs ?Oracle SOA Foundation Practitioner Certificationhttp://ow.ly/1kGYYg Frank Nimphius ?Learn Advanced ADF. ORACLE Fusion Middleware Summer Camps in Lisbon - July 9th - 13thhttp://bit.ly/KGCl3i SOA CommunityTransform Your Application Integration with Best Practices from Oracle Customershttps://blogs.oracle.com/SOA/entry/transform_your_application_integration_with #soacommunity #soa #bpm Simone GeibWhat you always wanted to know about #oraclesoa diagnostics: Shawn Bailey, Overview of SOA Diagnostics in 11.1.1.6,http://ow.ly/bxK0M Oracle SOA ?Save the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to #oraclesoa http://bit.ly/LsNDGl OTNArchBeat ?New VirtualBox images for Oracle SOA Suite & Oracle BPM Suite 11.1.1.6.0http://ow.ly/bwDAl OracleBlogs ?Process development lifecycle in Oracle BPM 11g http://ow.ly/1ktesY Daniel AmadeiNew post: Oracle BPEL 11g Message Delivery & Recovery.http://amadei.com.br/blog/index.php /oracle-bpel-11g-message-delivery SOA Community ?Sending out the June edition of the #soacommunity newsletter - read it or become a member http://www.oracle.com/goto/emea/soa!#soa #bpm Arun Pareek ?For the past six months Ahmed Aboulnaga and me have been working on Oracle SOA Suite 11g Administrator's Handbook.http://lnkd.in/CAvpUQ SOA CommunitySun shine all day no clouds - solar eclipse is over... #sunshine #cloud http://www.infoq.com/presentations/Swarm-Computing Michel SchildmeijerWatch my blog Oracle Service Bus 11g: listing projects and services with WLST - part 1 http://lnkd.in/B7f3GQ @TITAN_GS @wlscommunity OTNArchBeatBook Review: Oracle Application Integration Architecture (AIA) Foundation Pack 11gR1: Essentials | Rajesh Rahejahttp://ow.ly/bn2cc OTNArchBeat ?Driving from Business Architecture to Business Process Services | @vghariharan http://ow.ly/bn5UB OTNArchBeat ?SOA Analysis within the Department of Defense Architecture Framework (DoDAF) 2.0 - Part II | Dawit Lessanu http://ow.ly/bn6sX Simone Geib ?Contact me directly for ideas how to improvehttp://bit.ly/advancedsoasuite and additional posts, presentations, white papers, ... 
#soasuite Simone Geib ?#soasuite advanced OTN page has become too cluttered. Broke it into separate pages to start with. http://bit.ly/advancedsoasuite OracleBlogs ?June Webcast: SOA Gateway Implementation and Troubleshooting (2 sessions) http://ow.ly/1kbRFA ServiceTechSymposium ?New session just posted to calendar: "NoSQL for Data Services, Data Virtualization & Big Data" by Guido Schmutz, Trivadis AG ://ow.ly/bjjOeDebra Lilley ?looks good - real proof people are using the apps ! RT @fteter: Very cool Fusion Applications Help site: http://bit.ly/L3nvOR #FusionApps demed ?rapid proliferation of cloud computing will drive convergence of SOA and cloud paradigms" http://ovum.com/2012/05/18/soa-paves-the-way-for-cloud/ SOA CommunityMiddleware Oracle Excellence Awards 2012-HAPPY NEW YEAR! https://soacommunity.wordpress.com/2012/05/31/middleware-oracle-excellence-awards-2012happy-new-year/ #soacommunity #opn #opnaward #specialization #oracle SOA CommunityHappy New Year #soacommunity thanks for the business! Time for a drink http://pic.twitter.com/zkK08KWB OTNArchBeat ?Who should ‘own’ the Enterprise Architecture? | Michael Glas http://bit.ly/K0ge0Q SOA Communitytop Tweets SOA Partner Community &ndash; May 2012 http://wp.me/p10C8u-pP ServiceTechSymposiumNew session just posted to Symposium calendar: "Elastic SOA in the Cloud" by Steve Millidge, C2B2 Consulting http://www.servicetechsymposium.com/agenda2012.php #elastic_soa_in_the_cloud orclateamsoa ?A-Team Blog #ateam: How to Set JVM Parameters in Oracle SOA 11Ghttp://ow.ly/1k2cnl ServiceTechSymposium ?New session just posted to Symposium calendar: "SOA Governance at EDP: A Global Energy Company" by Manuel Rosa, Linkhttp://www.servicetechsymposium.com/agenda2012.php#soa_governance_at_edp SOA Community ?VirtualBox image SOA Suite & BPM Suite 11.1.1.6.0&ndash;Your feedback?http://wp.me/p10C8u-qh Oracle MiddlewareSave the date: Jun 21 10AM, SOA & BPM Customer Insight Series. Hear how Choice Hotels went from legacy to#oraclesoa http://bit.ly/LU1y5N OTNArchBeat ?Goodbye, Silos. Hello SOA. | @stephanieoverbyhttp://pub.vitrue.com/NJJO SOA CommunityBPM Standard Edition - to start your BPM project http://wp.me/p10C8u-qj Please feel free to send us your news! And add your blog to our SOA blog wiki. Back to top OVERVIEW OF SOA DIAGNOSTICS IN 11.1.1.6 What tools are available for diagnosing SOA Suite issues? There are a variety of tools available to help you and Support diagnose SOA Suite issues in 11g but it can be confusing as to which tool is appropriate for a particular situation and what their relationships are. This blog post will introduce the various tools and attempt to clarify what each is for and how they are related. Let's first list the tools we'll be addressing: RDA: Remote Diagnostic Agent DFW: Diagnostic Framework Selective Tracing DMS: Dynamic Monitoring Service ODL: Oracle Diagnostic Logging ADR: Automatic Diagnostics Repository ADRCI: Automatic Diagnostics Repository Command Interpreter WLDF: WebLogic Diagnostic Framework This overview is not mean to be a comprehensive guide on using all of these tools, however, extensive reference materials are included that will provide many more details on their execution. Another point to note is that all of these tools are applicable for Fusion Middleware as a whole but specific products may or may not have implemented features to leverage them. A couple of the tools have a WebLogic Scripting Tool or 'WLST' interface. 
    WLST is a command interface for executing pre-built functions and custom scripts against a domain. A detailed WLST tutorial is beyond the scope of this post, but you can find general information here. There are more specific resources in the sections below. In this post, when we refer to 'Enterprise Manager' or 'EM' we are referring to Enterprise Manager Fusion Middleware Control. Read the full blog post here. Read more on Oracle and post your comment here. Back to top

    BUSINESS DRIVEN DEVELOPMENT (BDD) DEMO NOW AVAILABLE!

    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. DSS is pleased to announce the availability of the demo "Business Driven Development". This innovative demonstration uses a case-study approach to show business users how they can easily streamline their business processes - delivering greater efficiency, agility, visibility and collaboration with Oracle BPM and WebCenter. The BDD demonstration uses a case-study-based approach to highlight a business problem at a fictional company, Avitek Corporation, and uses Oracle BPM and Oracle WebCenter to solve the business problem. This holistic approach has specifically been used to appeal to a non-technical business analyst user. This demo is NOT focused on product features, but aims to guide users through a complete BPM lifecycle. The scenario is based on improving a simple order process (scenario details are in the demo script). Avitek Corporation is suffering from a manual, email-driven ordering process. Sales reps don't know where the customer orders are stuck (no visibility) and finance users are unable to manually approve every order (no automation). There are several areas where this process can be improved with Business Process Management technology. This demo shows how improving the following areas will significantly help resolve the business problems Avitek Corporation is facing. Areas for improvement include:

    - Utilizing BPM for process management, rather than an unregulated, email-based process.
    - Utilizing automated services, rather than requiring a human to key into a system. For example, Finance checking the customer's credit rating is something that could be automated.
    - Centralizing business rules that can be integrated into a business process, rather than requiring a human to process them. For example, Finance must determine when orders can be automatically approved.
    - Providing insight and visibility into the process. For example, a sales rep needs to know the status of their customer's orders.

    The BDD demo uses the following products: Oracle BPM Suite 11g PS4FP, Oracle WebCenter 11g PS4FP (for Process Spaces), Oracle Business Activity Monitoring 11g and Oracle Database 11g. Back to top

    SOA LIFECYCLE MANAGEMENT

    For access to the Oracle demo systems please visit OPN and talk to your Partner Expert. We are pleased to announce the availability of the SOA Management demo that showcases some of the key provisioning and lifecycle management capabilities of SOA Management Pack Enterprise Edition (EE). This demo specifically focuses on some of the lifecycle management solutions for Oracle SOA Suite and Oracle Service Bus (OSB). Demo highlights - the demo showcases the following capabilities:

    - Provisioning of SOA Composites
    - Provisioning of OSB Projects
    - Provisioning SOA and OSB artifacts in a future maintenance window

    Back to top

    ORACLE FUSION APPLICATIONS DESIGN PATTERNS

    The Oracle Fusion Applications user experience design patterns are published! 
    These new, reusable usability solutions and best practices, which join the Oracle dashboard patterns and guidelines already available online, are used by Oracle to artfully bring to life a new standard in the user experience, or UX, of enterprise applications. Now the Oracle applications development community can benefit from the science behind the Oracle Fusion Applications user experience, too. These Oracle Fusion Applications UX Design Patterns, or blueprints, enable Oracle applications developers and system implementers everywhere to leverage professional usability insight when:

    - tailoring an Oracle Fusion application,
    - creating coexistence solutions that existing users will be delighted with, thus enabling graceful user transitions to Oracle Fusion Applications down the road, or
    - designing exciting, new, highly usable applications in the cloud or on-premise.

    Based on the Oracle Application Development Framework (ADF) components, the Oracle Fusion Applications patterns and guidelines are proven with real users and in the Applications UX usability labs, so you can get right to work coding productivity-enhancing designs that provide an advantage for your entire business. What's the best way to get started? We've made that easy, too. The Design Filter Tool (DeFT) selects the best pattern for your user type and task. Simply adapt your selection for your own task flow and content, and you're on your way to a really great applications user experience. More Oracle applications design patterns and training are coming your way in the future. To provide feedback on the sets that are currently available, let me know in the comments! Read more on Fusionapps and post your comment here. Back to top

    UPDATED ORACLE MATERIAL

    - Integrated SOA Gateway Documentation - Implementation Guide | Developer's Guide
    - Webcast Series: Oracle's SOA and Oracle Business Process Management Solutions (Choice Hotels, Eaton, Farmers Insurance)
    - BAM design pointers by Kavitha Srinivasan

    Seeking Oracle Fusion Middleware Go Live Stories
    Oracle Fusion Middleware product management is looking for recent go-live stories to share with the Oracle sales team, sales consulting, product management and other internal groups. Customer contact details may remain anonymous. Your successful implementation will be featured in a quarterly report. The chance to present on an internal webcast is also available. Contact Maria Forney ([email protected]) if you have a noteworthy implementation success story. This is a good opportunity for partners interested in showcasing Oracle Fusion Middleware implementations and gaining more exposure within Oracle.

    Performance tuning resources, all in one place (docs, blogs, white papers, presentations): http://bit.ly/soa_resources Back to top

    HAVE YOU MISSED OUR LAST SOA PARTNER COMMUNITY WEBCASTS?

    UPK Webcast & Business Driven Application Management & BPM11g & Application Grid & GoldenGate & Fusion Middleware Pricing & OC4J to WebLogic & Next Generation SOA & Fusion Middleware in Utility & Fusion Middleware in Communications & Fusion Middleware in Public Services & Fusion Middleware in Financial Services

    Please check your local OPN trainings calendar for additional training dates and locations. 
    Back to top

    SOA PARTNER COMMUNITY CALENDAR

    On-Demand Trainings:

    Event Name | Language | Type
    SOA Virtual Developers Day | English | Tech

    In-Class Trainings:

    Date | Event name | Location / Country | Contact person | Type
    09-13.07.2012 | BPM Suite 11g advanced training by David Read | Lisbon, Portugal | Jürgen Kress | Tech
    09-13.07.2012 | ADF 11g advanced training by Grant Ronald and Frank Nimphius | Lisbon, Portugal | Jürgen Kress | Tech
    09-13.07.2012 | WebCenter Portal advanced training by Stefan Krantz and Angelo Santagata | Lisbon, Portugal | Jürgen Kress | Tech
    10.07.2012 | Fusion Middleware Virtual Developer Day | Online | OTN | Tech
    10-12.07.2012 | WebLogic 12c training by Cosmin Tudor | Lisbon, Portugal | Jürgen Kress | Tech
    16-18.07.2012 | SOA Suite 11g advanced training by Niall Commiskey | Munich, Germany | Jürgen Kress | Tech
    16-18.07.2012 | ADF for BPM Suite 11g advanced training by David Read | Munich, Germany | Jürgen Kress | Tech
    16-18.07.2012 | WebCenter Sites 11g advanced training by Product Management | Munich, Germany | Jürgen Kress | Tech
    17-20.07.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
    23-26.07.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
    29-31.08.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
    02-05.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
    15-18.10.2012 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
    28-30.11.2012 | Oracle AIA 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
    11-14.12.2012 | Oracle BPM 11g Implementation Bootcamp | Live Virtual Class | Oracle University | Tech
    20-22.2.2013 | Oracle AIA 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
    14-17.1.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech
    15-18.3.2013 | Oracle BPM 11g Implementation Bootcamp | Utrecht, Netherlands | Oracle University | Tech

    Please check your local OPN Training Calendar for additional training and locations here. Back to top

    SOASCHOOL.COM - SOA CERTIFIED PROFESSIONAL (SOACP) PROGRAM

    The SOASchool.com - SOA Certified Professional (SOACP) program is dedicated to excellence in the field of SOA and service-oriented computing. Through a series of seasoned course modules and exams, IT professionals have the opportunity to obtain a number of different certifications to recognize their accomplishment of gaining "project ready" SOA proficiency. This comprehensive and strictly vendor-neutral program was developed in cooperation with best-selling SOA author Thomas Erl and several major SOA organizations and academic institutions. Through the involvement of the SOA Education Committee, course contents and certification requirements are constantly reviewed and revised to stay current with developments in the service-oriented computing industry. The program is currently comprised of 12 course modules and 5 certifications and is expanding to 18 course modules and 8 certifications throughout 2009. For more information, visit www.soaschool.com and www.soacp.com. Blog Twitter LinkedIn Mix Forum Wiki Back to top

    YOUR CONTENT ON THE NEWSLETTER AND ON THE SOA COMMUNITY PORTAL

    Publishing Your Stories
    We would like to invite our partners to publish information in the newsletter or on our SOA Community portal. We are especially looking for your real-life experience with our SOA technology. Please send your documents to Jürgen Kress. We look forward to getting your suggestions! 
    Back to top

    SOA DISCUSSION FORUM BECOMES INTERACTIVE AT THE SOA COMMUNITY!

    Do you want to chat to experts, including partners and Oracle SOA Product Development? Do you want to get the latest information about our SOA solutions and events? Attend our private online SOA Discussion Forum at OTN. Please send your OTN forums user name to Brigitte Felisaz. You must be a registered user to access the SOA Discussion Forum. Back to top

    INVITE YOUR COLLEAGUES TO JOIN THE SOA COMMUNITY

    Please feel free to invite your colleagues to join the SOA Community and to participate in the SOA Assessment tests. To register, please log in to the Oracle PartnerNetwork and go to: www.oracle.com/goto/emea/soa For any questions on the above or concerning SOA and Oracle in general, please contact the Oracle EMEA Alliances & Channels SOA Team.

    Best regards,
    Oracle EMEA SOA Team
    Jürgen Kress
    SOA Partner Adoption EMEA
    Tel. +49 89 1430 1479
    E-Mail: [email protected]

    Read the article

  • Top things web developers should know about the Visual Studio 2013 release

    - by Jon Galloway
    ASP.NET and Web Tools for Visual Studio 2013 Release Notes

    Summary for lazy readers:

    - Visual Studio 2013 is now available for download on the Visual Studio site and on MSDN subscriber downloads
    - Visual Studio 2013 installs side by side with Visual Studio 2012 and supports round-tripping between Visual Studio versions, so you can try it out without committing to a switch
    - Visual Studio 2013 ships with the new version of ASP.NET, which includes ASP.NET MVC 5, ASP.NET Web API 2, Razor 3, Entity Framework 6 and SignalR 2.0
    - The new ASP.NET release focuses on One ASP.NET, so core features and web tools work the same across the platform (e.g. adding ASP.NET MVC controllers to a Web Forms application)
    - New core features include new templates based on Bootstrap, a new scaffolding system, and a new identity system
    - Visual Studio 2013 is an incredible editor for web files, including HTML, CSS, JavaScript, Markdown, LESS, CoffeeScript, Handlebars, Angular, Ember, Knockout, etc.

    Top links:

    - Visual Studio 2013 content on the ASP.NET site is in the standard new releases area: http://www.asp.net/vnext
    - ASP.NET and Web Tools for Visual Studio 2013 Release Notes
    - Short intro videos on the new Visual Studio web editor features from Scott Hanselman and Mads Kristensen
    - Announcing release of ASP.NET and Web Tools for Visual Studio 2013, the post on the official .NET Web Development and Tools Blog
    - Scott Guthrie's post: Announcing the Release of Visual Studio 2013 and Great Improvements to ASP.NET and Entity Framework

    Okay, for those of you who are still with me, let's dig in a bit.

    Quick web dev notes on downloading and installing Visual Studio 2013

    I found Visual Studio 2013 to be a pretty fast install. According to Brian Harry's release post, installing over pre-release versions of Visual Studio is supported. I've installed the release version over pre-release versions, and it worked fine. If you're only going to be doing web development, you can speed up the install if you just select Web Developer Tools. Of course, as a good Microsoft employee, I'll mention that you might also want to install some of those other features, like the Store apps for Windows 8 and the Windows Phone 8.0 SDK, but they do download and install a lot of other stuff (e.g. the Windows Phone SDK sets up Hyper-V and downloads several GBs of VMs). So if you're planning just to do web development for now, you can pick just the Web Developer Tools and install the other stuff later. If you've got a fast internet connection, I recommend using the web installer instead of downloading the ISO. The ISO includes all the features, whereas the web installer just downloads what you're installing.

    Visual Studio 2013 development settings and color theme

    When you start up Visual Studio, it'll prompt you to pick some defaults. These are totally up to you - whatever suits your development style - and you can change them later. As I said, these are completely up to you. I recommend either the Web Development or Web Development (Code Only) settings. The only real difference is that Code Only hides the toolbars, and you can switch between them using Tools / Import and Export Settings / Reset.

    Web Development settings
    Web Development (code only) settings

    Usually I've just gone with Web Development (code only) in the past because I just want to focus on the code, although the Standard toolbar does make it easier to switch default web browsers. More on that later.

    Color theme

    Sigh. 
    Okay, everyone's got their favorite colors. I alternate between Light and Dark depending on my mood, and I personally like how the low contrast on the window chrome in those themes puts the emphasis on my code rather than the tabs and toolbars. I know some people got pretty worked up over that, though, and wanted the blue theme back. I personally don't like it - it reminds me of ancient versions of Visual Studio that I don't want to think about anymore. So here's the thing: if you install Visual Studio Ultimate, it defaults to Blue. The other versions default to Light. If you use Blue, I won't criticize you - out loud, that is. You can change themes really easily - either Tools / Options / Environment / General, or the smart way: ctrl+q for quick launch, then type Theme and hit enter.

    Signing in

    During the first run, you'll be prompted to sign in. You don't have to - you can click the "Not now, maybe later" link at the bottom of that dialog. I recommend signing in, though. It's not hooked in with licensing or tracking the kind of code you write to sell you components. It is doing good things, like syncing your Visual Studio settings between computers. More about that here. So, you don't have to, but I sure do.

    Overview of shiny new things in ASP.NET land

    There are a lot of good new things in ASP.NET. I'll list some of my favorites here, but you can read more on the ASP.NET site.

    One ASP.NET

    You've heard us talk about this for a while. The idea is that options are good, but choice can be a burden. When you start a new ASP.NET project, why should you have to make a tough decision - with long-term consequences - about how your application will work? If you want to use ASP.NET Web Forms, but have the option of adding in ASP.NET MVC later, why should that be hard? It's all ASP.NET, right? Ideally, you'd just decide that you want to use ASP.NET to build sites and services, and you could use the appropriate tools (the green blocks below) as you needed them. So, here it is. When you create a new ASP.NET application, you just create an ASP.NET application. Next, you can pick from some templates to get you started... but these are different. They're not "painful decision" templates, they're just some starting pieces. And, most importantly, you can mix and match. I can pick a "mostly" Web Forms template, but include MVC and Web API folders and core references. If you've tried to mix and match in the past, you're probably aware that it was possible, but not pleasant. ASP.NET MVC project files contained special project type GUIDs, so you'd only get controller scaffolding support in a Web Forms project if you manually edited the csproj file. Features in one stack didn't work in others. Project templates were painful choices. That's no longer the case. Hooray! I just did a demo in a presentation last week where I created a new Web Forms + MVC + Web API site, built a model, scaffolded MVC and Web API controllers with EF Code First, added data in the MVC view, viewed it in Web API, then added a GridView to the Web Forms Default.aspx page and bound it to the model. In about 5 minutes. Sure, it's a simple example, but it's great to be able to share code and features across the whole ASP.NET family.

    Authentication

    In the past, authentication was built into the templates. So, for instance, there was an ASP.NET MVC 4 Intranet Project template which created a new ASP.NET MVC 4 application that was preconfigured for Windows Authentication. 
    All of that authentication stuff was built into each template, so they varied between the stacks, and you couldn't reuse them. You didn't see a lot of changes to the authentication options, since they required big changes to a bunch of project templates. Now, the new project dialog includes a common authentication experience. When you hit the Change Authentication button, you get some common options that work the same way regardless of the template or reference settings you've made. These options work on all ASP.NET frameworks, and all hosting environments (IIS, IIS Express, or OWIN for self-host). The default is Individual User Accounts. This is the standard "create a local account, using username / password or OAuth" thing; however, it's all built on the new Identity system. More on that in a second. The one setting that has some configuration to it is Organizational Accounts, which lets you configure authentication using Active Directory, Windows Azure Active Directory, or Office 365.

    Identity

    There's a new identity system. We've taken the best parts of the previous ASP.NET Membership and Simple Identity systems, rolled in a lot of feedback and made big enhancements to support important developer concerns like unit testing and extensibility. I've written long posts about ASP.NET identity, and I'll do it again. Soon. This is not that post. The short version is that I think we've finally got just the right Identity system. Some of my favorite features:

    - There are simple, sensible defaults that work well - you can File / New / Run / Register / Login, and everything works.
    - It supports standard username / password as well as external authentication (OAuth, etc.).
    - It's easy to customize without having to re-implement an entire provider. It's built using pluggable pieces, rather than one large monolithic system.
    - It's built using interfaces like IUser and IRole that allow for unit testing, dependency injection, etc.
    - You can easily add user profile data (e.g. URL, twitter handle, birthday). You just add properties to your ApplicationUser model and they'll automatically be persisted (there's a quick sketch of this below).
    - Complete control over how the identity data is persisted. By default, everything works with Entity Framework Code First, but it's built to support changes from small (modify the schema) to big (use another ORM, store your data in a document database or in the cloud or in XML or in the EXIF data of your desktop background or whatever).
    - It's configured via OWIN. More on OWIN and Katana later, but the fact that it's built using OWIN means it's portable.

    You can find out more in the Authentication and Identity section of the ASP.NET site (and lots more content will be going up there soon).

    New Bootstrap based project templates

    The new project templates are built using Bootstrap 3. Bootstrap (formerly Twitter Bootstrap) is a front-end framework that brings a lot of nice benefits:

    - It's responsive, so your projects will automatically scale to device width using CSS media queries. For example, menus are full size on a desktop browser, but on narrower screens you automatically get a mobile-friendly menu.
    - The built-in Bootstrap styles make your standard page elements (headers, footers, buttons, form inputs, tables etc.) look nice and modern.
    - Bootstrap is themeable, so you can reskin your whole site by dropping in a new Bootstrap theme. Since Bootstrap is pretty popular across the web development community, this gives you a large and rapidly growing variety of templates (free and paid) to choose from. 
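    A quick aside on the Identity profile-data bullet above - here's what that looks like in code. This sketch is mine, not from the post; it assumes the default template's IdentityModels.cs, where ApplicationUser derives from IdentityUser, and the property names are illustrative:

    using System;
    using Microsoft.AspNet.Identity.EntityFramework;

    public class ApplicationUser : IdentityUser
    {
        // Extra profile properties; EF Code First persists these automatically,
        // with no provider rewrite required.
        public string Url { get; set; }
        public string TwitterHandle { get; set; }
        public DateTime? Birthday { get; set; }
    }

    Okay, back to Bootstrap.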
    Bootstrap also includes a lot of very useful things: components (like progress bars and badges), useful glyphicons, and some jQuery plugins for tooltips, dropdowns, carousels, etc. Here's a look at how the responsive part works. When the page is full screen, the menu and header are optimized for a wide screen display. When I shrink the page down (this is all based on page width, not useragent sniffing) the menu turns into a nice mobile-friendly dropdown. For a quick example, I grabbed a new free theme off bootswatch.com. For simple themes, you just need to download the bootstrap.css file and replace the /content/bootstrap.css file in your project. Now when I refresh the page, I've got a new theme.

    Scaffolding

    The big change in scaffolding is that it's one system that works across ASP.NET. You can create a new Empty Web project or Web Forms project and you'll get the Scaffold context menus. For release, we've got MVC 5 and Web API 2 controllers. We had a preview of Web Forms scaffolding in the preview releases, but they weren't fully baked for RTM. Look for them in a future update, expected pretty soon. This scaffolding system wasn't just changed to work across the ASP.NET frameworks, it's also built to enable future extensibility. That's not in this release, but should also hopefully be out soon.

    Project Readme page

    This is a small thing, but I really like it. When you create a new project, you get a Project_Readme.html page that's added to the root of your project and opens in the Visual Studio built-in browser. I love it. A long time ago, when you created a new project we just dumped it on you and left you scratching your head about what to do next. Not ideal. Then we started adding a bunch of Getting Started information to the new project templates. That told you what to do next, but you had to delete all of that stuff out of your website. It doesn't belong there. Not ideal. This is a simple HTML file that's not integrated into your project code at all. You can delete it if you want. But, it shows a lot of helpful links that are current for the project you just created. In the future, if we add new wacky project types, they can create readme docs with specific information on how to do appropriately wacky things. Side note: I really like that they used the internal browser in Visual Studio to show this content rather than popping open an HTML page in the default browser. I hate that. It's annoying. If you're doing that, I hope you'll stop. What if some unnamed person has 40 or 90 tabs saved in their browser session? When you pop open your "Thanks for installing my Visual Studio extension!" page, all eleventy billion tabs start up and I wish I'd never installed your thing. Be like these guys and pop Visual Studio specific HTML docs in the Visual Studio browser.

    ASP.NET MVC 5

    The biggest change with ASP.NET MVC 5 is that it's no longer a separate project type. It integrates well with the rest of ASP.NET. In addition to that and the other common features we've already looked at (Bootstrap templates, Identity, authentication), here's what's new for ASP.NET MVC.

    Attribute routing

    ASP.NET MVC now supports attribute routing, thanks to a contribution by Tim McCall, the author of http://attributerouting.net. With attribute routing you can specify your routes by annotating your actions and controllers. This supports some pretty complex, customized routing scenarios, and it allows you to keep your route information right with your controller actions if you'd like. 
Here's a controller that includes an action whose method name is Hiding, but I've used attribute routing to configure it to /spaghetti/with-nesting/where-is-waldo:

public class SampleController : Controller
{
    [Route("spaghetti/with-nesting/where-is-waldo")]
    public string Hiding()
    {
        return "You found me!";
    }
}

I enable that in my RouteConfig.cs, and I can use it in conjunction with my other MVC routes like this:

public class RouteConfig
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        routes.MapMvcAttributeRoutes();

        routes.MapRoute(
            name: "Default",
            url: "{controller}/{action}/{id}",
            defaults: new { controller = "Home", action = "Index", id = UrlParameter.Optional }
        );
    }
}

You can read more about Attribute Routing in ASP.NET MVC 5 here.

Filter enhancements

There are two new additions to filters: Authentication Filters and Filter Overrides. Authentication filters are a new kind of filter in ASP.NET MVC that run prior to authorization filters in the ASP.NET MVC pipeline and allow you to specify authentication logic per-action, per-controller, or globally for all controllers. Authentication filters process credentials in the request and provide a corresponding principal. Authentication filters can also add authentication challenges in response to unauthorized requests. Override filters let you change which filters apply to a given action method or controller. Override filters specify a set of filter types that should not be run for a given scope (action or controller). This allows you to configure filters that apply globally but then exclude certain global filters from applying to specific actions or controllers.

ASP.NET Web API 2

ASP.NET Web API 2 includes a lot of new features.

Attribute Routing

ASP.NET Web API supports the same attribute routing system that's in ASP.NET MVC 5. You can read more about the Attribute Routing features in Web API in this article.

OAuth 2.0

ASP.NET Web API picks up OAuth 2.0 support, using security middleware running on OWIN (discussed below). This is great for features like authenticated Single Page Applications.

OData Improvements

ASP.NET Web API now has full OData support. That required adding some of the most powerful operators: $select, $expand, $batch and $value. You can read more about OData operator support in this article by Mike Wasson.

Lots more

There's a huge list of other features, including CORS (cross-origin resource sharing), IHttpActionResult, IHttpRequestContext, and more. I think the best overview is in the release notes.

OWIN and Katana

I've written about OWIN and Katana recently. I'm a big fan. OWIN is the Open Web Interface for .NET. It's a spec, like HTML or HTTP, so you can't install OWIN. The benefit of OWIN is that it's a community specification, so anyone who implements it can plug into the ASP.NET stack, either as middleware or as a host. Katana is the Microsoft implementation of OWIN. It leverages OWIN to wire up things like authentication, handlers, modules, IIS hosting, etc., so ASP.NET can host OWIN components and Katana components can run in someone else's OWIN implementation. Howard Dierking just wrote a cool article in MSDN magazine describing Katana in depth: Getting Started with the Katana Project. He had an interesting example showing an OWIN based pipeline which leveraged SignalR, ASP.NET Web API and NancyFx components in the same stack. If this kind of thing makes sense to you, that's great. If it doesn't, don't worry, but keep an eye on it. You're going to see some cool things happen as a result of ASP.NET becoming more and more pluggable.
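To make the middleware idea concrete, here's a minimal sketch of an OWIN pipeline built on Katana's Owin/Microsoft.Owin packages. The logging middleware and the message text are my own illustration, not from the article:

using System;
using System.Threading.Tasks;
using Owin;

public class Startup
{
    public void Configuration(IAppBuilder app)
    {
        // Inline middleware: log each request, then hand off to the next component.
        app.Use(async (context, next) =>
        {
            Console.WriteLine("{0} {1}", context.Request.Method, context.Request.Path);
            await next();
        });

        // Terminal middleware: write the response and end the pipeline.
        app.Run(async context =>
        {
            await context.Response.WriteAsync("Hello from an OWIN pipeline!");
        });
    }
}

Hosted under IIS this just needs the Microsoft.Owin.Host.SystemWeb package; self-hosted, a call like WebApp.Start<Startup>("http://localhost:9000") from Microsoft.Owin.Hosting does the same job. That host-agnosticism is exactly what the spec is after.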
Visual Studio Web Tools

Okay, this stuff's just crazy. Visual Studio has been adding some nice web dev features over the past few years, but they've really cranked it up for this release. Visual Studio is by far my favorite code editor for all web files: CSS, HTML, JavaScript, and lots of popular libraries. Stop thinking of Visual Studio as a big editor that you only use to write back-end code. Stop editing HTML and CSS in Notepad (or Sublime, Notepad++, etc.). Visual Studio starts up in under 2 seconds on a modern computer with an SSD. Misspelling HTML attributes or your CSS classes or jQuery or Angular syntax is stupid. It doesn't make you a better developer, it makes you a silly person who wastes time.

Browser Link

Browser Link is a real-time, two-way connection between Visual Studio and all connected browsers. It's only attached when you're running locally, in debug, but it applies to any and all connected browsers, including emulators. You may have seen demos that showed the browsers refreshing based on changes in the editor, and I'll agree that's pretty cool. But it's really just the start. It's a two-way connection, and it's built for extensibility. That means you can write extensions that push information from your running application (in IE, Chrome, a mobile emulator, etc.) back to Visual Studio. Mads and team have shown off some demonstrations where they enabled edit mode in the browser, which updated the source HTML back in Visual Studio. It's also possible to look at how the rendered HTML performs, check for compatibility issues, watch for unused CSS classes - the sky's the limit.

New HTML editor

The previous HTML editor had a lot of old code that didn't allow for improvements. The team rewrote the HTML editor to take advantage of the new(ish) extensibility features in Visual Studio, which then allowed them to add in all kinds of features - things like CSS class and ID IntelliSense (so you type class="" and get a list of classes and IDs for your project), smart indent based on how your document is formatted, JavaScript reference auto-sync, etc. Here's a 3 minute tour from Mads Kristensen.

Lots more Visual Studio web dev features

That's just a sampling - there's a ton of great features for JavaScript editing, CSS editing, publishing, and Page Inspector (which shows real-time rendering of your page inside Visual Studio). Here are some more short videos showing those features.

Lots, lots more

Okay, that's just a summary, and it's still quite a bit. Head on over to http://asp.net/vnext for more information, and download Visual Studio 2013 now to get started!

    Read the article

  • Using BPEL Performance Statistics to Diagnose Performance Bottlenecks

    - by fip
Tuning the performance of Oracle SOA 11G applications can be challenging. Because SOA is a platform for building composite applications that connect many applications and "services", when the overall performance is slow the bottleneck could be anywhere in the system: the applications/services that SOA connects to, the infrastructure database, or the SOA server itself. Quickly identifying the bottleneck becomes crucial in tuning the overall performance. Fortunately, the BPEL engine in Oracle SOA 11G (and 10G, for that matter) collects BPEL Engine Performance Statistics, which show the latencies of low-level BPEL engine activities. The BPEL engine performance statistics can make it a bit easier for you to identify the performance bottleneck. Although the BPEL engine performance statistics are always available, the access to and interpretation of them are somewhat obscure in the early and current (PS5) 11G versions. This blog attempts to offer instructions that help you to enable, retrieve and interpret the performance statistics, before future versions provide a more pleasant user experience.

Overview of BPEL Engine Performance Statistics

SOA BPEL has a feature of collecting some performance statistics and storing them in memory. One MBean attribute, StatLastN, configures the size of the memory buffer to store the statistics. This memory buffer is a "moving window", in the sense that old statistics will be flushed out by new ones if the amount of data exceeds the buffer size. Since the buffer size is limited by StatLastN, the impact of statistics collection on performance is minimal. By default StatLastN=-1, which means no collection of performance data.

Once the statistics are collected in the memory buffer, they can be retrieved via another MBean: oracle.as.soainfra.bpel:Location=[Server Name],name=BPELEngine,type=BPELEngine. My friend in Oracle SOA development wrote this simple 'bpelstat' web app that looks up and retrieves the performance data from the MBean and displays it in a human readable form. It does not have a beautiful UI but it is fairly useful. Although in Oracle SOA 11.1.1.5 onwards the same statistics can be viewed via a more elegant UI under "request break down" at EM -> SOA Infrastructure -> Service Engines -> BPEL -> Statistics, some unsophisticated minds like mine may still prefer the simplicity of the 'bpelstat' JSP. One thing that the simple JSP does do well is that you can save the page and send it to someone to analyze further.

Following are the instructions for installing and invoking the BPEL statistic JSP. My friend in SOA Development will soon blog about interpreting the statistics. Stay tuned.

Step 1: Enable BPEL Engine Statistics for Each SOA Server via Enterprise Manager

First you need to set StatLastN to some number as a way to enable the collection of BPEL Engine Performance Statistics:

EM Console -> soa-infra(Server Name) -> SOA Infrastructure -> SOA Administration -> BPEL Properties
Click on "More BPEL Configuration Properties"
Click on the attribute "StatLastN" and set its value to some integer number. Typically you want to set it to 1000 or more.

Step 2: Download and Deploy the bpelstat.war File to the Admin Server

Note: the WAR file contains a JSP that does NOT have any security restriction. You do NOT want to keep it on your production server for a long time as it is a security hazard. Deactivate the war once you are done.
Download the bpelstat.war to your local PC
At the WebLogic Console, go to Deployments -> Install
Click on "upload your file(s)"
Click the "Browse" button to upload the deployment to the Admin Server
Accept the uploaded file as the path and click Next
Check the default option "Install this deployment as an application"
Check "AdminServer" as the target server
Finish the rest of the deployment with default settings
At Console -> Deployments, check the box next to the "bpelstat" application and click on the "Start" button. It will change the state of the app from "prepared" to "active".

Step 3: Invoke the BPEL Statistic Tool

The BPELStat tool merely calls the MBean of the BPEL server, then collects and displays the in-memory performance statistics. You usually want to do that after some peak load.

Go to http://<admin-server-host>:<admin-server-port>/bpelstat
Enter the correct admin hostname, port, username and password
Enter the SOA Server Name from which you want to collect the performance statistics (for example, SOA_MS1)
Click Submit
Repeat for all SOA servers

Step 4: Interpret the BPEL Engine Statistics

You will see a few categories of BPEL statistics on the JSP page. It starts with the overall latency of BPEL processes, grouped by synchronous and asynchronous processes. Then it provides a further breakdown of the measurements through the lifetime of a BPEL request, which is called the "request breakdown".

1. Overall latency of BPEL processes

The top of the page shows that the elapsed time of executing the synchronous process TestSyncBPELProcess from the composite TestComposite averages about 1543.21ms, while the elapsed time of executing the asynchronous process TestAsyncBPELProcess from the composite TestComposite2 averages about 1765.43ms. The maximum and minimum latencies are also shown.

Synchronous process statistics:

<statistics>
    <stats key="default/TestComposite!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestSyncBPELProcess" min="1234" max="4567" average="1543.21" count="1000">
    </stats>
</statistics>

Asynchronous process statistics:

<statistics>
    <stats key="default/TestComposite2!2.0.2-ScopedJMSOSB*soa_bfba2527-a9ba-41a7-95c5-87e49c32f4ff/TestAsyncBPELProcess" min="2234" max="3234" average="1765.43" count="1000">
    </stats>
</statistics>

2. Request breakdown

Under the overall latency categorized by synchronous and asynchronous processes is the "request breakdown". Organized by statistic keys, the request breakdown gives finer-grained performance statistics through the lifetime of the BPEL requests. It uses indentation to show the hierarchy of the statistics.

Request breakdown:

<statistics>
    <stats key="eng-composite-request" min="0" max="0" average="0.0" count="0">
        <stats key="eng-single-request" min="22" max="606" average="258.43" count="277">
            <stats key="populate-context" min="0" max="0" average="0.0" count="248">

Please note that in SOA 11.1.1.6, the statistics under the request breakdown are aggregated across all the BPEL processes based on statistic keys. It does not differentiate between BPEL processes. If two BPEL processes happen to have statistics that share the same statistic key, the statistics from the two BPEL processes will be aggregated together. Keep this in mind as we go through more details below.

2.1 BPEL process activity latencies

A very useful measurement in the request breakdown is the performance statistics of the BPEL activities you put in your BPEL processes: Assign, Invoke, Receive, etc.
The names of the measurements in the JSP page come directly from the names you assign to each BPEL activity. These measurements are under the statistic key "actual-perform".

Example 1: Following is the measurement for the BPEL activity "AssignInvokeCreditProvider_Input", which looks like an Assign activity in a BPEL process that assigns an input variable before passing it to the invocation:

<stats key="AssignInvokeCreditProvider_Input" min="1" max="8" average="1.9" count="153">
    <stats key="sensor-send-activity-data" min="0" max="1" average="0.0" count="306">
    </stats>
    <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
    </stats>
    <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
    </stats>
</stats>

Note: because, as previously mentioned, the statistics across all BPEL processes are aggregated together based on statistic keys, if two BPEL processes happen to name their Invoke activity the same, they will show up as one measurement (i.e. one statistic key).

Example 2: Following is the measurement of the BPEL activity called "InvokeCreditProvider". You can not only see that on average it takes 3.31ms to finish this call (pretty fast), but also see from the further breakdown that most of those 3.31ms were spent on "invoke-service":

<stats key="InvokeCreditProvider" min="1" max="13" average="3.31" count="153">
    <stats key="initiate-correlation-set-again" min="0" max="0" average="0.0" count="153">
    </stats>
    <stats key="invoke-service" min="1" max="13" average="3.08" count="153">
        <stats key="prep-call" min="0" max="1" average="0.04" count="153">
        </stats>
    </stats>
    <stats key="initiate-correlation-set" min="0" max="0" average="0.0" count="153">
    </stats>
    <stats key="sensor-send-activity-data" min="0" max="0" average="0.0" count="306">
    </stats>
    <stats key="sensor-send-variable-data" min="0" max="0" average="0.0" count="153">
    </stats>
    <stats key="monitor-send-activity-data" min="0" max="0" average="0.0" count="306">
    </stats>
    <stats key="update-audit-trail" min="0" max="2" average="0.03" count="153">
    </stats>
</stats>

2.2 BPEL engine activity latency

Other measurements under the request breakdown are the latencies of underlying system-level engine activities. These activities are not directly tied to a particular BPEL process or process activity, but they are critical factors in the overall engine performance. They include the latency of saving asynchronous requests to the database, and the latency of process dehydration.
My friend Malkit Bhasin is working on providing more information on interpreting the statistics on engine activities on his blog (https://blogs.oracle.com/malkit/). I will update this blog once the information becomes available.

Update on 2012-10-02: My friend Malkit Bhasin has published the detailed interpretation of the BPEL service engine statistics on his blog: http://malkit.blogspot.com/2012/09/oracle-bpel-engine-soa-suite.html.

    Read the article

  • A way of doing real-world test-driven development (and some thoughts about it)

    - by Thomas Weller
Lately, I exchanged some arguments with Derick Bailey about some details of the red-green-refactor cycle of the Test-driven development process. In short, the issue revolved around the fact that it's not enough to have a test red or green, but it's also important to have it red or green for the right reasons. While for me it's sufficient to initially have a NotImplementedException in place, Derick argues that this is not totally correct (see these two posts: Red/Green/Refactor, For The Right Reasons and Red For The Right Reason: Fail By Assertion, Not By Anything Else). And he's right. But on the other hand, I had no idea how his insights could have any practical consequence for my own individual interpretation of the red-green-refactor cycle (which is not really red-green-refactor, at least not in its pure sense; see the rest of this article). This made me think deeply for some days now.

In the end I found out that the 'right reason' changes in my understanding depending on what development phase I'm in. To make this clear (at least I hope it becomes clear…) I started to describe my way of working in some detail, and then something strange happened: the scope of the article slightly shifted from focusing 'only' on the 'right reason' issue to something more general, which you might describe as something like 'Doing real-world TDD in .NET, with massive use of third-party add-ins'. This is because I feel that there is a more general statement about Test-driven development to make: it's high time to speak about the 'How' of TDD, not always only the 'Why'. Much has been said about this, and I myself also contributed to that (see here: TDD is not about testing, it's about how we develop software). But always justifying what you do is very unsatisfying in the long run; it is inherently defensive, and it costs time and effort that could be used for better and more important things. And frankly: I'm somewhat sick and tired of repeating time and again that the test-driven way of software development is highly preferable for many reasons - I don't want to spend my time exclusively on stating the obvious…

So, again, let's say it clearly: TDD is programming, and programming is TDD. Other ways of programming (code-first, sometimes called cowboy-coding) are exceptional and need justification. - I know that there are many people out there who will disagree with this radical statement, and I also know that it's not a description of the real world but more of a mission statement or something. But nevertheless I'm absolutely sure that in some years this statement will be nothing but a platitude.

Side note: Some parts of this post read as if I were paid by JetBrains (the manufacturer of the ReSharper add-in - R#), but I swear I'm not. Rather, I think that Visual Studio is just not production-complete without it, and I wouldn't even consider doing professional work without having this add-in installed...

The three parts of a software component

Before I go into some details, I first should describe my understanding of what belongs to a software component (assembly, type, or method) during the production process (i.e. the coding phase). Roughly, I come up with the three parts shown below:

First, we need to have some initial sort of requirement. This can be a multi-page formal document, a vague idea in some programmer's brain of what might be needed, or anything in between. Either way, there has to be some sort of requirement, be it explicit or not.
– At the C# micro-level, the best way that I have found to formulate that is to define interfaces for just about everything, even for internal classes, and to provide them with exhaustive xml comments. The next step then is to re-formulate these requirements in an executable form. This is specific to the respective programming language. - For C#/.NET, the Gallio framework (which includes MbUnit) in conjunction with the ReSharper add-in for Visual Studio is my toolset of choice. The third part then finally is the production code itself. Its development is entirely driven by the requirements and their executable formulation. This is the delivery; the two other parts are 'only' there to make its production possible, to give it a decent quality and reliability, and to significantly reduce related costs down the maintenance timeline. So while the first two parts are not really relevant for the customer, they are very important for the developer. The customer (or in Scrum terms: the Product Owner) is not interested at all in how the product is developed; he is only interested in the fact that it is developed as cost-effectively as possible, and that it meets his functional and non-functional requirements. The rest is solely a matter of the developer's craftsmanship, and this is what I want to talk about during the remainder of this article…

An example

To demonstrate my way of doing real-world TDD, I decided to show the development of a (very) simple Calculator component. The example is deliberately trivial and silly, as examples always are. I am totally aware of the fact that real life is never that simple, but I only want to show some development principles here…

The requirement

As already said above, I start with writing down some words on the initial requirement, and I normally use interfaces for that, even for internal classes - the typical question "intf or not" doesn't even come to mind. I need them for my usual workflow and using them automatically produces highly componentized and testable code anyway. To think about their usage in every single situation would slow down the production process unnecessarily. So this is what I begin with:

namespace Calculator
{
    /// <summary>
    /// Defines a very simple calculator component for demo purposes.
    /// </summary>
    public interface ICalculator
    {
        /// <summary>
        /// Gets the result of the last successful operation.
        /// </summary>
        /// <value>The last result.</value>
        /// <remarks>
        /// Will be <see langword="null" /> before the first successful operation.
        /// </remarks>
        double? LastResult { get; }

    } // interface ICalculator

} // namespace Calculator

So, I'm not beginning with a test, but with a sort of code declaration - and still I insist on being 100% test-driven. There are three important things here:

Starting this way gives me a method signature, which allows me to use IntelliSense and AutoCompletion and thus eliminates the danger of typos - one of the most regular, annoying, time-consuming, and therefore expensive sources of error in the development process.

In my understanding, the interface definition as a whole is more of a readable requirement document and technical documentation than anything else. So this is at least as much about documentation as about coding. The documentation must completely describe the behavior of the documented element.

I normally use an IoC container or some sort of self-written provider-like model in my architecture.
In either case, I need my components defined via service interfaces anyway. - I will use the LinFu IoC framework here, for no other reason than that it is very simple to use.

The 'Red' (pt. 1)

First I create a folder for the project's third-party libraries and put the LinFu.Core dll there. Then I set up a test project (via a Gallio project template), and add references to the Calculator project and the LinFu dll. Finally I'm ready to write the first test, which will look like the following:

namespace Calculator.Test
{
    [TestFixture]
    public class CalculatorTest
    {
        private readonly ServiceContainer container = new ServiceContainer();

        [Test]
        public void CalculatorLastResultIsInitiallyNull()
        {
            ICalculator calculator = container.GetService<ICalculator>();

            Assert.IsNull(calculator.LastResult);
        }

    } // class CalculatorTest

} // namespace Calculator.Test

This is basically the executable formulation of (part of) what the interface definition states.

Side note: There's one principle of TDD that is just plain wrong in my eyes: I'm talking about the "Red is 'does not compile'" thing. How could a compiler error ever be interpreted as a valid test outcome? I never understood that; it just makes no sense to me. (Or, in Derick's terms: this reason is as wrong as a reason ever could be…) A compiler error tells me: your code is incorrect, but nothing more. Instead, the 'Red' part of the red-green-refactor cycle has a clearly defined meaning to me: it means that the test works as intended and fails only if its assumptions are not met for some reason.

Back to our Calculator. When I execute the above test with R#, the Gallio plugin will give me this output: So this tells me that the test is red for the wrong reason: there's no implementation that the IoC container could load, of course. So let's fix that. With R#, this is very easy: first, create an ICalculator-derived type. Next, implement the interface members. And finally, move the new class to its own file. So far my 'work' was six mouse clicks long; the only thing that's left to do manually here is to add the IoC-specific wiring declaration and also to make the respective class non-public, which I regularly do to force my components to communicate exclusively via interfaces. This is what my Calculator class looks like as of now:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        public double? LastResult
        {
            get
            {
                throw new NotImplementedException();
            }
        }
    }
}

Back to the test fixture, we have to put our IoC container to work:

[TestFixture]
public class CalculatorTest
{
    #region Fields

    private readonly ServiceContainer container = new ServiceContainer();

    #endregion // Fields

    #region Setup/TearDown

    [FixtureSetUp]
    public void FixtureSetUp()
    {
        container.LoadFrom(AppDomain.CurrentDomain.BaseDirectory, "Calculator.dll");
    }

    ...

Because I have an R# live template defined for the setup/teardown method skeleton as well, the only manual coding here again is the IoC-specific stuff: two lines, not more…

The 'Red' (pt. 2)

Now, the execution of the above test gives the following result: This time, the test outcome tells me that the method under test is called.
And this is the point where Derick and I seem to have somewhat different views on the subject: Of course, the test still is worthless regarding the red/green outcome (or: it's still red for the wrong reasons, in that it gives a false negative). But as far as I am concerned, I'm not really interested in the test outcome at this point of the red-green-refactor cycle. Rather, I only want to assert that my test actually calls the right method. If that's the case, I will happily go on to the 'Green' part…

The 'Green'

Making the test green is quite trivial. Just make LastResult an automatic property:

[Implements(typeof(ICalculator))]
internal class Calculator : ICalculator
{
    public double? LastResult { get; private set; }
}

One more round…

Now on to something slightly more demanding (cough…). Let's state that our Calculator exposes an Add() method:

        ...

        /// <summary>
        /// Adds the specified operands.
        /// </summary>
        /// <param name="operand1">The operand1.</param>
        /// <param name="operand2">The operand2.</param>
        /// <returns>The result of the addition.</returns>
        /// <exception cref="ArgumentException">
        /// Argument <paramref name="operand1"/> is &lt; 0.<br/>
        /// -- or --<br/>
        /// Argument <paramref name="operand2"/> is &lt; 0.
        /// </exception>
        double Add(double operand1, double operand2);

} // interface ICalculator

A remark: I sometimes hear the complaint that xml comment stuff like the above is hard to read. That's certainly true, but irrelevant to me, because I read xml code comments with the CR_Documentor tool window. And using that, it looks like this:

Apart from that, I'm heavily using xml code comments (see e.g. here for a detailed guide) because there is the possibility of automating help generation with nightly CI builds (using MS Sandcastle and the Sandcastle Help File Builder), and then publishing the results to some intranet location. This way, a team always has first-class, up-to-date technical documentation at hand about the current codebase. (And, also very important for speeding up things and avoiding typos: you have IntelliSense/AutoCompletion and R# support, and the comments are subject to compiler checking…)

Back to our Calculator again: two more R# clicks implement the Add() skeleton:

        ...

        public double Add(double operand1, double operand2)
        {
            throw new NotImplementedException();
        }

} // class Calculator

As we have stated in the interface definition (which actually serves as our requirement document!), the operands are not allowed to be negative. So let's start implementing that. Here's the test:

[Test]
[Row(-0.5, 2)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

As you can see, I'm using a data-driven unit test method here, mainly for these two reasons: Because I know that I will have to do the same test for the second operand in a few seconds, I save myself from implementing another test method for this purpose. Rather, I will only have to add another Row attribute to the existing one. From the test report below, you can see that the argument values are explicitly printed out.
This can be a valuable documentation feature even when everything is green: one can quickly review what values were tested exactly - the complete Gallio HTML report (as it will be produced by the Continuous Integration runs) shows these values in a quite clear format (see below for an example).

Back to our Calculator development again, this is what the test result tells us at the moment: So we're red again, because there is not yet an implementation… Next we go on and implement the necessary parameter verification to become green again, and then we do the same thing for the second operand. To make a long story short, here's the test and the method implementation at the end of the second cycle:

// in CalculatorTest:

[Test]
[Row(-0.5, 2)]
[Row(295, -123)]
public void AddThrowsOnNegativeOperands(double operand1, double operand2)
{
    ICalculator calculator = container.GetService<ICalculator>();

    Assert.Throws<ArgumentException>(() => calculator.Add(operand1, operand2));
}

// in Calculator:

public double Add(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    throw new NotImplementedException();
}

So far, we have sheltered our method from unwanted input, and now we can safely operate on the parameters without further caring about their validity (this is my interpretation of the Fail Fast principle, which is regarded here in more detail). Now we can think about the method's successful outcomes. First let's write another test for that:

[Test]
[Row(1, 1, 2)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

Again, I'm regularly using row-based test methods for these kinds of unit tests. The above-shown pattern proved to be extremely helpful for my development work; I call it the Defined-Input/Expected-Output test idiom: you define your input arguments together with the expected method result. There are two major benefits from that way of testing:

In the course of refining a method, it's very likely that you will come up with additional test cases. In our case, we might add tests for some edge cases like 'one of the operands is zero' or 'the sum of the two operands causes an overflow', or maybe there's an external test protocol that has to be fulfilled (e.g. an ISO norm for medical software), and this results in the need of testing against additional values. In all these scenarios we only have to add another Row attribute to the test.

Remember that the argument values are written to the test report, so as a side-effect this produces valuable documentation. (This can become especially important if the fulfillment of some sort of external requirements has to be proven.)
So your test method might look something like that in the end:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 2)]
[Row(0, 999999999, 999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, double.MaxValue)]
[Row(4, double.MaxValue - 2.5, double.MaxValue)]
public void TestAdd(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Add(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

And this will produce the following HTML report (with Gallio): Not bad for the amount of work we invested in it, huh? - There might be scenarios where reports like that can be useful for demonstration purposes during a Scrum sprint review…

The last requirement to fulfill is that the LastResult property is expected to store the result of the last operation. I don't show this here, it's trivial enough and brings nothing new…

And finally: Refactor (for the right reasons)

To demonstrate my way of going through the refactoring portion of the red-green-refactor cycle, I added another method to our Calculator component, namely Subtract(). Here's the code (tests and production):

// CalculatorTest.cs:

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtract(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    double result = calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, result);
}

[Test, Description("Arguments: operand1, operand2, expectedResult")]
[Row(1, 1, 0)]
[Row(0, 999999999, -999999999)]
[Row(0, 0, 0)]
[Row(0, double.MaxValue, -double.MaxValue)]
[Row(4, double.MaxValue - 2.5, -double.MaxValue)]
public void TestSubtractGivesExpectedLastResult(double operand1, double operand2, double expectedResult)
{
    ICalculator calculator = container.GetService<ICalculator>();

    calculator.Subtract(operand1, operand2);

    Assert.AreEqual(expectedResult, calculator.LastResult);
}

...

// ICalculator.cs:

/// <summary>
/// Subtracts the specified operands.
/// </summary>
/// <param name="operand1">The operand1.</param>
/// <param name="operand2">The operand2.</param>
/// <returns>The result of the subtraction.</returns>
/// <exception cref="ArgumentException">
/// Argument <paramref name="operand1"/> is &lt; 0.<br/>
/// -- or --<br/>
/// Argument <paramref name="operand2"/> is &lt; 0.
/// </exception>
double Subtract(double operand1, double operand2);

...

// Calculator.cs:

public double Subtract(double operand1, double operand2)
{
    if (operand1 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand1");
    }

    if (operand2 < 0.0)
    {
        throw new ArgumentException("Value must not be negative.", "operand2");
    }

    return (this.LastResult = operand1 - operand2).Value;
}

Obviously, the argument validation stuff that was produced during the red-green part of our cycle duplicates the code from the previous Add() method. So, to avoid code duplication and minimize the number of code lines in the production code, we do an Extract Method refactoring.
One more time, this is only a matter of a few mouse clicks (and giving the new method a name) with R#. Having done that, our production code finally looks like this:

using System;
using LinFu.IoC.Configuration;

namespace Calculator
{
    [Implements(typeof(ICalculator))]
    internal class Calculator : ICalculator
    {
        #region ICalculator

        public double? LastResult { get; private set; }

        public double Add(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 + operand2).Value;
        }

        public double Subtract(double operand1, double operand2)
        {
            ThrowIfOneOperandIsInvalid(operand1, operand2);

            return (this.LastResult = operand1 - operand2).Value;
        }

        #endregion // ICalculator

        #region Implementation (Helper)

        private static void ThrowIfOneOperandIsInvalid(double operand1, double operand2)
        {
            if (operand1 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand1");
            }

            if (operand2 < 0.0)
            {
                throw new ArgumentException("Value must not be negative.", "operand2");
            }
        }

        #endregion // Implementation (Helper)

    } // class Calculator

} // namespace Calculator

But is the above worth the effort at all? It's obviously trivial and not very impressive. All our tests were green (for the right reasons), and refactoring the code did not change anything. It's not immediately clear how this refactoring work adds value to the project. Derick puts it like this:

STOP! Hold on a second… before you go any further and before you even think about refactoring what you just wrote to make your test pass, you need to understand something: if your done with your requirements after making the test green, you are not required to refactor the code. I know… I'm speaking heresy, here. Toss me to the wolves, I've gone over to the dark side! Seriously, though… if your test is passing for the right reasons, and you do not need to write any test or any more code for you class at this point, what value does refactoring add?

Derick immediately answers his own question:

So why should you follow the refactor portion of red/green/refactor? When you have added code that makes the system less readable, less understandable, less expressive of the domain or concern's intentions, less architecturally sound, less DRY, etc, then you should refactor it.

I couldn't state it more precisely. From my personal perspective, I'd add the following: you have to keep in mind that real-world software systems are usually quite large and there are dozens or even hundreds of occasions where micro-refactorings like the above can be applied. It's the sum of them all that counts. And to have a good overall quality of the system (e.g. in terms of the Code Duplication Percentage metric) you have to be pedantic on the individual, seemingly trivial cases. My job regularly requires the reading and understanding of 'foreign' code.
So code quality/readability really makes a HUGE difference for me - sometimes it can be even the difference between project success and failure…

Conclusions

The above described development process emerged over the years, and there were mainly two things that guided its evolution (you might call them eternal principles, personal beliefs, or anything in between):

Test-driven development is the normal, natural way of writing software; code-first is exceptional. So 'doing TDD or not' is not a question. And good, stable code can only reliably be produced by doing TDD (yes, I know: many will strongly disagree here again, but I've never seen high-quality code - and high-quality code is code that stood the test of time and causes low maintenance costs - that was produced code-first…)

It's the production code that pays our bills in the end. (Though I have seen customers these days who demand an acceptance test battery as part of the final delivery. Things seem to be going in the right direction…) The test code serves 'only' to make the production code work. But it's the number of delivered features which solely counts at the end of the day - no matter how much test code you wrote or how good it is.

With these two things in mind, I tried to optimize my coding process for coding speed - or, in business terms: productivity - without sacrificing the principles of TDD (more than I'd do either way…). As a result, I consider a ratio of about 3-5/1 for test code vs. production code as normal and desirable. In other words: roughly 60-80% of my code is test code. (This might sound heavy, but that is mainly due to the fact that software development standards are only beginning to evolve. The entire software development profession is very young, historically speaking; it is only at its very beginning, and there are no viable standards yet. If you think about software development as a kind of casting process, where the test code is the mold and the resulting production code is the final product, then the above ratio no longer sounds extraordinary…)

Although the above might look like very much unnecessary work at first sight, it's not. With the aid of the mentioned add-ins, doing all the above is a matter of minutes, sometimes seconds (while writing this post took hours and days…). The most important thing is to have the right tools at hand. Slow developer machines or the lack of a tool or something like that - for 'saving' a few 100 bucks - is just not acceptable and a very bad decision in business terms (though I have quite often seen and heard exactly that…). Production of high-quality products needs the usage of high-quality tools. This is a platitude that every craftsman knows…

The here described round-trip will take me about five to ten minutes in my real-world development practice. I guess it's about 30% more time compared to developing the 'traditional' (code-first) way. But the so manufactured 'product' is of much higher quality and massively reduces maintenance costs, which is by far the single biggest cost factor, as I showed in this previous post: It's the maintenance, stupid! (or: Something is rotten in developerland.). In the end, this is a highly cost-effective way of software development…

But on the other hand, there clearly is a trade-off here: coding speed vs. code quality/later maintenance costs. The here described development method might be a perfect fit for the overwhelming majority of software projects, but there certainly are some scenarios where it's not - e.g.
if time-to-market is crucial for a software project. So this is a business decision in the end. It's just that you have to know what you're doing and what consequences this might have…

Some last words

First, I'd like to thank Derick Bailey again. His two aforementioned posts (which I strongly recommend reading) inspired me to think deeply about my own personal way of doing TDD and to clarify my thoughts about it. I wouldn't have done that without this inspiration. I really enjoy that kind of discussion… I agree with him in all respects. But I don't know (yet?) how to bring his insights into the described production process without slowing things down. The above described method proved to be very "good enough" in my practical experience. But of course, I'm open to suggestions here…

My rationale for now is: if the test is initially red during the red-green-refactor cycle, the 'right reason' is that it actually calls the right method, but this method is not yet operational. Later on, when the cycle is finished and the tests become part of the regular, automated Continuous Integration process, 'red' certainly must occur for the 'right reason': in this phase, 'red' MUST mean nothing but an unfulfilled assertion - Fail By Assertion, Not By Anything Else!

    Read the article

  • Windows Azure: Import/Export Hard Drives, VM ACLs, Web Sockets, Remote Debugging, Continuous Delivery, New Relic, Billing Alerts and More

    - by ScottGu
Two weeks ago we released a giant set of improvements to Windows Azure, as well as a significant update of the Windows Azure SDK. This morning we released another massive set of enhancements to Windows Azure. Today's new capabilities include:

Storage: Import/Export Hard Disk Drives to your Storage Accounts
HDInsight: General Availability of our Hadoop Service in the cloud
Virtual Machines: New VM Gallery, ACL support for VIPs
Web Sites: WebSocket and Remote Debugging Support
Notification Hubs: Segmented customer push notification support with tag expressions
TFS & GIT: Continuous Delivery Support for Web Sites + Cloud Services
Developer Analytics: New Relic support for Web Sites + Mobile Services
Service Bus: Support for partitioned queues and topics
Billing: New Billing Alert Service that sends email notifications when your bill hits a threshold you define

All of these improvements are now available to use immediately (note that some features are still in preview). Below are more details about them.

Storage: Import/Export Hard Disk Drives to Windows Azure

I am excited to announce the preview of our new Windows Azure Import/Export Service! The Windows Azure Import/Export Service enables you to move large amounts of on-premises data into and out of your Windows Azure Storage accounts. It does this by enabling you to securely ship hard disk drives directly to our Windows Azure data centers. Once we receive the drives we'll automatically transfer the data to or from your Windows Azure Storage account. This enables you to import or export massive amounts of data more quickly and cost-effectively (and not be constrained by available network bandwidth).

Encrypted Transport

Our Import/Export service provides built-in support for BitLocker disk encryption - which enables you to securely encrypt data on the hard drives before you send it, and not have to worry about it being compromised even if the disk is lost/stolen in transit (since the content on the transported hard drives is completely encrypted and you are the only one who has the key to it). The drive preparation tool we are shipping today makes setting up BitLocker encryption on these hard drives easy.

How to Import/Export your first Hard Drive of Data

You can read our Getting Started Guide to learn more about how to begin using the import/export service. You can create import and export jobs via the Windows Azure Management Portal as well as programmatically using our Server Management APIs. It is really easy to create a new import or export job using the Windows Azure Management Portal. Simply navigate to a Windows Azure storage account, and then click the new Import/Export tab now available within it (note: if you don't have this tab make sure to sign up for the Import/Export preview). Then click the "Create Import Job" or "Create Export Job" commands at the bottom of it. This will launch a wizard that easily walks you through the steps required.

For more comprehensive information about Import/Export, refer to the Windows Azure Storage team blog. You can also send questions and comments to the [email protected] email address. We think you'll find this new service makes it much easier to move data into and out of Windows Azure, and it will dramatically cut down the network bandwidth required when working on large data migration projects. We hope you like it.

HDInsight: 100% Compatible Hadoop Service in the Cloud

Last week we announced the general availability release of Windows Azure HDInsight.
HDInsight is a 100% compatible Hadoop service that allows you to easily provision and manage Hadoop clusters for big data processing in Windows Azure. This release is now live in production, backed by an enterprise SLA, supported 24x7 by Microsoft Support, and is ready to use for production scenarios.

HDInsight allows you to use Apache Hadoop tools, such as Pig and Hive, to process large amounts of data in Windows Azure Blob Storage. Because data is stored in Windows Azure Blob Storage, you can choose to dynamically create Hadoop clusters only when you need them, and then shut them down when they are no longer required (since you pay only for the time the Hadoop cluster instances are running, this provides a super cost-effective way to use them). You can create Hadoop clusters using either the Windows Azure Management Portal (see below) or using our PowerShell and Cross Platform Command line tools.

The import/export hard drive support that came out today is a perfect companion service to use with HDInsight - the combination allows you to easily ingest, process, and optionally export a limitless amount of data. We've also integrated HDInsight with our Business Intelligence tools, so users can leverage familiar tools like Excel in order to analyze the output of jobs. You can find out more about how to get started with HDInsight here.

Virtual Machines: VM Gallery Enhancements

Today's update of Windows Azure brings with it a new Virtual Machine gallery that you can use to create new VMs in the cloud. You can launch the gallery by doing New->Compute->Virtual Machine->From Gallery within the Windows Azure Management Portal. The new Virtual Machine Gallery includes some nice enhancements that make it even easier to use:

Search: You can now easily search and filter images using the search box in the top-right of the dialog. For example, simply type "SQL" and we'll filter to show those images in the gallery that contain that substring.

Category Tree-view: Each month we add more built-in VM images to the gallery. You can continue to browse these using the "All" view within the VM Gallery - or now quickly filter them using the category tree-view on the left-hand side of the dialog. For example, by selecting "Oracle" in the tree-view you can now quickly filter to see the official Oracle supplied images.

MSDN and Supported checkboxes: With today's update we are also introducing filters that make it easy to filter out types of images that you may not be interested in. The first checkbox is MSDN: using this filter you can exclude any image that is not part of the Windows Azure benefits for MSDN subscribers (which have highly discounted pricing - you can learn more about the MSDN pricing here). The second checkbox is Supported: this filter will exclude any image that contains prerelease software, so you can feel confident that the software you choose to deploy is fully supported by Windows Azure and our partners.

Sort options: We sort gallery images by what we think customers are most interested in, but sometimes you might want to sort using different views. So we're providing some additional sort options, like "Newest," to customize the image list for what suits you best.

Pricing information: We now provide additional pricing information about images and options on how to cost-effectively run them directly within the VM Gallery.

The above improvements make it even easier to use the VM Gallery and to quickly create, launch, and run Virtual Machines in the cloud.
Virtual Machines: ACL Support for VIPs

A few months ago we exposed the ability to configure Access Control Lists (ACLs) for Virtual Machines using Windows PowerShell cmdlets and our Service Management API. With today's release, you can now configure VM ACLs using the Windows Azure Management Portal as well. You can do this by clicking the new Manage ACL command in the Endpoints tab of a virtual machine instance. This will enable you to configure an ordered list of permit and deny rules to scope the traffic that can access your VM's network endpoints. For example, if you were on a virtual network, you could limit RDP access to a Windows Azure virtual machine to only a few computers attached to your enterprise. Or if you weren't on a virtual network you could alternatively limit traffic from public IPs that can access your workloads.

Here are the default behaviors for ACLs in Windows Azure:

By default (i.e. no rules specified), all traffic is permitted.
When using only Permit rules, all other traffic is denied.
When using only Deny rules, all other traffic is permitted.
When there is a combination of Permit and Deny rules, all other traffic is denied.

Lastly, remember that configuring endpoints does not automatically configure them within the VM if it also has firewall rules enabled at the OS level. So if you create an endpoint using the Windows Azure Management Portal, Windows PowerShell, or the REST API, be sure to also configure your guest VM firewall appropriately as well.

Web Sites: Web Sockets Support

With today's release you can now use Web Sockets with Windows Azure Web Sites. This feature enables you to easily integrate real-time communication scenarios within your web based applications, and is available at no extra charge (it even works with the free tier). Higher level programming libraries like SignalR and socket.io are also now supported with it. You can enable Web Sockets support on a web site by navigating to the Configure tab of a Web Site, and by toggling Web Sockets support to "on". Once Web Sockets is enabled you can start to integrate some really cool scenarios into your web applications. Check out the new SignalR documentation hub on www.asp.net to learn more about some of the awesome scenarios you can do with it.

Web Sites: Remote Debugging Support

The Windows Azure SDK 2.2 we released two weeks ago introduced remote debugging support for Windows Azure Cloud Services. With today's Windows Azure release we are extending this remote debugging support to also work with Windows Azure Web Sites. With live, remote debugging support inside of Visual Studio, you are able to have more visibility than ever before into how your code is operating live in Windows Azure. It is now super easy to attach the debugger and quickly see what is going on with your application in the cloud.

Remote Debugging of a Windows Azure Web Site using VS 2013

Enabling the remote debugging of a Windows Azure Web Site using VS 2013 is really easy. Start by opening up your web application's project within Visual Studio. Then navigate to the "Server Explorer" tab within Visual Studio, and click on the deployed web site you want to debug that is running within Windows Azure, using the Windows Azure->Web Sites node in the Server Explorer. Then right-click and choose the "Attach Debugger" option on it. When you do this Visual Studio will remotely attach the debugger to the Web Site running within Windows Azure.
The debugger will then stop the web site's execution when it hits any breakpoints that you have set within your web application's project inside Visual Studio. For example, below I set a breakpoint on the "ViewBag.Message" assignment statement within the HomeController of the standard ASP.NET MVC project template. When I hit refresh on the "About" page of the web site within the browser, the breakpoint was triggered and I am now able to debug the app remotely using Visual Studio. Note how we can debug variables (including autos/watchlist/etc), as well as use the Immediate and Command Windows. In the debug session above I used the Immediate Window to explore some of the request object state, as well as to dynamically change the ViewBag.Message property. When we click the "Continue" button (or press F5) the app will continue execution and the Web Site will render the content back to the browser. This makes it super easy to debug web apps remotely.

Tips for Better Debugging

To get the best experience while debugging, we recommend publishing your site using the Debug configuration within Visual Studio's Web Publish dialog. This will ensure that debug symbol information is uploaded to the Web Site, which will enable a richer debug experience within Visual Studio. You can find this option on the Web Publish dialog on the Settings tab. When you ultimately deploy/run the application in production we recommend using the "Release" configuration setting - the release configuration is memory optimized and will provide the best production performance. To learn more about diagnosing and debugging Windows Azure Web Sites read our new Troubleshooting Windows Azure Web Sites in Visual Studio guide.

Notification Hubs: Segmented Push Notification support with tag expressions

In August we announced the General Availability of Windows Azure Notification Hubs - a powerful Mobile Push Notifications service that makes it easy to send high volume push notifications with low latency from any mobile app back-end. Notification hubs can be used with any mobile app back-end (including ones built using our Mobile Services capability) and can also be used with back-ends that run in the cloud as well as on-premises.

Beginning with the initial release, Notification Hubs allowed developers to send personalized push notifications to both individual users and groups of users by interest, by associating their devices with tags representing the logical target of the notification. For example, by registering all devices of customers interested in a favorite MLB team with a corresponding tag, it is possible to broadcast one message to millions of Boston Red Sox fans and another message to millions of St. Louis Cardinals fans, each with a single API call.
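As a rough sketch of what that single-tag broadcast looks like in back-end code (mirroring the WindowsNotification / SendNotificationAsync style used in the tag-expression snippet further below; the hub object and payload string are hypothetical placeholders, not from the article):

// Assumes an already-configured Notification Hubs client ('hub') and a
// prepared toast/tile payload string ('messagePayload') - both hypothetical.
var notification = new WindowsNotification(messagePayload);

// Only devices registered with the "Follows:RedSox" tag receive this message.
await hub.SendNotificationAsync(notification, "Follows:RedSox");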
- Push content to multiple segments in a single message - e.g. send rain delay information only to users who are tagged as either a Boston Red Sox fan OR a St. Louis Cardinals fan
- Avoid presenting subsets of a segment with irrelevant content - e.g. send a season ticket availability reminder to users who are tagged as a Boston Red Sox fan but NOT a season ticket holder

To illustrate with code, consider a restaurant chain app that sends an offer related to a Red Sox vs. Cardinals game to users in Boston. Devices can be tagged by your app with location tags (e.g. "Loc:Boston") and interest tags (e.g. "Follows:RedSox", "Follows:Cardinals"), and a notification can then be sent by your back-end to "(Follows:RedSox || Follows:Cardinals) && Loc:Boston" in order to deliver an offer to all devices in Boston that follow either the Red Sox or the Cardinals. This can be done directly in your server back-end send logic using the code below:

var notification = new WindowsNotification(messagePayload);
hub.SendNotificationAsync(notification, "(Follows:RedSox || Follows:Cardinals) && Loc:Boston");

In your expressions you can use all the Boolean operators: AND (&&), OR (||), and NOT (!). Some other cool use cases for tag expressions that are now supported include:

- Social: to "all my group except me" - group:id && !user:id
- Events: a touchdown event is sent to everybody following either team or any of the players involved in the action - Followteam:A || Followteam:B || followplayer:1 || followplayer:2 …
- Hours: send notifications at specific times, e.g. tag devices with their time zone and, when it is 12pm in Seattle, send to GMT8 && follows:thaifood
- Versions and platforms: send a reminder to people still using your first version for Android - version:1.0 && platform:Android

For help getting started with Notification Hubs, visit the Notification Hubs documentation center. Then download the latest NuGet package (or use the Notification Hubs REST APIs directly) to start sending push notifications using tag expressions. They are really powerful and enable a bunch of great new scenarios.

TFS & Git: Continuous Delivery Support for Web Sites + Cloud Services

With today's Windows Azure release we are making it really easy to enable continuous delivery support with Windows Azure and Team Foundation Services. Team Foundation Services is a cloud-based offering from Microsoft that provides integrated source control (with both TFS and Git support), a build server, test execution, collaboration tools, and agile planning support. It makes it really easy to set up a team project (complete with automated builds and test runners) in the cloud, and it has really rich integration with Visual Studio.

With today's Windows Azure release it is now really easy to enable continuous delivery support with both TFS and Git based repositories hosted using Team Foundation Services. This enables a workflow where, when code is checked in, builds successfully on an automated build server, and passes all tests, the app is automatically deployed to Windows Azure with zero manual intervention or work required.

The screen-shots below demonstrate how to quickly set up a continuous delivery workflow to Windows Azure with a Git-based ASP.NET MVC project hosted using Team Foundation Services.

Enabling Continuous Delivery to Windows Azure with Team Foundation Services

The project I'm going to enable continuous delivery with is a simple ASP.NET MVC project whose source code I'm hosting using Team Foundation Services.
I did this by creating a "SimpleContinuousDeploymentTest" repository there using Git - and then used the new built-in Git tooling support within Visual Studio 2013 to push the source code to it. Below is a screen-shot of the Git repository hosted within Team Foundation Services:

I can access the repository within Visual Studio 2013 and easily make commits with it (as well as branch, merge, and perform other tasks). Using VS 2013 I can also set up automated builds to take place in the cloud using Team Foundation Services every time someone checks in code to the repository:

The cool thing about this is that I don't have to buy or rent my own build server - Team Foundation Services automatically maintains its own build server farm and can automatically queue up a build for me (for free) every time someone checks in code using the above settings. This build server (and automated testing) support now works with both TFS and Git based source control repositories.

Connecting a Team Foundation Services project to Windows Azure

Once I have a source repository hosted in Team Foundation Services with automated builds and testing set up, I can go even further and configure it so that the app is automatically deployed to Windows Azure when a source code commit is made to the repository (assuming the build + tests pass). Enabling this is now really easy.

To set this up with a Windows Azure Web Site, simply use the New -> Compute -> Web Site -> Custom Create command inside the Windows Azure Management Portal. This will display a dialog like the one below. I gave the web site a name and then made sure the "Publish from source control" checkbox was selected:

When we click next we'll be prompted for the location of the source repository. We'll select "Team Foundation Services":

Once we do this we'll be prompted for the Team Foundation Services account that our source repository is hosted under (in this case my TFS account is "scottguthrie"):

When we click the "Authorize Now" button we'll be prompted to give Windows Azure permission to connect to the Team Foundation Services account. Once we do this we'll be prompted to pick the source repository we want to connect to. Starting with today's Windows Azure release you can now connect to both TFS and Git based source repositories. This new support allows me to connect to the "SimpleContinuousDeploymentTest" repository we created earlier:

Clicking the finish button will then create the Web Site with the continuous delivery hooks set up with Team Foundation Services. Now every time someone pushes source control to the repository in Team Foundation Services, it will kick off an automated build, run all of the unit tests in the solution, and, if they pass, automatically deploy the app to our Web Site in Windows Azure. You can monitor the history and status of these automated deployments using the Deployments tab within the Web Site:

This enables a really slick continuous delivery workflow, and enables you to build and deploy apps in a really nice way.

Developer Analytics: New Relic support for Web Sites + Mobile Services

With today's Windows Azure release we are making it really easy to enable developer analytics and monitoring support with both Windows Azure Web Sites and Windows Azure Mobile Services. We are partnering with New Relic, who provide a great dev analytics and app performance monitoring offering, to enable this - and we have updated the Windows Azure Management Portal to make it really easy to configure.
Enabling New Relic with a Windows Azure Web Site

Enabling New Relic support with a Windows Azure Web Site is now really easy. Simply navigate to the Configure tab of a Web Site and scroll down to the "developer analytics" section that is now within it:

Clicking the "add-on" button will display some additional UI. If you don't already have a New Relic subscription, you can click the "view windows azure store" button to obtain one (note: New Relic has a perpetually free tier, so you can enable it without paying anything):

Clicking the "view windows azure store" button will launch the integrated Windows Azure Store experience we have within the Windows Azure Management Portal. You can use this to browse a variety of great add-on services - including New Relic:

Select "New Relic" within the dialog above, then click the next button, and you'll be able to choose which type of New Relic subscription you wish to purchase. For this demo we'll simply select the "Free Standard Version" - which does not cost anything and can be used forever:

Once we've signed up for our New Relic subscription and added it to our Windows Azure account, we can go back to the Web Site's Configure tab and choose to use the New Relic add-on with our Windows Azure Web Site. We can do this by simply selecting it from the "add-on" dropdown (it is automatically populated once we have a New Relic subscription in our account):

Clicking the "Save" button will then cause the Windows Azure Management Portal to automatically populate all of the needed New Relic configuration settings on our Web Site:

Deploying the New Relic Agent as part of a Web Site

The final step to enable developer analytics using New Relic is to add the New Relic runtime agent to our web app. We can do this within Visual Studio by right-clicking on our web project and selecting the "Manage NuGet Packages" context menu:

This will bring up the NuGet package manager. You can search for "New Relic" within it to find the New Relic agent. Note that there is both a 32-bit and a 64-bit edition of it - make sure to install the version that matches how your Web Site is running within Windows Azure (note: you can configure your Web Site to run in either 32-bit or 64-bit mode using the Web Site's Configure tab within the Windows Azure Management Portal):

Once we install the NuGet package we are all set to go. We simply re-publish the web site to Windows Azure, and New Relic will automatically start monitoring the application.

Monitoring a Web Site using New Relic

Now that the application has developer analytics support with New Relic enabled, we can launch the New Relic monitoring portal to start monitoring its health. We can do this by clicking on the "Add Ons" tab in the left-hand side of the Windows Azure Management Portal, and then selecting the New Relic add-on we signed up for. The Windows Azure Management Portal will provide some default information about the add-on when we do this. Clicking the "Manage" button in the tray at the bottom will launch a new browser tab and single-sign us into the New Relic monitoring portal associated with our account:

When we do this a new browser tab will launch with the New Relic admin tool loaded within it:

We can now see insights into how our app is performing - without having written a single line of monitoring code.
The New Relic service provides a ton of great built-in monitoring features, allowing us to quickly see:

- Performance times (including browser rendering speed) for the overall site and individual pages. You can optionally set alert thresholds to trigger if the speed does not meet a level you specify.
- Information about where in the world your customers are hitting the site from (and how performance varies by region).
- Details on the latency of the external services your web apps are using (for example: SQL, Storage, Twitter, etc.).
- Error information, including call stack details for exceptions that have occurred at runtime.
- SQL Server profiling information - including which queries executed against your database and what their performance was.
- And a whole bunch more…

The cool thing about New Relic is that you don't need to write monitoring code within your application to get all of the above reports (plus a lot more). The New Relic agent automatically enables the CLR profiler within your application and captures the information necessary to produce these reports. This makes it super easy to get started and immediately have a rich developer analytics view of your solutions with very little effort.

If you haven't tried New Relic with Windows Azure yet, I recommend you do so - I think you'll find it helps you build even better cloud applications. Following the above steps will get you started and deliver a really good application monitoring solution in only minutes.

Service Bus: Support for partitioned queues and topics

With today's release, we are enabling support within Service Bus for partitioned queues and topics. Partitioning enables you to achieve higher message throughput and better availability from your queues and topics. Higher message throughput is achieved by backing each partitioned queue or topic with multiple message brokers; the multiple messaging stores behind them also provide higher availability. You can create a partitioned queue or topic by simply checking the Enable Partitioning option in the custom create wizard for a Queue or Topic (a short code sketch of doing the same from code appears at the end of this section):

Read this article to learn more about partitioned queues and topics and how to take advantage of them today.

Billing: New Billing Alert Service

Today's Windows Azure update enables a new Billing Alert Service Preview that sends you proactive email notifications when your Windows Azure bill goes above a monetary threshold that you configure. This makes it easier to manage your bill and avoid potential surprises at the end of the month.

With the Billing Alert Service Preview, you can now create email alerts to monitor and manage your monetary credits or your current bill total. To set up an alert, first sign up for the free Billing Alert Service Preview. Then visit the account management page, click on a subscription you have set up, and navigate to the new Alerts tab that is available:

The Alerts tab allows you to set up email alerts that will be sent automatically once a certain threshold is hit. For example, by clicking the "add alert" button above I can set up a rule to send myself email any time my Windows Azure bill goes above $100 for the month:

The Billing Alert Service will evolve to support additional aspects of your bill as well as additional forms of alerts, such as SMS. Try out the new Billing Alert Service Preview today and give us feedback.
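As the code-level companion to the Service Bus partitioning section above, here is a minimal sketch using the NamespaceManager API from the Service Bus SDK. The connection string setting name and the queue name are illustrative, not from the original post.

using Microsoft.ServiceBus;
using Microsoft.ServiceBus.Messaging;
using Microsoft.WindowsAzure;

// Create a queue with partitioning enabled - the programmatic equivalent of
// checking "Enable Partitioning" in the portal's custom create wizard.
var connectionString = CloudConfigurationManager.GetSetting("Microsoft.ServiceBus.ConnectionString");
var namespaceManager = NamespaceManager.CreateFromConnectionString(connectionString);

var description = new QueueDescription("ordersqueue")
{
    EnablePartitioning = true   // spread the queue across multiple brokers and stores
};

if (!namespaceManager.QueueExists(description.Path))
{
    namespaceManager.CreateQueue(description);
}

Sending and receiving are unchanged: a regular QueueClient works against a partitioned queue just as it does against a non-partitioned one.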
Summary

Today's Windows Azure release enables a ton of great new scenarios, and makes building applications hosted in the cloud even easier.

If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

Hope this helps,

Scott

P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu


  • Table sorting & pagination with jQuery and Razor in ASP.NET MVC

    - by hajan
Introduction

jQuery enjoys living inside pages built on top of the ASP.NET MVC Framework. ASP.NET MVC is a place where things are organized very well, and it is quite hard to make them dirty, especially because the pattern pushes you toward purity (you can still make it dirty if you want to ;) ). We all know how easy it is to build an HTML table with a header row, footer row, and table rows showing some data. With ASP.NET MVC we can do this pretty easily, but the result will be a pure HTML table that only shows data and does not include sorting, pagination, or the other advanced features we were used to having in the ASP.NET WebForms GridView. Ok, there is the WebGrid MVC helper, but what if we want to build something from a pure table in our own clean style? In one of my recent projects, I've been using the jQuery tablesorter plugin together with its companion tablesorter.pager plugin. You don't need to know jQuery to make this work… You need to know a little CSS to create a nice design for your table, but of course you can use mine from the demo… So, what you will see in this blog is how to attach these plugins to a pure HTML table plus a div for pagination, and give your table advanced sorting and pagination features.

Demo Project Resources

The resources I'm using for this demo project are shown in the following Solution Explorer print screen:

- Content/images – folder that contains all the up/down arrow images, pagination buttons, etc. You can freely replace them with your own, but keep the names the same if you don't want to change anything in the CSS we will build later.
- Content/Site.css – the main CSS theme, where we will also add the theme for our table
- Controllers/HomeController.cs – the controller I'm using for this project
- Models/Person.cs – for this demo, I'm using the Person.cs class
- Scripts – jquery-1.4.4.min.js, jquery.tablesorter.js, jquery.tablesorter.pager.js – the scripts required to make the magic happen
- Views/Home/Index.cshtml – the Index view (Razor view engine)

The other items are not important for the demo.

ASP.NET MVC

1. Model

In this demo I use only one class, Person, which defines a Person entity with several properties. You can use your own model, maybe one which will access data from a database or any other resource.

Person.cs

public class Person
{
    public string Name { get; set; }
    public string Surname { get; set; }
    public string Email { get; set; }
    public int? Phone { get; set; }
    public DateTime? DateAdded { get; set; }
    public int? Age { get; set; }

    public Person(string name, string surname, string email,
        int? phone, DateTime? dateadded, int? age)
    {
        Name = name;
        Surname = surname;
        Email = email;
        Phone = phone;
        DateAdded = dateadded;
        Age = age;
    }
}

2. View

In our example, we have only one page, Index.cshtml, where the Razor view engine is used. Razor is my favorite view engine for ASP.NET MVC because it's very intuitive, fluid, and keeps your code clean.

3. Controller

Since this is a simple example with one page, we use one HomeController.cs with two methods: one of ActionResult type (Index) and another, GetPeople(), used to create and return the list of people.
HomeController.cs

public class HomeController : Controller
{
    //
    // GET: /Home/

    public ActionResult Index()
    {
        ViewBag.People = GetPeople();
        return View();
    }

    public List<Person> GetPeople()
    {
        List<Person> listPeople = new List<Person>();

        listPeople.Add(new Person("Hajan", "Selmani", "[email protected]", 070070070, DateTime.Now, 25));
        listPeople.Add(new Person("Straight", "Dean", "[email protected]", 123456789, DateTime.Now.AddDays(-5), 35));
        listPeople.Add(new Person("Karsen", "Livia", "[email protected]", 46874651, DateTime.Now.AddDays(-2), 31));
        listPeople.Add(new Person("Ringer", "Anne", "[email protected]", null, DateTime.Now, null));
        listPeople.Add(new Person("O'Leary", "Michael", "[email protected]", 32424344, DateTime.Now, 44));
        listPeople.Add(new Person("Gringlesby", "Anne", "[email protected]", null, DateTime.Now.AddDays(-9), 18));
        listPeople.Add(new Person("Locksley", "Stearns", "[email protected]", 2135345, DateTime.Now, null));
        listPeople.Add(new Person("DeFrance", "Michel", "[email protected]", 235325352, DateTime.Now.AddDays(-18), null));
        listPeople.Add(new Person("White", "Johnson", null, null, DateTime.Now.AddDays(-22), 55));
        listPeople.Add(new Person("Panteley", "Sylvia", null, 23233223, DateTime.Now.AddDays(-1), 32));
        listPeople.Add(new Person("Blotchet-Halls", "Reginald", null, 323243423, DateTime.Now, 26));
        listPeople.Add(new Person("Merr", "South", "[email protected]", 3232442, DateTime.Now.AddDays(-5), 85));
        listPeople.Add(new Person("MacFeather", "Stearns", "[email protected]", null, DateTime.Now, null));

        return listPeople;
    }
}

TABLE CSS/HTML DESIGN

Now, let's start with the implementation. First of all, let's create the table structure and the main CSS.

1. HTML Structure

@{
    Layout = null;
}

<!DOCTYPE html>
<html>
<head>
    <title>ASP.NET & jQuery</title>
    <!-- referencing styles, scripts and writing custom js scripts will go here -->
</head>
<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th>value</th>
                </tr>
            </thead>
            <tbody>
                <tr>
                    <td>value</td>
                </tr>
            </tbody>
            <tfoot>
                <tr>
                    <th>value</th>
                </tr>
            </tfoot>
        </table>
        <div id="pager">
        </div>
    </div>
</body>
</html>

So, this is the main structure you need to create for each of the tables where you want to apply this functionality. Of course, the scripts are referenced only once per page ;). As you can see, our table has the class tablesorter, and we also have a div with id pager. In the next steps we will use both of these to create the needed functionality.
The complete Index.cshtml, coded to get the data from the controller and display it in the page, is:

<body>
    <div>
        <table class="tablesorter">
            <thead>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </thead>
            <tbody>
            @{
                foreach (var p in ViewBag.People)
                {
                <tr>
                    <td>@p.Name</td>
                    <td>@p.Surname</td>
                    <td>@p.Email</td>
                    <td>@p.Phone</td>
                    <td>@p.DateAdded</td>
                </tr>
                }
            }
            </tbody>
            <tfoot>
                <tr>
                    <th>Name</th>
                    <th>Surname</th>
                    <th>Email</th>
                    <th>Phone</th>
                    <th>Date Added</th>
                </tr>
            </tfoot>
        </table>
        <div id="pager" style="position: none;">
            <form>
            <img src="@Url.Content("~/Content/images/first.png")" class="first" />
            <img src="@Url.Content("~/Content/images/prev.png")" class="prev" />
            <input type="text" class="pagedisplay" />
            <img src="@Url.Content("~/Content/images/next.png")" class="next" />
            <img src="@Url.Content("~/Content/images/last.png")" class="last" />
            <select class="pagesize">
                <option selected="selected" value="5">5</option>
                <option value="10">10</option>
                <option value="20">20</option>
                <option value="30">30</option>
                <option value="40">40</option>
            </select>
            </form>
        </div>
    </div>
</body>

So, mainly the structure is the same. I have added Razor code to create the table with the data retrieved from ViewBag.People, which was filled in the home controller.

2. CSS Design

The CSS code I've created is:

/* DEMO TABLE */
body {
    font-size: 75%;
    font-family: Verdana, Tahoma, Arial, "Helvetica Neue", Helvetica, Sans-Serif;
    color: #232323;
    background-color: #fff;
}
table { border-spacing: 0; border: 1px solid gray; }
table.tablesorter thead tr .header {
    background-image: url(images/bg.png);
    background-repeat: no-repeat;
    background-position: center right;
    cursor: pointer;
}
table.tablesorter tbody td {
    color: #3D3D3D;
    padding: 4px;
    background-color: #FFF;
    vertical-align: top;
}
table.tablesorter tbody tr.odd td {
    background-color: #F0F0F6;
}
table.tablesorter thead tr .headerSortUp {
    background-image: url(images/asc.png);
}
table.tablesorter thead tr .headerSortDown {
    background-image: url(images/desc.png);
}
table th {
    width: 150px;
    border: 1px outset gray;
    background-color: #3C78B5;
    color: White;
    cursor: pointer;
}
table thead th:hover { background-color: Yellow; color: Black; }
table td { width: 150px; border: 1px solid gray; }

PAGINATION AND SORTING

Now, when everything is ready and we have the data, let's wire up the pagination and sorting functionality.
1. jQuery Scripts Referencing

<link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
<script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.js")" type="text/javascript"></script>
<script src="@Url.Content("~/Scripts/jquery.tablesorter.pager.js")" type="text/javascript"></script>

2. jQuery Sorting and Pagination Script

<script type="text/javascript">
    $(function () {
        $("table.tablesorter").tablesorter({ widthFixed: true, sortList: [[0, 0]] })
        .tablesorterPager({ container: $("#pager"), size: $(".pagesize option:selected").val() });
    });
</script>

So, with only two chained calls, I'm using both the tablesorter and tablesorterPager plugins and passing some options to each.

Options added:

- tablesorter – widthFixed: true – gives the columns a fixed width.
- tablesorter – sortList: [[0,0]] – an array of per-column sorting instructions in the format [[columnIndex, sortDirection], ...], where columnIndex is a zero-based index of your columns left-to-right and sortDirection is 0 for ascending and 1 for descending. A valid argument that sorts ascending first by column 1 and then by column 2 looks like: [[0,0],[1,0]] (source: http://tablesorter.com/docs/).
- tablesorterPager – container: $("#pager") – tells the pager its container, the div with id pager in our case.
- tablesorterPager – size: the default page size; I read the currently selected value from the select list, so if you put selected on any other option in your select list, that number of rows per page becomes the table's default too.

END RESULTS

1. Table once the page is loaded (the default is 5 results per page, automatically sorted by the first column, as sortList specifies)
2. Sorted by Phone descending
3. Pagination changed to 10 items per page
4. Sorted by Phone and Name (use SHIFT to sort on multiple columns)
5. Sorted by Date Added
6. Page 3, 5 items per page

ADDITIONAL ENHANCEMENTS

We can make additional enhancements to the table, such as adding a search box for each column. I will cover this in one of my next blogs. Stay tuned.

DEMO PROJECT

You can download the demo project source code from HERE.

CONCLUSION

Once you finish the demo, run your page and open the source code. You will be amazed at the purity of your code. Client-side pagination can be very useful. One of its benefits is performance, but if you have thousands of rows in your tables, you will get the opposite result. Hence, it is sometimes a good idea to do the pagination on the back-end, and the best compromise is to combine both approaches. I use at most 500 rows on the client side, and once the user reaches the last page we can trigger an AJAX postback that gets the next 500 rows using server-side pagination of the same data (a minimal sketch of such an endpoint follows below). I would recommend the following blog post, which will help you understand how to return paged results from repositories: http://weblogs.asp.net/gunnarpeipman/archive/2010/09/14/returning-paged-results-from-repositories-using-pagedresult-lt-t-gt.aspx

I hope this was a helpful post for you. Wait for my next posts ;). Please do let me know your feedback.

Best Regards,
Hajan
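As a follow-up to the conclusion above, here is a minimal sketch of what such a server-side paging endpoint could look like, reusing GetPeople() from the HomeController shown earlier. The action name and the default page size are illustrative, not part of the original demo project.

using System.Linq;
using System.Web.Mvc;

// A hypothetical action on HomeController: returns one page of people as JSON,
// so the client can request the next chunk once the user reaches the last page.
public ActionResult GetPeoplePage(int page = 1, int pageSize = 500)
{
    var people = GetPeople()                 // in a real app, push the paging into the database query
        .Skip((page - 1) * pageSize)
        .Take(pageSize)
        .ToList();
    return Json(people, JsonRequestBehavior.AllowGet);
}

The client would call this with $.getJSON once the last client-side page is reached, append the returned rows to the table, and let tablesorter re-sort the combined data.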

