Search Results

Search found 35689 results on 1428 pages for 'development mode'.


  • How to change the disclosure style when the user enters edit mode in a UITableView?

    - by R31n4ld0
    Hello, guys. I have a UITableView that, in 'normal' mode, shows a UITableViewCellAccessoryDisclosureIndicator, meaning that if the user taps the row, another list is shown, as the HIG says: "Disclosure indicator. When this element is present, users know they can tap anywhere in the row to see the next level in the hierarchy or the choices associated with the list item. Use a disclosure indicator in a row when selecting the row results in the display of another list. Don’t use a disclosure indicator to reveal detailed information about the list item; instead, use a detail disclosure button for this purpose." When the user taps the edit button in the top bar of the UITableView, I think I have to change the disclosure style, because if the user taps it, a view for changing the information of the current row is shown (see the bold line above), again as the HIG says: "Detail disclosure button. Users tap this element to see detailed information about the list item. (Note that you can use this element in views other than table views, to reveal additional details about something; see “Detail Disclosure Buttons” for more information.) In a table view, use a detail disclosure button in a row to display details about the list item. Note that the detail disclosure button, unlike the disclosure indicator, can perform an action that is separate from the selection of the row. For example, in Phone Favorites, tapping the row initiates a call to the contact; tapping the detail disclosure button in the row reveals more information about the contact." Have I misunderstood the HIG, or do I really have to change the disclosure style in the edit mode of a UITableView? If yes, how can I intercept the edit mode when the user taps the Edit button? Thanks in advance.
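A minimal sketch of one way to handle this (hedged: it assumes a UITableViewController subclass, and the data-source boilerplate is abbreviated). UITableViewCell has a separate editingAccessoryType property that the table swaps in automatically while in edit mode, so often no interception is needed at all; if you do want to react to the Edit button, overriding setEditing:animated: is the usual hook:

    // Configure both accessories up front; the table shows accessoryType
    // normally and editingAccessoryType while editing.
    - (UITableViewCell *)tableView:(UITableView *)tableView
             cellForRowAtIndexPath:(NSIndexPath *)indexPath
    {
        static NSString *cellId = @"Cell";
        UITableViewCell *cell = [tableView dequeueReusableCellWithIdentifier:cellId];
        if (cell == nil) {
            cell = [[[UITableViewCell alloc] initWithStyle:UITableViewCellStyleDefault
                                           reuseIdentifier:cellId] autorelease];
        }
        // Normal mode: tap the row to drill into the next list.
        cell.accessoryType = UITableViewCellAccessoryDisclosureIndicator;
        // Edit mode: show the detail disclosure button instead.
        cell.editingAccessoryType = UITableViewCellAccessoryDetailDisclosureButton;
        return cell;
    }

    // Called when the user taps Edit/Done, if you need to intercept the change.
    - (void)setEditing:(BOOL)editing animated:(BOOL)animated
    {
        [super setEditing:editing animated:animated];
        // React to the mode change here, e.g. reload the visible rows.
    }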

    Read the article

  • In C/C++ mode in Emacs, change face of code in #if 0...#endif block to comment face

    - by pogopop77
    I'm trying to add functionality found in some other code editors to my Emacs configuration, whereby C/C++ code within #if 0...#endif blocks is automatically set to the comment face/font. Based on my testing, cpp-highlight-mode does something like what I want, but requires user action. It seems like tying into the font-lock functionality is the correct option to make the behavior automatic. I have successfully followed examples in the GNU documentation to change the face of single-line regular expressions. For example: (add-hook 'c-mode-common-hook (lambda () (font-lock-add-keywords nil '(("\\<\\(FIXME\\|TODO\\|HACK\\|fixme\\|todo\\|hack\\)" 1 font-lock-warning-face t))))) works fine to highlight debug related keywords anywhere in a file. However, I am having problems matching #if 0...#endif as a multiline regular expression. I found some useful information in this post (How to compose region like ""), that suggested that Emacs must be told specifically to allow for multiline matches. But this code: (add-hook 'c-mode-common-hook (lambda () '(progn (setq font-lock-multiline t) (font-lock-add-keywords nil '(("#if 0\\(.\\|\n\\)*?#endif" 1 font-lock-comment-face t)))))) still does not work for me. Perhaps my regular expression is wrong (though it appears to work using M-x re-builder), I've messed up my syntax, or I'm following the wrong approach entirely. I'm using Aquamacs 2.1 (which is based on GNU Emacs 23.2.50.1) on OS X 10.6.5, if that makes a difference. Any assistance would be appreciated!
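A hedged sketch of a likely fix: in the snippet above, the lambda body is '(progn …) – a quoted list – so it is returned as data instead of being evaluated, and font-lock-add-keywords never runs. Dropping the quote (and using match subexpression 0 so the whole block gets the face) gives:

    ;; Fontify #if 0 ... #endif blocks with the comment face.
    ;; font-lock-multiline is needed so the multiline match survives refontification.
    (add-hook 'c-mode-common-hook
              (lambda ()
                (setq font-lock-multiline t)
                (font-lock-add-keywords
                 nil
                 '(("#if 0\\(.\\|\n\\)*?#endif" 0 font-lock-comment-face t)))))

Even with this, font-lock may only pick up the block reliably when it rescans the surrounding region (e.g. after M-x font-lock-fontify-buffer), since multiline matching in font-lock is inherently best-effort.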

    Read the article

  • How to R/W the hard disk when the CPU is in Protected Mode?

    - by smwikipedia
    I am doing some OS experiments. Until now, all my code has used real-mode BIOS interrupts to manipulate the hard disk and floppy. But once my code enables the CPU's protected mode, the real-mode BIOS interrupt service routines are no longer available. How can I read/write the hard disk and floppy then? I have a feeling that I need to write some hardware drivers now. Am I right? Is this why an OS is so difficult to develop? I know that hardware devices are all controlled by reading from and writing to certain control or data registers. For example, I know that the Command Block Registers of the hard disk range from 0x1F0 to 0x1F7. But I am wondering: are the register addresses of so many different devices the same on every PC platform, or do I have to detect them before using them? And how do I detect them? For any responses I present my deep appreciation.
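Yes – past the BIOS you are writing a driver. A minimal sketch of the read path (hedged assumptions: the legacy primary ATA bus at the standard 0x1F0–0x1F7 ports, a 28-bit LBA polled PIO transfer, and GCC-style inline assembly for the port I/O helpers; real controllers should be discovered via PCI configuration space rather than assumed):

    #include <stdint.h>

    /* Port I/O helpers built on the x86 in/out instructions. */
    static inline void outb(uint16_t port, uint8_t val) {
        __asm__ volatile ("outb %0, %1" : : "a"(val), "Nd"(port));
    }
    static inline uint8_t inb(uint16_t port) {
        uint8_t v;
        __asm__ volatile ("inb %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }
    static inline uint16_t inw(uint16_t port) {
        uint16_t v;
        __asm__ volatile ("inw %1, %0" : "=a"(v) : "Nd"(port));
        return v;
    }

    /* Read one 512-byte sector via polled PIO from the primary-bus master drive. */
    void ata_pio_read_sector(uint32_t lba, uint16_t *buf) {
        while (inb(0x1F7) & 0x80) { }               /* wait while BSY */
        outb(0x1F6, 0xE0 | ((lba >> 24) & 0x0F));   /* drive 0, LBA mode */
        outb(0x1F2, 1);                             /* sector count = 1 */
        outb(0x1F3, lba & 0xFF);                    /* LBA bits 0-7 */
        outb(0x1F4, (lba >> 8) & 0xFF);             /* LBA bits 8-15 */
        outb(0x1F5, (lba >> 16) & 0xFF);            /* LBA bits 16-23 */
        outb(0x1F7, 0x20);                          /* READ SECTORS command */
        while (!(inb(0x1F7) & 0x08)) { }            /* wait for DRQ */
        for (int i = 0; i < 256; i++)               /* 256 words = 512 bytes */
            buf[i] = inw(0x1F0);
    }

The 0x1F0–0x1F7 (and secondary 0x170–0x177) addresses are standardized legacy values on PCs, which is why so much boot-time code hard-codes them; anything beyond that (extra controllers, AHCI, etc.) is found by enumerating the PCI bus.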

    Read the article

  • Web Browser Control – Specifying the IE Version

    - by Rick Strahl
    I use the Internet Explorer Web Browser Control in a lot of my applications to display document-type layout. HTML happens to be one of the most common document formats, and displaying data in this format – even in desktop applications – is often way easier than using normal desktop technologies.

One issue the Web Browser Control has is that it's perpetually stuck in IE 7 rendering mode by default. Even though IE 8 and now 9 have significantly upgraded the IE rendering engine to be more CSS and HTML compliant, by default the Web Browser control will have none of it. IE 9 in particular – with its much improved CSS support and basic HTML 5 support – is a big improvement, and even though the IE control uses some of IE's internal rendering technology it's still stuck in the old IE 7 rendering by default. This applies whether you're using the Web Browser control in a WPF application, a WinForms app, or a FoxPro or VB classic application using the ActiveX control. Behind the scenes all these UI platforms use the same COM interfaces, and so you're stuck with those same rules.

Rendering Challenged

To see what I'm talking about, here are two screen shots rendering an HTML 5 doctype page that includes some CSS 3 functionality – rounded corners and border shadows – from an earlier post. One uses IE 9 as a standalone browser, and one uses a simple WPF form that includes the Web Browser control.

IE 9 Browser:

Web Browser control in a WPF form:

The IE 9 page displays this HTML correctly – you see the rounded corners and shadow displayed. Obviously the latter rendering, using the Web Browser control in a WPF application, is a bit lacking. Not only are the new CSS features missing, but the page also renders in Internet Explorer's quirks mode, so all the margins, padding etc. behave differently by default, even though there's a CSS reset applied on this page. If you're building an application that intends to use the Web Browser control for a live preview of some HTML, this is clearly undesirable.

Feature Delegation via Registry Hacks

Fortunately, starting with Internet Explorer 8 there's a fix for this problem via a registry setting. You can add a registry value that specifies which rendering mode and version of IE should be used by a given application. These are not global, mind you – they have to be enabled for each application individually. There are two different sets of keys for 32 bit and 64 bit applications.

32 bit:
HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: yourapplication.exe

64 bit:
HKEY_LOCAL_MACHINE\SOFTWARE\Wow6432Node\Microsoft\Internet Explorer\MAIN\FeatureControl\FEATURE_BROWSER_EMULATION
Value Key: yourapplication.exe

The value to set this key to (taken from MSDN here) is one of the following decimal values:

9999 (0x270F) – Internet Explorer 9. Webpages are displayed in IE9 Standards mode, regardless of the !DOCTYPE directive.
9000 (0x2328) – Internet Explorer 9. Webpages containing standards-based !DOCTYPE directives are displayed in IE9 mode.
8888 (0x22B8) – Webpages are displayed in IE8 Standards mode, regardless of the !DOCTYPE directive.
8000 (0x1F40) – Webpages containing standards-based !DOCTYPE directives are displayed in IE8 mode.
7000 (0x1B58) – Webpages containing standards-based !DOCTYPE directives are displayed in IE7 Standards mode.
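If you prefer to set the value from code instead of by hand, here's a minimal C# sketch (hedged: MSDN documents the feature key under HKEY_CURRENT_USER as well as HKLM, and HKCU avoids needing admin rights – verify for your scenario):

    using System.Diagnostics;
    using System.IO;
    using Microsoft.Win32;

    static class BrowserEmulation
    {
        public static void EnableIe9Rendering()
        {
            // The value name is the bare executable name, e.g. "wwhelp.exe".
            string exeName = Path.GetFileName(
                Process.GetCurrentProcess().MainModule.FileName);

            using (RegistryKey key = Registry.CurrentUser.CreateSubKey(
                @"Software\Microsoft\Internet Explorer\Main\FeatureControl\FEATURE_BROWSER_EMULATION"))
            {
                // 9000 = IE9 mode for pages with standards-based !DOCTYPEs.
                key.SetValue(exeName, 9000, RegistryValueKind.DWord);
            }
        }
    }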
The added key looks something like this in the Registry Editor:

With this in place my Html Help Builder application, which has wwhelp.exe as its main executable, now works with HTML 5 and CSS 3 documents in the same way that Internet Explorer 9 does. Incidentally I accidentally added an ‘empty’ DWORD value of 0 to my EXE name and that worked as well, giving me IE 9 rendering. Although not documented, I suspect 0 (or an invalid value) will default to the installed browser. I don’t have a good way to test this, but if somebody could try this with IE 8 installed that would be great: What happens when setting 9000 with IE 8 installed? What happens when setting 0 with IE 8 installed?

Don’t forget to add Keys for Host Environments

If you’re developing your application in Visual Studio and you run the debugger, you may find that your application is still not rendering right, but if you run the actual generated EXE from Explorer or the OS command prompt it works. That’s because when you run the debugger in Visual Studio it wraps your application into a debugging host container. For this reason you might want to also add another registry key for yourapp.vshost.exe on your development machine. If you’re developing in Visual FoxPro, make sure you add a key for vfp9.exe to see the rendering adjustments in the Visual FoxPro development environment.

Cleaner HTML – no more HTML mangling!

There are a number of additional benefits to setting the Web Browser control's rendering to the IE 9 engine (or even the IE 8 engine) beyond the obvious rendering functionality. IE 9 actually returns your HTML in something that resembles the original HTML formatting, as opposed to the IE 7 default, which mangled the original HTML content. If you do the following in the WPF application:

private void button2_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    MessageBox.Show(doc.body.outerHtml);
}

you get different output depending on the active rendering mode. With the default IE 7 rendering you get:

<BODY><DIV> <H1>Rounded Corners and Shadows - Creating Dialogs in CSS</H1> <DIV class=toolbarcontainer><A class=hoverbutton href="./"><IMG src="../../css/images/home.gif"> Home</A> <A class=hoverbutton href="RoundedCornersAndShadows.htm"><IMG src="../../css/images/refresh.gif"> Refresh</A> </DIV> <DIV class=containercontent> <FIELDSET><LEGEND>Plain Box</LEGEND><!-- Simple Box with rounded corners and shadow --> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox">Simple Rounded Corner Box. </DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Box with Header</LEGEND> <DIV style="BORDER-BOTTOM: steelblue 2px solid; BORDER-LEFT: steelblue 2px solid; WIDTH: 550px; BORDER-TOP: steelblue 2px solid; BORDER-RIGHT: steelblue 2px solid" class="roundbox boxshadow"> <DIV class="gridheaderleft roundbox-top">Box with a Header</DIV> <DIV style="BACKGROUND: khaki" class="boxcontenttext roundbox-bottom">Simple Rounded Corner Box. 
</DIV></DIV></FIELDSET> <FIELDSET><LEGEND>Dialog Style Window</LEGEND> <DIV style="POSITION: relative; WIDTH: 450px" id=divDialog class="dialog boxshadow" jQuery16107208195684204002="2"> <DIV style="POSITION: relative" class=dialog-header> <DIV class=closebox></DIV>User Sign-in <DIV class=closebox jQuery16107208195684204002="3"></DIV></DIV> <DIV class=descriptionheader>This dialog is draggable and closable</DIV> <DIV class=dialog-content><LABEL>Username:</LABEL> <INPUT name=txtUsername value=" "> <LABEL>Password</LABEL> <INPUT name=txtPassword value=" "> <HR> <INPUT id=btnLogin value=Login type=button> </DIV> <DIV class=dialog-statusbar>Ready</DIV></DIV></FIELDSET> </DIV> <SCRIPT type=text/javascript>     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </SCRIPT> </DIV></BODY> Now lest you think I’m out of my mind and create complete whacky HTML rooted in the last century, here’s the IE 9 rendering mode output which looks a heck of a lot cleaner and a lot closer to my original HTML of the page I’m accessing: <body> <div>         <h1>Rounded Corners and Shadows - Creating Dialogs in CSS</h1>     <div class="toolbarcontainer">         <a class="hoverbutton" href="./"> <img src="../../css/images/home.gif"> Home</a>         <a class="hoverbutton" href="RoundedCornersAndShadows.htm"> <img src="../../css/images/refresh.gif"> Refresh</a>     </div>         <div class="containercontent">     <fieldset>         <legend>Plain Box</legend>                <!-- Simple Box with rounded corners and shadow -->             <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                              <div style="background: khaki;" class="boxcontenttext roundbox">                     Simple Rounded Corner Box.                 </div>             </div>     </fieldset>     <fieldset>         <legend>Box with Header</legend>         <div style="border: 2px solid steelblue; width: 550px;" class="roundbox boxshadow">                          <div class="gridheaderleft roundbox-top">Box with a Header</div>             <div style="background: khaki;" class="boxcontenttext roundbox-bottom">                 Simple Rounded Corner Box.             
</div>         </div>     </fieldset>       <fieldset>         <legend>Dialog Style Window</legend>         <div style="width: 450px; position: relative;" id="divDialog" class="dialog boxshadow">             <div style="position: relative;" class="dialog-header">                 <div class="closebox"></div>                 User Sign-in             <div class="closebox"></div></div>             <div class="descriptionheader">This dialog is draggable and closable</div>                    <div class="dialog-content">                             <label>Username:</label>                 <input name="txtUsername" value=" " type="text">                 <label>Password</label>                 <input name="txtPassword" value=" " type="text">                                 <hr/>                                 <input id="btnLogin" value="Login" type="button">                        </div>             <div class="dialog-statusbar">Ready</div>         </div>     </fieldset>     </div> <script type="text/javascript">     $(document).ready(function () {         $("#divDialog")             .draggable({ handle: ".dialog-header" })             .closable({ handle: ".dialog-header",                 closeHandler: function () {                     alert("Window about to be closed.");                     return true;  // true closes - false leaves open                 }             });     }); </script>        </div> </body>

IOW, in IE9 rendering mode the output is much closer (but not identical) to the original HTML from the page on the Web that we’re reading from. As a side note: unfortunately, the browser feature emulation can't be applied against the Html Help (CHM) engine in Windows, which uses the Web Browser control (or its COM interfaces anyway) to render Html Help content. I tried setting up hh.exe, which is the help viewer, to use IE 9 rendering, but a help file generated with CSS3 features will simply show in IE 7 mode. Bummer – this would have been a nice quick fix to allow help content served from CHM files to look better.

HTML Editing leaves HTML formatting intact

In the same vein, if you do any inline HTML editing in the control by setting content to be editable, IE 9’s control does a much more reasonable job of creating usable and somewhat valid HTML. It also leaves the original content alone other than the text you are editing or adding. No longer is the HTML output stripped of excess spaces and reformatted in IE's format. So if I do:

private void button3_Click(object sender, RoutedEventArgs e)
{
    dynamic doc = this.webBrowser.Document;
    doc.body.contentEditable = true;
}

and then make some changes to the document by typing into it using IE 9 mode, the document formatting stays intact and only the affected content is modified. The created HTML is reasonably clean (although it does lack proper XHTML formatting for things like <br/> <hr/>). This is very different from IE 7 mode, which mangled the HTML as soon as the page was loaded into the control. Any editing you did stripped out all white space and lost all of your existing XHTML formatting. In IE 9 mode at least *most* of your original formatting stays intact. This is huge! In Html Help Builder I have supported HTML editing for a long time, but the HTML mangling by the Web Browser control made it very difficult to edit the HTML later. Previously IE would mangle the HTML by stripping out spaces, upper-casing all tags and converting many XHTML-safe tags to its HTML 3 tags. 
Now IE leaves most of my document alone while editing, and creates cleaner and more compliant markup (with the exception of self-closing elements like BR/HR). The end result is that I now have HTML editing in place that's much cleaner and actually capable of being manually edited.

Caveats, Caveats, Caveats

It wouldn't be Internet Explorer if there weren't some major compatibility issues involved in using these various browser-version interactions. The biggest thing I ran into is that there are odd differences in some of the COM interfaces and what they return. I specifically ran into a problem with the document.selection.createRange() function, which with IE 7 compatibility returns the expected text range object. When running in IE 8 or IE 9 mode, however, I could not retrieve a valid text range with this code, where loEdit is the WebBrowser control:

loRange = loEdit.document.selection.CreateRange()

The loRange object returned (here in FoxPro) had a length property of 0, but none of the other properties of the TextRange or TextRangeCollection objects were available. I figured this was due to some changed security settings, but even after elevating the Intranet Security Zone and mucking with the other browser feature flags pertaining to security I had no luck. In the end I relented and used a JavaScript function in my editor document that returns a selection range object:

function getselectionrange() {
    var range = document.selection.createRange();
    return range;
}

and call that JavaScript function from my host application's code:

*** Use a function in the document to get around HTML Editing issues
loRange = loEdit.document.parentWindow.getselectionrange(.f.)

and that does work correctly. This wasn't a big deal, as I'm already loading a support script file into the editor page, so all I had to do was add the function to this existing script file. You can find out more about how to call script code in the Web Browser control from a host application in a previous post of mine. IE 8 and 9 also clamp down the security environment a little more than the default IE 7 control, so there may be other issues you run into. Other than the createRange() problem above I haven't seen anything else breaking in my code so far, and that's encouraging, since it uses a lot of HTML document manipulation for the custom editor I've created (and would love to replace – any PROFESSIONAL alternatives, anybody?).

Registry Key Installation for your Application

It's important to remember that this registry setting is made per application, so most likely this is something you want to set up with your installer. Also remember that 32 and 64 bit applications require separate settings in the registry, so if you're creating your installer you most likely will want to set both keys preemptively for your application. I use Tarma Installer for all of my application installs, and in Tarma I configure registry keys for both, with a flag to only install the latter key group in the 64 bit version. Because this setting is application specific you unfortunately have to do this for every application you install, but it also means that you can safely configure this setting in the registry, because it is, after all, only applied to your application. Another problem with installer-based setup is version detection: if IE 8 is installed I'd want 8000 for the value; if IE 9 is installed I want 9000. I can do this easily in code, but in the installer this is much more difficult.
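For the in-code half, a hedged sketch (the "Version" value under the Internet Explorer key is the long-standing location; later IE releases reportedly also write "svcVersion" – treat the exact value names as assumptions to verify):

    using Microsoft.Win32;

    static int GetInstalledIeMajorVersion()
    {
        using (RegistryKey key = Registry.LocalMachine.OpenSubKey(
            @"Software\Microsoft\Internet Explorer"))
        {
            if (key == null)
                return 0;
            string version = (key.GetValue("svcVersion") as string)
                             ?? (key.GetValue("Version") as string);
            if (version == null)
                return 0;
            int major = 0;
            int.TryParse(version.Split('.')[0], out major);
            return major;   // e.g. 9 -> write 9000, 8 -> write 8000
        }
    }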
I don't have a good solution for the installer side at the moment, but given that the app works in IE 7 mode now, IE 9 mode is just a bonus. If IE 9 is not installed and 9000 is used, the default rendering will remain in use. It sure would be nice if we could specify the IE rendering mode as a property, but I suspect the ActiveX container has to know which version to load before it loads, and once loaded it can only host a single version of IE. This would account for this annoying application-level configuration…

Summary

The registry feature emulation has been available for quite some time, but I just found out about it today and started experimenting with it. I'm stoked to see that this is available, as I'd pretty much given up on ever seeing any better rendering in the Web Browser control. Now at least my apps can take advantage of newer HTML features. Now if we could only get better HTML Editing support somehow <snicker>… ah, can't have everything.

© Rick Strahl, West Wind Technologies, 2005-2011. Posted in .NET, FoxPro, Windows

    Read the article

  • Do You Develop Your PL/SQL Directly in the Database?

    - by thatjeffsmith
    I know this sounds like a REALLY weird question for many of you. Let me make one thing clear right away though: I am NOT talking about creating and replacing PL/SQL objects directly in a production environment. Do we really need to talk about developers in production again? No, what I am talking about is a developer doing their work from start to finish in a development database. These are generally available to a development team for building the next and greatest version of your databases and database applications. And of course you are using a third-party source control system, right? Last week I was in Tampa, FL presenting at the monthly Suncoast Oracle User's Group meeting. Had a wonderful time, great questions and back-and-forth. My favorite heckler was there, @oraclenered, AKA Chet Justice. I was in the middle of talking about how it's better to do your PL/SQL work in the Procedure Editor when Chet pipes up: "Don't do it that way, that's wrong. Just press play to edit the PL/SQL directly in the database." Or something along those lines. I didn't get what the heck he was talking about. I had been showing how the Procedure Editor gives you much better feedback and support when working with PL/SQL. After a few back-and-forths I got to what Chet's main objection was, and again I'm going to paraphrase: "You should develop offline in your SQL worksheet. Don't do anything in the database until it's done." I didn't understand. Were developers expected to be able to internalize and mentally model the PL/SQL engine, see where their errors were, etc., in these offline scripts? No, please give Chet more credit than that.

What is the ideal Oracle Development Environment?

If I were back in the 'real world' of database development, I would do all of my development outside of the 'dev' instance. My development process looks a little something like this:

  1. Do I have a program that already does something like this – copy and paste
  2. Has some smart person already written something like this – copy and paste
  3. Start typing in the white-screen-of-panic and bungle along until I get something that half-works
  4. Tweak, debug, test until I have fooled my subconscious into thinking that it's 'good'

As you might understand, I don't want my co-workers to see the evolution of my code. It would seriously freak them out and I probably wouldn't have a job anymore (don't remind me that I already worked myself out of development.) So here's what I like to do:

Run a Local Instance of Oracle on my Machine and Develop My Code Privately

I take a copy of development – that's what source control is for after all – and run it where no one else can see it. I now get to be my own DBA. If I need a trace – no problem. If I want to run an ASH report, no worries. If I need to create a directory or run some DataPump jobs, that's all on me. Now when I get my code 'up to snuff,' then I will check it into source control and compile it into the official development instance. So my teammates suddenly go from seeing no program, to a mostly complete program. Is this right? If not, it doesn't seem wrong to me. And after talking to Chet in the car on the way to the local cigar bar, it seems that he's of the same opinion.

So what's so wrong with coding directly into a development instance?

I think 'wrong' is a bit strong here. But there are a few pitfalls that you might want to look out for. A few come to mind – and I'm sure Chet could add many more as my memory fails me at the moment.
But here goes:

  - The development instance isn't properly backed up – would hate to lose that work
  - Development is wiped once a week and copied over from Prod – don't laugh
  - Someone clobbers your code
  - You accidentally-on-purpose clobber someone else's code
  - The more developers you have in a single fish pond, the greater the chance something 'bad' will happen

This Isn't One of Those Posts Where I Tell You What You Should Be Doing

I realize many shops won't be open to allowing developers to stage their own local copies of Oracle. But I would at least be aware that many of your developers are probably doing this anyway – with or without your tacit approval.

SQL Developer can do local file tracking, but you should be using Source Control too!

I will say that I think it's imperative that you control your source code outside the database, even if your development team is comprised of a single developer. Store your source code in a file, and control that file in something like Subversion. You would be shocked at the number of teams that do not use a source control system. I know I continue to be shocked no matter how many times I meet another team running by the seat of their pants. I'd love to hear how your development process works. And of course I want to know how SQL Developer and the rest of our tools can better support your processes. And one last thing: if you want a fun and interactive presentation experience, be sure to have Chet in the room.

    Read the article

  • Microsoft Introduces WebMatrix

    - by Rick Strahl
    originally published in CoDe Magazine Editorial

Microsoft recently released the first CTP of a new development environment called WebMatrix, which, along with some of its supporting technologies, is squarely aimed at making the Microsoft Web Platform more approachable for first-time developers and hobbyists. But in the process, it also provides some updated technologies that can make life easier for existing .NET developers. Let's face it: ASP.NET development isn't exactly trivial unless you already have a fair bit of familiarity with sophisticated development practices. Stick a non-developer in front of Visual Studio .NET or even the Visual Web Developer Express edition and it's not likely that the person in front of the screen will be very productive or feel inspired. Yet other technologies like PHP and even classic ASP did provide the ability for non-developers and hobbyists to become reasonably proficient in creating basic web content quickly and efficiently. WebMatrix appears to be Microsoft's attempt to bring back some of that simplicity with a number of technologies and tools. The key is to provide a friendly and fully self-contained development environment that provides all the tools needed to build an application in one place, as well as tools that allow publishing of content and databases easily to the web server. WebMatrix is made up of several components and technologies:

IIS Developer Express

IIS Developer Express is a new, self-contained development web server that is fully compatible with IIS 7.5 and based on the same codebase that IIS 7.5 uses. This new development server replaces the much less compatible Cassini web server that's been used in Visual Studio and the Express editions. IIS Express addresses a few shortcomings of the Cassini server, such as the inability to serve custom ISAPI extensions (things like PHP or classic ASP, for example), as well as not supporting advanced authentication. IIS Developer Express provides most of the IIS 7.5 feature set, giving much better compatibility between development and live deployment scenarios.

SQL Server Compact 4.0

Database access is a key component for most web-driven applications, but on the Microsoft stack this has mostly meant you have to use SQL Server or SQL Server Express. SQL Server Compact is not new – it's been around for a few years, but it's been severely hobbled in the past by terrible tool support and the inability to support more than a single connection, in Microsoft's attempt to avoid losing SQL Server licensing. The new release of SQL Server Compact 4.0 supports multiple connections, and you can run it in ASP.NET web applications simply by installing an assembly into the bin folder of the web application. In effect, you don't have to install a special system configuration to run SQL Compact, as it is a drop-in database engine: copy the small assembly into your BIN folder (or load it from the GAC if installed fully), create a connection string against a local file-based database file, and then start firing SQL requests. Additionally, WebMatrix includes nice tools to edit the database tables and files, along with tools to easily upsize (and hopefully downsize in the future) to full SQL Server. This is a big win, pending compatibility and performance limits. In my simple testing the data engine performed well enough for small data sets. This is not only useful for web applications, but also for desktop applications for which a fully installed SQL engine like SQL Server would be overkill.
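As a hedged sketch of that drop-in usage (assuming System.Data.SqlServerCe.dll from SQL CE 4.0 in the bin folder, and a hypothetical FirstApp.sdf file containing a Customers table):

    using System;
    using System.Data.SqlServerCe;

    class SqlCeDemo
    {
        static void Main()
        {
            // A file-based database: no service or machine-wide install required.
            using (SqlCeConnection conn = new SqlCeConnection(
                @"Data Source=|DataDirectory|\FirstApp.sdf"))
            {
                conn.Open();
                using (SqlCeCommand cmd = new SqlCeCommand(
                    "SELECT COUNT(*) FROM Customers", conn))
                {
                    int count = (int)cmd.ExecuteScalar();
                    Console.WriteLine("Customers: {0}", count);
                }
            }
        }
    }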
Having a local data store in those applications that can potentially be accessed by multiple users is a welcome feature.

ASP.NET Razor View Engine

What? Yet another native ASP.NET view engine? We already have Web Forms and various different flavors of using that view engine with Web Forms and MVC. Do we really need another? Microsoft thinks so, and Razor is an implementation of a lightweight, script-only view engine. Unlike the Web Forms view engine, Razor works only with inline code, snippets, and markup; therefore, it is more in line with current thinking of what a view engine should represent. There's no support for a "page model" or any of the other Web Forms features of the full-page framework, but just a lightweight scripting engine that works with plain markup plus embedded expressions and code. The markup syntax for Razor is geared for minimal typing, plus some progressive detection of where a script block/expression starts and ends. This results in a much leaner syntax than the typical ASP.NET Web Forms alligator (<% %>) tags. Razor uses the @ sign plus standard C# (or Visual Basic) block syntax to delineate code snippets and expressions. Here's a very simple example of what Razor markup looks like, along with some comment annotations:

<!DOCTYPE html>
<html>
    <head>
        <title></title>
    </head>
    <body>
    <h1>Razor Test</h1>

    <!-- simple expressions -->
    @DateTime.Now
    <hr />
    <!-- method expressions -->
    @DateTime.Now.ToString("T")

    <!-- code blocks -->
    @{
        List<string> names = new List<string>();
        names.Add("Rick");
        names.Add("Markus");
        names.Add("Claudio");
        names.Add("Kevin");
    }

    <!-- structured block statements -->
    <ul>
    @foreach(string name in names){
            <li>@name</li>
    }
    </ul>

    <!-- Conditional code -->
    @if(true) {
        <!-- Literal Text embedding in code -->
        <text>
        true
        </text>;
    }
    else
    {
        <!-- Literal Text embedding in code -->
        <text>
        false
        </text>;
    }
    </body>
</html>

Like the Web Forms view engine, Razor parses pages into code, and then executes that run-time compiled code. Effectively a "page" becomes a code file, with markup becoming literal text written into the Response stream, code snippets becoming raw code, and expressions being written out with Response.Write(). The code generated from Razor doesn't look much different from similar Web Forms code that only uses script tags; so although the syntax may look different, the operational model is fairly similar to the Web Forms engine, minus the overhead of the large Page object model. However, there are differences: Razor pages are based on a new base class, Microsoft.WebPages.WebPage, which is hosted in the Microsoft.WebPages assembly that houses all the Razor engine parsing and processing logic. Browsing through the assembly (in the generated ASP.NET Temporary Files folder or GAC) will give you a good idea of the functionality that Razor provides. If you look closely, a lot of the feature set matches ASP.NET MVC's view implementation as well as many of the helper classes found in MVC. It's not hard to guess the motivation for this sort of view engine: for beginning developers the simple markup syntax is easier to work with, although you obviously still need some understanding of the .NET Framework in order to create dynamic content.
The syntax is easier to read and grok, and much shorter to type than ASP.NET alligator tags (<% %>), and it's also aesthetically easier to understand what's happening in the markup code. Razor also is a better fit for Microsoft's vision of ASP.NET MVC: it's a new view engine without the baggage of Web Forms attached to it. The engine is more lightweight since it doesn't carry all the features and object model of Web Forms with it, and it can be instantiated directly outside of the HTTP environment, which has been rather tricky to do for the Web Forms view engine. Having a standalone script parser is a huge win for other applications as well – it makes it much easier to create script- or meta-driven output generators for many types of applications, from code/screen generators, to simple form letters, to data merging applications with user customizability. For me personally this is a very useful side effect, and who knows, maybe Microsoft will actually standardize their scripting engines (die T4 die!) on this engine. Razor also better fits the "view-based" approach, where the view is supposed to be mostly a visual representation that doesn't hold much, if any, code. While you can still use code, the code you do write has to be self-contained. Overall I wouldn't be surprised if Razor becomes the new standard view engine for MVC in the future – and in fact there have been announcements recently that Razor will become the default script engine in ASP.NET MVC 3.0. Razor can also be used in existing Web Forms and MVC applications, although that's not working currently unless you manually configure the script mappings and add the appropriate assemblies. It's possible to do it, but it's probably better to wait until Microsoft releases official support for Razor scripts in Visual Studio. Once that happens, you can simply drop .cshtml and .vbhtml pages into an existing ASP.NET project and they will work side by side with classic ASP.NET pages.

WebMatrix Development Environment

To tie all three of these technologies together, Microsoft is shipping WebMatrix with an integrated development environment. An integrated gallery manager makes it easy to download and load existing projects, and then extend them with custom functionality. It seems to be a prominent goal to provide community-oriented content that can act as a starting point, be it via custom templates or a complete standard application. The IDE includes a project manager that works with a single project and provides an integrated IDE/editor for editing the .cshtml and .vbhtml pages. A run button allows you to quickly run pages in the project manager in a variety of browsers. There's no debugging support for code at this time. Note that Razor pages don't require explicit compilation, so making a change, saving, and then refreshing your page in the browser is all that's needed to see changes while testing an application locally. It's essentially using the auto-compiling Web Project that was introduced with .NET 2.0. All code is compiled at run time into dynamically created assemblies in the ASP.NET temp folder. WebMatrix also has PHP editing support with syntax highlighting. You can load various PHP-based applications from the WebMatrix Web Gallery directly into the IDE. Most of the Web Gallery applications are ready to install and run without further configuration, with wizards taking you through installation of tools, dependencies, and configuration of the database as needed.
WebMatrix leverages the Web Platform Installer to pull the pieces down from websites in a tight integration of tools that worked nicely for the four or five applications I tried it out on. Click a couple of check boxes, fill in a few simple configuration options, and you end up with a running application that's ready to be customized. Nice! You can easily deploy completed applications via WebDeploy (to an IIS server) or FTP directly from within the development environment. The deploy tool can also handle automatically uploading and installing the database and all related assemblies required, making deployment a simple one-click install step.

Simplified Database Access

The IDE contains a database editor that can edit SQL Compact and SQL Server databases. There is also a Database helper class that facilitates database access by providing easy-to-use, high-level query execution and iteration methods:

@{
    var db = Database.OpenFile("FirstApp.sdf");
    string sql = "select * from customers where Id > @0";
}
<ul>
@foreach(var row in db.Query(sql,1)){
    <li>@row.FirstName @row.LastName</li>
}
</ul>

The Query function takes a SQL statement plus any number of positional (@0, @1, etc.) SQL parameters passed as simple values. The result is returned as a collection of rows, each of which is a row object with dynamic properties for each of the columns, giving easy (though untyped) access to each of the fields. Likewise, Execute and ExecuteNonQuery allow execution of more complex queries using similar parameter-passing schemes. Note that these helpers use string-based queries rather than LINQ or Entity Framework's strongly typed LINQ queries. While this may seem like a step back, it's also in line with the expectations of non-.NET script developers, who are quite used to writing and using SQL strings in code rather than OR/M frameworks. The only question is why something like this wasn't included in .NET from the beginning, instead of making developers build custom implementations of these basic building blocks. The implementation looks a lot like a DataTable-style data access mechanism, but to be fair, this is a common approach in scripting languages. This type of syntax, which uses simple, static data-object methods to perform simple data tasks with one line of code, is common in scripting languages and is a good match for folks working in PHP/Python, etc. It seems Microsoft has taken great advantage of .NET 4.0's dynamic typing to provide this sort of interface for row iteration, where each row has properties for each field. FWIW, all the examples demonstrate using local SQL Compact files – I was unable to get a SQL Server connection string to work with the Database class (the connection string wasn't accepted). However, since the code in the page is still plain old .NET, you can easily use standard ADO.NET code, or even LINQ or Entity Framework models created outside of WebMatrix in separate assemblies, as required.

The good, the bad, the obnoxious – it's still .NET

The beauty (or curse, depending on how you look at it :)) of Razor and the compilation model is that, behind it all, it's still .NET. Although the syntax may look foreign, it's still all .NET behind the scenes. You can easily access existing tools, helpers, and utilities simply by adding them to the project as references or to the bin folder. Razor automatically recognizes any assembly reference from assemblies in the bin folder.
In the default configuration, Microsoft provides a host of helper functions in a Microsoft.WebPages assembly (check it out in the ASP.NET temp folder for your application), which includes a set of HTML Helpers. If you've used ASP.NET MVC before, a lot of the helpers should look familiar. Documentation at the moment is sketchy – there's a very rough API reference you can check out here: http://www.asp.net/webmatrix/tutorials/asp-net-web-pages-api-reference

Who needs WebMatrix? Uhm… good question

Clearly Microsoft is trying hard to create an environment with WebMatrix that is easy to use for newbie developers. The goal seems to be simplicity: providing a minimal development environment and an easy-to-use script engine/language that makes it easy to get started. There's also some focus on community features that can be used as starting points, such as Web Gallery applications and templates. The community features in particular are very nice and something that would be nice to eventually see in Visual Studio as well. The question is whether this is too little too late. Developers who have been clamoring for a simpler development environment on the .NET stack have mostly left for simpler platforms like PHP or Python, which cater to the down-and-dirty developer. Microsoft will be hard pressed to win those folks – and other hardcore PHP developers – back. Regardless of how much you dress up a script engine fronted by the .NET Framework, it's still the .NET Framework and all the complexity that drives it. While .NET is a fine solution in its breadth and features once you get a basic handle on the core features, the bar of entry to being productive with the .NET Framework is still pretty high. The MVC-style helpers Microsoft provides are a good step in the right direction, but I suspect they're not enough to shield new developers from having to delve much deeper into the Framework to get even basic applications built. Razor and its helpers try to make .NET more accessible, but the reality is that in order to do useful stuff that goes beyond the handful of simple helpers you are still going to have to write some C# or VB or other .NET code. If the target is a hobby/amateur/non-programmer, the learning curve isn't made any easier by WebMatrix; it's just been shifted a bit further along in your development endeavor, to the point when you run out of canned components supplied either by Microsoft or the community. The database helpers are interesting, and actually I've heard a lot of discussion from various developers who've been resisting .NET for a really long time perking up at the prospect of easier data access in .NET than the ridiculous amount of code it takes to do even simple data access with raw ADO.NET. It seems sad that such a simple concept and implementation should trigger this sort of response (especially since it's practically trivial to create helpers like these, or pick them up from countless available libraries), but there it is. It also shows that there are plenty of developers out there who are more interested in 'getting stuff done' easily than in following the latest and greatest practices, which are overkill for many development scenarios. Sometimes it seems that all of .NET is focused on the big life-changing issues of development, rather than the bread-and-butter scenarios that many developers care about to get their work accomplished.
And that in the end may be WebMatrix's main raison d'être: to bring some focus back at Microsoft on the fact that simpler, more high-level solutions are needed to appeal to non-high-end developers, as well as providing the necessary tools for the high-end developers who want to follow the latest and greatest trends. The current version of WebMatrix hits many sweet spots, but it also feels like it has a long way to go before it can really be a tool that either a beginning developer or an accomplished developer can feel comfortable with. Although there are some really good ideas in the environment (like the gallery for downloading apps and components) which would be a great addition for Visual Studio as well, the rest of the development environment just feels like crippleware, with required functionality missing – especially debugging and IntelliSense, but also general editor support. It's not clear whether these omissions are because the product is still in an early alpha release or whether it's simply designed that way to be a really limited development environment. While simple can be good, nobody wants to feel left out when it comes to necessary tool support, and WebMatrix just has that left-out feeling to it. If anything, WebMatrix's technology pieces (which are really independent of the WebMatrix product) are what will interest developers in general. The compact IIS implementation is a nice improvement for development scenarios, and SQL Compact 4.0 seems to address a lot of concerns that people have had and complained about for some time with previous SQL Compact implementations. By far the most interesting and useful technology, though, seems to be the Razor view engine, for its lightweight implementation and its decoupling from the ASP.NET/HTTP pipeline to provide a standalone, pluggable scripting/view engine. The first winner of this is going to be ASP.NET MVC, which can now have a cleaner view model that isn't made inconsistent by the baggage of non-implemented Web Forms features that don't work in MVC. But I expect that Razor will eventually end up in many other applications as a scripting and code generation engine. Visual Studio integration for Razor is currently missing, but is promised for a later release. The ASP.NET MVC team has already mentioned that Razor will eventually become the default MVC view engine, which will guarantee continued growth and development of this tool along those lines. And the Razor engine and support tools actually inherit many of the features that MVC pioneered, so there's some synergy flowing both ways between Razor and MVC. As an existing ASP.NET developer who's already familiar with Visual Studio and ASP.NET development, the WebMatrix IDE doesn't give you anything that you want. The tools provided are minimal and offer nothing that you can't get in Visual Studio today, except the minimal Razor syntax highlighting, so there's little reason to take a step back. With Visual Studio integration coming later, there's little reason to look at WebMatrix for tooling. It's good to see that Microsoft is giving some thought to the ease of use of .NET as a platform. For so many years, we've been piling on more and more new features without taking a step back to see how complicated the development/configuration/deployment process has become. Sometimes it's good to take a step – or several steps – back, take another look, and realize just how far we've come.
WebMatrix is one of those reminders, and one that likely will result in some positive changes on the platform as a whole.

© Rick Strahl, West Wind Technologies, 2005-2010. Posted in ASP.NET, IIS7

    Read the article

  • 'pip install carbon' looks like it works, but pip disagrees afterward

    - by fennec
    I'm trying to use pip to install the package carbon, a package related to statistics collection. When I run pip install carbon, it looks like everything works. However, pip is unconvinced that the package is actually installed. (This ultimately causes trouble because I'm using Puppet, and have a rule to install carbon using pip, and when puppet asks pip "is this package installed?" it says "no" and it reinstalls it again.) How do I figure out what's preventing pip from recognizing the success of this installation? Here is the output of the regular install: root@statsd:/opt/graphite# pip install carbon Downloading/unpacking carbon Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 Successfully installed carbon Cleaning up... 
root@statsd:/opt/graphite# pip freeze | grep carbon root@statsd: Here is the verbose version of the install: root@statsd:/opt/graphite# pip install carbon -v Downloading/unpacking carbon Using version 0.9.9 (newest of versions: 0.9.9, 0.9.9, 0.9.8, 0.9.7, 0.9.6, 0.9.5) Downloading carbon-0.9.9.tar.gz Running setup.py egg_info for package carbon running egg_info creating pip-egg-info/carbon.egg-info writing requirements to pip-egg-info/carbon.egg-info/requires.txt writing pip-egg-info/carbon.egg-info/PKG-INFO writing top-level names to pip-egg-info/carbon.egg-info/top_level.txt writing dependency_links to pip-egg-info/carbon.egg-info/dependency_links.txt writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found package init file 'lib/twisted/plugins/__init__.py' not found (or not a regular file) reading manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' writing manifest file 'pip-egg-info/carbon.egg-info/SOURCES.txt' Requirement already satisfied (use --upgrade to upgrade): twisted in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): txamqp in /usr/local/lib/python2.7/dist-packages (from carbon) Requirement already satisfied (use --upgrade to upgrade): zope.interface in /usr/local/lib/python2.7/dist-packages (from twisted->carbon) Requirement already satisfied (use --upgrade to upgrade): distribute in /usr/local/lib/python2.7/dist-packages (from zope.interface->twisted->carbon) Installing collected packages: carbon Running setup.py install for carbon running install running build running build_py creating build creating build/lib.linux-i686-2.7 creating build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_publisher.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/manhole.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/instrumentation.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/cache.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/management.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/relayrules.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/events.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/protocols.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/conf.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/rewrite.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/hashing.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/writer.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/client.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/util.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/service.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/amqp_listener.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/routers.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/storage.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/log.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/__init__.py -> build/lib.linux-i686-2.7/carbon copying lib/carbon/state.py -> build/lib.linux-i686-2.7/carbon creating build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/receiver.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/rules.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/buffers.py -> build/lib.linux-i686-2.7/carbon/aggregator copying lib/carbon/aggregator/__init__.py -> build/lib.linux-i686-2.7/carbon/aggregator package init file 
'lib/twisted/plugins/__init__.py' not found (or not a regular file) creating build/lib.linux-i686-2.7/twisted creating build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_relay_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_aggregator_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/twisted/plugins/carbon_cache_plugin.py -> build/lib.linux-i686-2.7/twisted/plugins copying lib/carbon/amqp0-8.xml -> build/lib.linux-i686-2.7/carbon running build_scripts creating build/scripts-2.7 copying and adjusting bin/validate-storage-schemas.py -> build/scripts-2.7 copying and adjusting bin/carbon-aggregator.py -> build/scripts-2.7 copying and adjusting bin/carbon-cache.py -> build/scripts-2.7 copying and adjusting bin/carbon-relay.py -> build/scripts-2.7 copying and adjusting bin/carbon-client.py -> build/scripts-2.7 changing mode of build/scripts-2.7/validate-storage-schemas.py from 664 to 775 changing mode of build/scripts-2.7/carbon-aggregator.py from 664 to 775 changing mode of build/scripts-2.7/carbon-cache.py from 664 to 775 changing mode of build/scripts-2.7/carbon-relay.py from 664 to 775 changing mode of build/scripts-2.7/carbon-client.py from 664 to 775 running install_lib copying build/lib.linux-i686-2.7/carbon/amqp_publisher.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/manhole.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp0-8.xml -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/instrumentation.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/cache.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/management.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/relayrules.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/events.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/protocols.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/conf.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/rewrite.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/hashing.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/writer.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/client.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/util.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/aggregator/receiver.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/rules.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/buffers.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/aggregator/__init__.py -> /opt/graphite/lib/carbon/aggregator copying build/lib.linux-i686-2.7/carbon/service.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/amqp_listener.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/routers.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/storage.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/log.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/__init__.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/carbon/state.py -> /opt/graphite/lib/carbon copying build/lib.linux-i686-2.7/twisted/plugins/carbon_relay_plugin.py -> /opt/graphite/lib/twisted/plugins copying 
build/lib.linux-i686-2.7/twisted/plugins/carbon_aggregator_plugin.py -> /opt/graphite/lib/twisted/plugins copying build/lib.linux-i686-2.7/twisted/plugins/carbon_cache_plugin.py -> /opt/graphite/lib/twisted/plugins byte-compiling /opt/graphite/lib/carbon/amqp_publisher.py to amqp_publisher.pyc byte-compiling /opt/graphite/lib/carbon/manhole.py to manhole.pyc byte-compiling /opt/graphite/lib/carbon/instrumentation.py to instrumentation.pyc byte-compiling /opt/graphite/lib/carbon/cache.py to cache.pyc byte-compiling /opt/graphite/lib/carbon/management.py to management.pyc byte-compiling /opt/graphite/lib/carbon/relayrules.py to relayrules.pyc byte-compiling /opt/graphite/lib/carbon/events.py to events.pyc byte-compiling /opt/graphite/lib/carbon/protocols.py to protocols.pyc byte-compiling /opt/graphite/lib/carbon/conf.py to conf.pyc byte-compiling /opt/graphite/lib/carbon/rewrite.py to rewrite.pyc byte-compiling /opt/graphite/lib/carbon/hashing.py to hashing.pyc byte-compiling /opt/graphite/lib/carbon/writer.py to writer.pyc byte-compiling /opt/graphite/lib/carbon/client.py to client.pyc byte-compiling /opt/graphite/lib/carbon/util.py to util.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/receiver.py to receiver.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/rules.py to rules.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/buffers.py to buffers.pyc byte-compiling /opt/graphite/lib/carbon/aggregator/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/service.py to service.pyc byte-compiling /opt/graphite/lib/carbon/amqp_listener.py to amqp_listener.pyc byte-compiling /opt/graphite/lib/carbon/routers.py to routers.pyc byte-compiling /opt/graphite/lib/carbon/storage.py to storage.pyc byte-compiling /opt/graphite/lib/carbon/log.py to log.pyc byte-compiling /opt/graphite/lib/carbon/__init__.py to __init__.pyc byte-compiling /opt/graphite/lib/carbon/state.py to state.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_relay_plugin.py to carbon_relay_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_aggregator_plugin.py to carbon_aggregator_plugin.pyc byte-compiling /opt/graphite/lib/twisted/plugins/carbon_cache_plugin.py to carbon_cache_plugin.pyc running install_data copying conf/storage-schemas.conf.example -> /opt/graphite/conf copying conf/rewrite-rules.conf.example -> /opt/graphite/conf copying conf/relay-rules.conf.example -> /opt/graphite/conf copying conf/carbon.amqp.conf.example -> /opt/graphite/conf copying conf/aggregation-rules.conf.example -> /opt/graphite/conf copying conf/carbon.conf.example -> /opt/graphite/conf running install_egg_info running egg_info creating lib/carbon.egg-info writing requirements to lib/carbon.egg-info/requires.txt writing lib/carbon.egg-info/PKG-INFO writing top-level names to lib/carbon.egg-info/top_level.txt writing dependency_links to lib/carbon.egg-info/dependency_links.txt writing manifest file 'lib/carbon.egg-info/SOURCES.txt' warning: manifest_maker: standard file '-c' not found reading manifest file 'lib/carbon.egg-info/SOURCES.txt' writing manifest file 'lib/carbon.egg-info/SOURCES.txt' removing '/opt/graphite/lib/carbon-0.9.9-py2.7.egg-info' (and everything under it) Copying lib/carbon.egg-info to /opt/graphite/lib/carbon-0.9.9-py2.7.egg-info running install_scripts copying build/scripts-2.7/validate-storage-schemas.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-aggregator.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-cache.py -> /opt/graphite/bin copying 
build/scripts-2.7/carbon-relay.py -> /opt/graphite/bin copying build/scripts-2.7/carbon-client.py -> /opt/graphite/bin changing mode of /opt/graphite/bin/validate-storage-schemas.py to 775 changing mode of /opt/graphite/bin/carbon-aggregator.py to 775 changing mode of /opt/graphite/bin/carbon-cache.py to 775 changing mode of /opt/graphite/bin/carbon-relay.py to 775 changing mode of /opt/graphite/bin/carbon-client.py to 775 writing list of installed files to '/tmp/pip-9LuJTF-record/install-record.txt' Successfully installed carbon Cleaning up... Removing temporary dir /opt/graphite/build... root@statsd:/opt/graphite# For reference, this is pip 1.0 from /usr/lib/python2.7/dist-packages (python 2.7)

    Read the article

  • Is SugarCRM really adequate for custom development (or adequate at all)? [closed]

    - by dukeofgaming
    Have you used SugarCRM for custom development? If so, did you do it programmatically or through the Module Builder, and were you successful? If not, why not?

    I used SugarCRM for a project about two years ago. I ran into errors from the very installation, having to hack the actual installation file to deploy the software on the server, plus other errors that I can't recall now. Two years later, I'm picking it up for a project once again, and I'm feeling like I should have developed the whole thing from scratch myself. Some examples: I couldn't install it on the server (again); I had to install it locally, then copy the files and database over to the server and manually edit the config file. I constantly get deployment errors from the Module Builder; one reason is that SugarCRM keeps creating a record in the upgrade_history table for a file that does not exist, and although I keep deleting that record it keeps coming back corrupt. I get other deployment errors that I have not figured out, and then I have to roll back all files and the database to try again. I deleted a custom module with relationships; the relationships stayed in the other modules and cannot be deleted anymore, with PHP warnings all over the place. Quick Create for custom modules does not appear; a hack is needed. Its whole cache directory is a joke: permanent data/files are stored there. The Module Builder interface makes required fields disappear. Edit the wrong thing and the Module Builder won't deploy again; then you pray that Quick Repair and/or Rebuild Relationships do the trick.

    My impression of SugarCRM now is that, regardless of its pretty exterior and apparent functionality, it is a very low quality piece of software. This scared me even more: http://amplicate.com/hate/sugarcrm; a quote: "I wish this info had been available when I tried to implement it 2 years ago... I searched high and low and the only info I found was positive. Yes, it's a piece of crap. The community edition was full of bugs... nothing worked. Essentially I got fired for implementing it. I'm glad though, because now I work for myself, am much happier and make more money... so, I should really thank SugarCRM for sucking so much I guess!"

    I figured that perhaps some of you have had similar experiences and have either stuck with SugarCRM or moved on to another solution. I'm very interested in knowing what your resolutions were (or what your current situations are) to make up my own mind, since the project I'm working on is long term and I feel SugarCRM will be more an obstacle than an aid.

    After further failed attempts to continue using this software, I kept stumbling upon dead ends when using the module editor, and could only recover from these errors by using version control. We are now moving on to a custom implementation using Symfony; perhaps if we had been using SugarCRM with only its out-of-the-box modules, we would have stuck with it.

    Read the article

  • 3 Incredibly Useful Projects to jump-start your Kinect Development.

    - by mbcrump
    I’ve been playing with the Kinect SDK Beta for the past few days and have noticed a few projects on CodePlex worth checking out. I decided to blog about them to help spread awareness. If you want to learn more about the Kinect SDK, check out my “Busy Developer’s Guide to the Kinect SDK Beta”. Let’s get started:

    KinectContrib is a set of VS2010 templates that will help you get started building a Kinect project very quickly. Once you have it installed you will have the option to select the following templates: KinectDepth, KinectSkeleton, and KinectVideo. Please note that KinectContrib requires the Kinect for Windows SDK beta to be installed. Kinect templates after installing the template pack. The reference to Microsoft.Research.Kinect is added automatically. Here is a sample of the code for the MainWindow.xaml in the “Video” template:

    <Window x:Class="KinectVideoApplication1.MainWindow"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            Title="MainWindow" Height="480" Width="640">
        <Grid>
            <Image Name="videoImage"/>
        </Grid>
    </Window>

    and MainWindow.xaml.cs:

    using System;
    using System.Windows;
    using System.Windows.Media;
    using System.Windows.Media.Imaging;
    using Microsoft.Research.Kinect.Nui;

    namespace KinectVideoApplication1
    {
        public partial class MainWindow : Window
        {
            //Instantiate the Kinect runtime. Required to initialize the device.
            //IMPORTANT NOTE: You can pass the device ID here, in case more than one Kinect device is connected.
            Runtime runtime = new Runtime();

            public MainWindow()
            {
                InitializeComponent();
                //Runtime initialization is handled when the window is opened. When the window
                //is closed, the runtime MUST be uninitialized.
                this.Loaded += new RoutedEventHandler(MainWindow_Loaded);
                this.Unloaded += new RoutedEventHandler(MainWindow_Unloaded);
                //Handle the content obtained from the video camera, once received.
                runtime.VideoFrameReady += new EventHandler<Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs>(runtime_VideoFrameReady);
            }

            void MainWindow_Unloaded(object sender, RoutedEventArgs e)
            {
                runtime.Uninitialize();
            }

            void MainWindow_Loaded(object sender, RoutedEventArgs e)
            {
                //Since only a color video stream is needed, RuntimeOptions.UseColor is used.
                runtime.Initialize(Microsoft.Research.Kinect.Nui.RuntimeOptions.UseColor);
                //You can adjust the resolution here.
                runtime.VideoStream.Open(ImageStreamType.Video, 2, ImageResolution.Resolution640x480, ImageType.Color);
            }

            void runtime_VideoFrameReady(object sender, Microsoft.Research.Kinect.Nui.ImageFrameReadyEventArgs e)
            {
                PlanarImage image = e.ImageFrame.Image;
                BitmapSource source = BitmapSource.Create(image.Width, image.Height, 96, 96, PixelFormats.Bgr32, null, image.Bits, image.Width * image.BytesPerPixel);
                videoImage.Source = source;
            }
        }
    }

    You will find this template pack is very handy, especially for those new to Kinect development.

    Next up is the Coding4Fun Kinect Toolkit, which contains extension methods and a WPF control to help you develop with the Kinect SDK. After downloading the package, simply add a reference to the .dll using either the WPF or WinForms version. You will then have access to several extension methods that can help you with tasks like saving an image (see the hedged sketch after the Kinductor note below). For a full list of extension methods and properties, please visit the site at http://c4fkinect.codeplex.com/.

    Kinductor – This is a great application for just learning how to use the Kinect SDK. The project uses MVVM Light and is a great start for those looking at how to structure their first Kinect application.
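    For illustration, here is a minimal hedged sketch of how the Coding4Fun Kinect Toolkit is typically used to save a camera frame. The ToBitmapSource() and Save() helpers, the ImageFormat enum, the Coding4Fun.Kinect.Wpf namespace, and the output path are assumptions from memory of the toolkit rather than verified API, so check them against the actual library:

    using System.Windows;
    using System.Windows.Media.Imaging;
    using Microsoft.Research.Kinect.Nui;
    using Coding4Fun.Kinect.Wpf; // assumed namespace of the WPF flavor of the toolkit

    public partial class MainWindow : Window
    {
        // Alternative frame handler for the MainWindow shown above: instead of
        // hand-rolling BitmapSource.Create, let the toolkit convert the frame,
        // then persist it to disk.
        void runtime_VideoFrameReady_WithSave(object sender, ImageFrameReadyEventArgs e)
        {
            BitmapSource source = e.ImageFrame.ToBitmapSource(); // assumed toolkit extension method

            videoImage.Source = source;

            // Save() and ImageFormat.Jpeg are assumed toolkit helpers; the path is illustrative.
            source.Save(@"C:\temp\KinectSnapshot.jpg", ImageFormat.Jpeg);
        }
    }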
    Conclusion: Things are already getting easier for those working with the Kinect SDK. I imagine that after a few more months we will see the SDK go out of beta and allow commercial applications to run using it. I am very excited and hope that you continue reading my blog for more Kinect, WPF and Silverlight news.

    Read the article

  • CI tests to enforce specific development rules - good practice?

    - by KeithS
    The following is all purely hypothetical, and any particular portion of it may or may not accurately describe real persons or situations, whether living, dead or just pretending. Let's say I'm a senior dev or architect in charge of a dev team working on a project. This project includes a security library for user authentication/authorization of the application under development. The library must be available for developers to edit; however, I wish to "trust but verify" that coders are not doing things that could compromise the security of the finished system, and because this isn't my only responsibility I want it to be done in an automated way.

    As one example, let's say I have an interface that represents a user which has been authenticated by the system's security library. The interface exposes basic user info and a list of things the user is authorized to do (so that the client app doesn't have to keep asking the server "can I do this?"), all in an immutable fashion, of course. There is only one implementation of this interface in production code, and for the purposes of this post we can say that all appropriate measures have been taken to ensure that this implementation can only be used by the one part of our code that needs to be able to create concretions of the interface.

    The coders have been instructed that this interface and its implementation are sacrosanct and any changes must go through me. However, those are just words; the security library's source is open for editing by necessity. Any of my devs could decide that this secured, private, hash-checked implementation needs to be public so that they could do X, or alternatively they could create their own implementation of this public interface in a different library, exposing the hashing algorithm that provides the secure checksum, in order to do Y. I might never be made aware of these changes, and so would lose the chance to beat the developer over the head for them. An attacker could then find these little nuggets in an unobfuscated library of the compiled product, and exploit them to provide fake users and/or falsely-elevated administrative permissions, bypassing the entire security system.

    This possibility keeps me awake for a couple of nights, and then I create an automated test that reflectively checks the codebase for types deriving from the interface, and fails if it finds any that are not exactly what and where I expect them to be. I compile this test into a project under a separate folder of the VCS that only I have rights to commit to, have CI compile it as an external library of the main project, and set it up to run as part of the CI test suite for user commits. Now I have an automated test, under my complete control, that will tell me (and everyone else) if the number of implementations increases without my involvement, or if an implementation that I did know about has had anything new added or has had its modifiers, or those of its members, changed. I can then investigate further, and regain the opportunity to beat developers over the head as necessary.

    Is this considered "reasonable" to want to do in situations like this? Am I going to be seen in a negative light for going behind my devs' backs to ensure they aren't doing something they shouldn't?
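    As a concrete illustration of the reflective guard test described above, here is a minimal hedged sketch in C# with NUnit. The assembly name (SecurityLib), the interface (IAuthenticatedUser), and the expected implementation type are hypothetical stand-ins for the real security library, and NUnit itself is an assumption since the question names no test framework:

    using System.Linq;
    using System.Reflection;
    using NUnit.Framework;

    // Stand-in for the real contract; in the actual guard test this would be
    // referenced from the security library's public contract assembly.
    public interface IAuthenticatedUser { }

    [TestFixture]
    public class SecurityLibraryGuardTests
    {
        [Test]
        public void OnlyTheBlessedImplementationOfIAuthenticatedUserExists()
        {
            // Load the production assembly explicitly so the scan does not
            // depend on what happens to be loaded already (name is hypothetical).
            Assembly securityLib = Assembly.Load("SecurityLib");

            var implementations = securityLib.GetTypes()
                .Where(t => typeof(IAuthenticatedUser).IsAssignableFrom(t)
                            && t.IsClass && !t.IsAbstract)
                .ToList();

            // Fail the build if any implementation appears that is not exactly
            // the one, non-public type we expect, where we expect it.
            Assert.That(implementations.Select(t => t.FullName),
                Is.EquivalentTo(new[] { "SecurityLib.Internal.AuthenticatedUser" }));
            Assert.That(implementations.Single().IsPublic, Is.False,
                "The implementation must stay internal to the security library.");
        }
    }

    Checking the modifiers of the type's members (the other condition the poster mentions) would extend the same scan with Type.GetMembers() and assertions over each MemberInfo.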

    Read the article

  • Setup VPN issue on Ubuntu Server 12.04

    - by Yozone W.
    I have a problem with setup VPN server on my Ubuntu VPS, here is my server environments: Ubuntu Server 12.04 x86_64 xl2tpd 1.3.1+dfsg-1 pppd 2.4.5-5ubuntu1 openswan 1:2.6.38-1~precise1 After install software and configuration: ipsec verify Checking your system to see if IPsec got installed and started correctly: Version check and ipsec on-path [OK] Linux Openswan U2.6.38/K3.2.0-24-virtual (netkey) Checking for IPsec support in kernel [OK] SAref kernel support [N/A] NETKEY: Testing XFRM related proc values [OK] [OK] [OK] Checking that pluto is running [OK] Pluto listening for IKE on udp 500 [OK] Pluto listening for NAT-T on udp 4500 [OK] Checking for 'ip' command [OK] Checking /bin/sh is not /bin/dash [WARNING] Checking for 'iptables' command [OK] Opportunistic Encryption Support [DISABLED] /var/log/auth.log message: Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [RFC 3947] method set to=115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] meth=114, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-08] meth=113, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-07] meth=112, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-06] meth=111, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-05] meth=110, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-04] meth=109, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 115 Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: ignoring Vendor ID payload [FRAGMENTATION 80000000] Oct 16 06:50:54 vpn pluto[3963]: packet from [My IP Address]:2251: received Vendor ID payload [Dead Peer Detection] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: responding to Main Mode from unknown peer [My IP Address] Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Oct 16 06:50:54 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R1: sent MR1, expecting MI2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: STATE_MAIN_R2: sent MR2, expecting MI3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: ignoring informational payload, type 
IPSEC_INITIAL_CONTACT msgid=00000000 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: Main mode peer ID is ID_IPV4_ADDR: '192.168.12.52' Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[5] [My IP Address] #5: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: new NAT mapping for #5, was [My IP Address]:2251, now [My IP Address]:2847 Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=aes_256 prf=oakley_sha group=modp1024} Oct 16 06:50:55 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: the peer proposed: [My Server IP Address]/32:17/1701 -> 192.168.12.52/32:17/0 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: NAT-Traversal: received 2 NAT-OA. using first, ignoring others Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: responding to Quick Mode proposal {msgid:8579b1fb} Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: us: [My Server IP Address]<[My Server IP Address]>:17/1701 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: them: [My IP Address][192.168.12.52]:17/65280===192.168.12.52/32 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: Dead Peer Detection (RFC 3706): enabled Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Oct 16 06:50:56 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #6: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x08bda158 <0x4920a374 xfrm=AES_256-HMAC_SHA1 NATOA=192.168.12.52 NATD=[My IP Address]:2847 DPD=enabled} Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA(0x08bda158) payload: deleting IPSEC State #6 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: ERROR: netlink XFRM_MSG_DELPOLICY response for flow eroute_connection delete included errno 2: No such file or directory Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received and ignored informational message Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address] #5: received Delete SA payload: deleting ISAKMP State #5 Oct 16 06:51:16 vpn pluto[3963]: "L2TP-PSK-NAT"[6] [My IP Address]: deleting connection "L2TP-PSK-NAT" instance with peer [My IP Address] {isakmp=#0/ipsec=#0} Oct 16 06:51:16 vpn pluto[3963]: packet from [My IP Address]:2847: received and ignored informational message xl2tpd -D message: xl2tpd[4289]: Enabling IPsec SAref processing for L2TP transport mode SAs xl2tpd[4289]: IPsec SAref does not work with L2TP kernel mode yet, enabling forceuserspace=yes xl2tpd[4289]: setsockopt recvref[30]: Protocol not available xl2tpd[4289]: This binary does not 
support kernel L2TP. xl2tpd[4289]: xl2tpd version xl2tpd-1.3.1 started on vpn.netools.me PID:4289 xl2tpd[4289]: Written by Mark Spencer, Copyright (C) 1998, Adtran, Inc. xl2tpd[4289]: Forked by Scott Balmos and David Stipp, (C) 2001 xl2tpd[4289]: Inherited by Jeff McAdams, (C) 2002 xl2tpd[4289]: Forked again by Xelerance (www.xelerance.com) (C) 2006 xl2tpd[4289]: Listening on IP address [My Server IP Address], port 1701 Then it just stopped here, and have no any response. I can't connect VPN on my mac client, the /var/log/system.log message: Oct 16 15:17:36 azone-iMac.local configd[17]: SCNC: start, triggered by SystemUIServer, type L2TP, status 0 Oct 16 15:17:36 azone-iMac.local pppd[3799]: pppd 2.4.2 (Apple version 596.13) started by azone, uid 501 Oct 16 15:17:38 azone-iMac.local pppd[3799]: L2TP connecting to server 'vpn.netools.me' ([My Server IP Address])... Oct 16 15:17:38 azone-iMac.local pppd[3799]: IPSec connection started Oct 16 15:17:38 azone-iMac.local racoon[359]: Connecting. Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 started (Initiated by me). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 AUTH: success. (Initiator, Main-Mode Message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Oct 16 15:17:38 azone-iMac.local racoon[359]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Oct 16 15:17:38 azone-iMac.local racoon[359]: IPSec Phase1 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 started (Initiated by me). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Oct 16 15:17:39 azone-iMac.local racoon[359]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Oct 16 15:17:39 azone-iMac.local racoon[359]: IPSec Phase2 established (Initiated by me). Oct 16 15:17:39 azone-iMac.local pppd[3799]: IPSec connection established Oct 16 15:17:59 azone-iMac.local pppd[3799]: L2TP cannot connect to the server Oct 16 15:17:59 azone-iMac.local racoon[359]: IPSec disconnecting from server [My Server IP Address] Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKE Packet: transmit success. (Information message). Oct 16 15:17:59 azone-iMac.local racoon[359]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Anyone help? Thanks a million!

    Read the article

  • Multiple vlans access to shared pbx system

    - by Matt
    I'm new to networking and was looking for some assistance. First off, I'm using Packet Tracer to diagram my scenario, as I will be receiving my equipment to deploy next week. Hardware to be used: two Catalyst 3560 switches, both connected to a SonicWall router. I have two companies that work in the same office space, and I need to keep these companies separate on their own VLANs. They will, however, need to share the phone system. (A Packet Tracer file is uploaded to show those who have the time what I put together: http://dl.dropbox.com/u/86234623/network%20build.pkt)

    Here is my current test scenario. On switch 0 I have:
    company A on VLAN 2: computers 172.16.1.100 and .101, 255.255.0.0, on FA0/10 and FA0/11
    company B on VLAN 3: computer 172.16.2.102, 255.255.0.0, on FA0/12
    PBX on a trunk port: 172.16.0.5, 255.255.0.0, on FA0/5
    trunk port on FA0/1 to connect the switches

    On switch 1 I have:
    company A on VLAN 2: computer 172.16.1.102, 255.255.0.0
    company B on VLAN 3: computers 172.16.2.100 and .101, 255.255.0.0
    trunk port on FA0/1 to connect the switches

    I can ping the respective computers on the same VLAN but can't ping from company A to company B, which is what I want. However, neither company can talk to (ping) the PBX. Here are the commands I used to configure what I have:

    switch 0
    en
    conf t
    vlan 2
    name A
    vlan 3
    name B
    int fa0/10
    switchport mode access
    switchport access vlan 2
    int fa0/11
    switchport mode access
    switchport access vlan 2
    int fa0/12
    switchport mode access
    switchport access vlan 3
    int fa0/5
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport trunk allowed vlan 1-3
    int fa0/1 (to connect the switches)
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport trunk allowed vlan 1-3

    Switch 1
    en
    conf t
    vlan 2
    name A
    vlan 3
    name B
    int fa0/10
    switchport mode access
    switchport access vlan 3
    int fa0/11
    switchport mode access
    switchport access vlan 3
    int fa0/12
    switchport mode access
    switchport access vlan 2
    int fa0/1 (to connect the switches)
    switchport trunk encapsulation dot1q
    switchport mode trunk
    switchport trunk allowed vlan 1-3

    Read the article

  • IPSec VPN using ZyWALL IPSec VPN Client: unable to connect from some providers

    - by Reshi
    I'm trying to configure an IPSec VPN to one company from my home. The company's internet service provider is SANET. I was able to create a VPN connection from another company that has the same internet service provider; the problem begins when I try to connect from another ISP, like Orange or Telekom. Here is the log from the ZyWALL:

    20120816 10:06:18:359 Default (SA Gateway-P1) SEND phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID]
    20120816 10:06:18:375 Default (SA Gateway-P1) RECV phase 1 Main Mode [SA] [VID] [VID] [VID] [VID] [VID] [VID] [VID] [VID]
    20120816 10:06:18:390 Default (SA Gateway-P1) SEND phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
    20120816 10:06:18:718 Default (SA Gateway-P1) RECV phase 1 Main Mode [KEY_EXCH] [NONCE] [NAT_D] [NAT_D]
    20120816 10:06:18:734 Default (SA Gateway-P1) SEND phase 1 Main Mode [HASH] [ID]
    20120816 10:06:18:750 Default (SA Gateway-P1) RECV phase 1 Main Mode [HASH] [ID]
    20120816 10:06:18:750 Default phase 1 done: initiator id [email protected], responder id 111.112.113.114
    20120816 10:06:18:765 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
    20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) RECV phase 2 Quick Mode [HASH] [SA] [KEY_EXCH] [NONCE] [ID] [ID]
    20120816 10:06:18:953 Default (SA Gateway-Tunnel-P2) SEND phase 2 Quick Mode [HASH]
    20120816 10:06:48:968 Default (SA Gateway-P1) SEND Informational [HASH] [NOTIFY] type DPD_R_U_THERE
    20120816 10:06:48:984 Default (SA Gateway-P1) RECV Informational [HASH] [NOTIFY] type DPD_R_U_THERE_ACK

    The ZyWALL informs me that the tunnel was opened, but I can't ping or access any computer in the network. My configuration at home:
    ISP: Orange, optical connection
    Terminal: GPON OPTICAL NETWORK TERMINAL G-25E
    Router: TP-Link TL-WR941N --> SPI Firewall Enabled --> VPN - IPSEC Passthrough Enabled

    I was wondering whether the problem could be on the ISP's side (perhaps they somehow block this connection, since it worked fine from the SANET ISP), or whether it is in my terminal or router. What could I check? Where could the problem be?

    Read the article

  • How can I tell the size of my app during development?

    - by Newbyman
    My programming decisions are directly related to how much room I have left or, worse, how much I need to shave off in order to get under the 10 MB limit. I have read that Apple has quietly increased the 3G & EDGE download limit from 10 MB up to 20 MB in preparation for the iPad in April. Either way, my real question is: how can I get a rough estimate of how large my app will end up while I'm still in the development phase? Is the file size of my development folder roughly a 1:1 indicator? Is the compressed file size of my development folder a better approximation?

    My .xcodeproj file is only a couple hundred kB, but the size of my folder is 11.8 MB. I have a .sqlite database, fewer than 20 small PNG images, and a Settings.Bundle. The rest are unknown Xcode files related to build, build for iPhone OS, simulator, etc. My source code is rather large, with around 1000 lines in most of the major controllers, all in all around 48 .h and .m files. But my Classes folder inside my development folder is less than 800 kB. Digging around inside my build folder, there are lots of iPhone Simulator files and debugging files which I don't think will contribute to the final product. The Application file states that it is around 2.3 MB. However, this is such a large difference from the 11.8 MB that I have to wonder if this is just another piece of the equation.

    I have the app on my device; I'm in the testing phase. Therefore, I thought I would try to see how large the working version was on the device by checking in iTunes. However, while my development app is visible on the right-hand side of the application's iPhone screen, there is no information about the app, most importantly its size. I also checked in Organizer: I used the lower portion of the screen (Applications), found my application, and selected the drop-down arrow, which gave me "Application Data" and a download arrow button to the right to save a file on my desktop, named with the unique Apple ID. Inside were three folders (Documents, Library, tmp): Documents had a copy of my .sqlite database, Library had a few more files but nothing obvious or of size, and tmp was empty. All in all the entire folder was only 164 kB, which tells me that this is not the right place to find the size either.

    I understand that the size is considered to be the size of my binary plus all the additional files and images that I have added. Does anyone have an effective way of gauging how large the binary is, or of relating the development folder size to what the final App Store application size will end up being? I know that questions have been posted with similar aspects, but I could not find any answered post that really described what files to look at, or how to determine the size specifically. I know that this question looks like a book, but I just wanted to be specific in conveying exactly what I'm looking for and the attempts thus far.

    *Note: all files are unzipped and still in regular working Xcode order, for a single app with no brought-in builds or referenced projects. I'm sure that this is straightforward; I just don't know where to look.

    Read the article

  • Can't get bonding and bridging to work for KVM

    - by user9546
    Hi everyone. I can't for the life of me get bonding and bridging to work for the KVM setup I'm building. I'm using a fresh install (not an upgrade) of Ubuntu Server 10.10. I have 4 NICs on the same subnet (two intended for each of my two VMs). I'm trying to achieve the setup that Uthark describes here. But following his guidelines didn't work for me. My eth0 and eth1 did not come up, and "brctl show" showed that br0 didn't have any interfaces (the bond). I assumed it didn't work because he's using 10.4, and this article says there's a recent change in bonding: [I can't post more than one hyperlink per post because I'm a newbie.] I had to use this article to get my interfaces to work at all on the same subnet, which is why I have the post-up lines on some of my interfaces: [I can't post more than one hyperlink per post because I'm a newbie.] I installed ifenslave and ethtool. I also created /etc/modprobe.d/aliases.conf with the following content: alias bond0 bonding options bonding mode=6 miimon=100 downdelay=200 updelay=200 And I included "bonding" in /etc/modules So, after several approaches, here is my latest interfaces file: auto lo iface lo inet loopback auto eth5 iface eth5 inet manual auto br5 iface br5 inet static post-up /sbin/ip rule add from [network].79 lookup 10 post-up /sbin/ip route add table 10 default via [network].1 src [network].79 dev br5 address [network].79 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports eth5 bridge_stp off bridge_fd 0 bridge_maxwait 0 auto eth2 iface eth2 inet manual auto br2 iface br2 inet static post-up /sbin/ip rule add from [network].78 lookup 11 post-up /sbin/ip route add table 11 default via [network].1 src [network].78 dev br2 address [network].78 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports eth2 bridge_stp off bridge_fd 0 bridge_maxwait 0 iface eth0 inet manual iface eth1 inet manual auto bond0 iface bond0 inet static bond_miimon 100 bond_mode balance-alb up /sbin/ifenslave bond0 eth0 eth1 down /sbin/ifenslave -d bond0 eth0 eth1 auto br0 iface br0 inet static address [network].60 netmask 255.255.255.0 network [network].0 broadcast [network].255 gateway [network].1 bridge_ports bond0 eth2, eth5, br2, and br5 all seem to be working fine. The only other thing I could find that looked suspicious is an error regarding bonding in /var/log/messages: kernel: [ 3.828684] bonding: Warning: either miimon or arp_interval and arp_ip_target module parameters must be specified, otherwise bonding will not detect link failures! see bonding.txt for details. even though there is a bond-miimon line in /etc/network/interfaces (if that's what they're talking about). Also, the bond seems to go in and out of promiscuous mode several times on boot: Jan 20 14:19:02 kvmhost kernel: [ 3.902378] device bond0 entered promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902390] device bond0 left promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902393] device bond0 entered promiscuous mode Jan 20 14:19:02 kvmhost kernel: [ 3.902397] device bond0 left promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.998990] device bond0 entered promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999005] device bond0 left promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999008] device bond0 entered promiscuous mode Jan 20 14:19:03 kvmhost kernel: [ 4.999012] device bond0 left promiscuous mode Any advice would be greatly appreciated. 
It seems that this must be possible, based on other posts, but I can't see what I'm doing wrong. Thanks.

    Read the article

  • setup L2TP on Ubuntu 10.10

    - by luca
    I'm following this guide to setup a VPS on my Ubuntu VPS: http://riobard.com/blog/2010-04-30-l2tp-over-ipsec-ubuntu/ My config files are setup as in that guide, openswan version is 2.6.26 I think.. It doesn't work, I can show you my auth.log (on the VPS): Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [RFC 3947] method set to=109 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike] method set to=110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [8f8d83826d246b6fc7a8a6a428c11de8] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [439b59f8ba676c4c7737ae22eab8f582] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [4d1e0e136deafa34c4f3ea9f02ec7285] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [80d0bb3def54565ee84645d4c85ce3ee] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: ignoring unknown Vendor ID payload [9909b64eed937c6573de52ace952fa6b] Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-03] meth=108, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02] meth=107, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [draft-ietf-ipsec-nat-t-ike-02_n] meth=106, but already using method 110 Feb 18 06:11:07 maverick pluto[6909]: packet from 93.36.127.12:500: received Vendor ID payload [Dead Peer Detection] Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: responding to Main Mode from unknown peer 93.36.127.12 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R0 to state STATE_MAIN_R1 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R1: sent MR1, expecting MI2 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: NAT-Traversal: Result using draft-ietf-ipsec-nat-t-ike (MacOS X): peer is NATed Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: transition from state STATE_MAIN_R1 to state STATE_MAIN_R2 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: STATE_MAIN_R2: sent MR2, expecting MI3 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: Main mode peer ID is ID_IPV4_ADDR: '10.0.1.8' Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[7] 93.36.127.12 #7: switched from "L2TP-PSK-NAT" to "L2TP-PSK-NAT" Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0} Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: transition from state STATE_MAIN_R2 to state STATE_MAIN_R3 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: new NAT mapping for #7, was 93.36.127.12:500, now 93.36.127.12:36810 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: STATE_MAIN_R3: sent MR3, ISAKMP SA established {auth=OAKLEY_PRESHARED_KEY cipher=oakley_3des_cbc_192 prf=oakley_sha group=modp1024} Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: 
ignoring informational payload, type IPSEC_INITIAL_CONTACT msgid=00000000 Feb 18 06:11:07 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: the peer proposed: 69.147.233.173/32:17/1701 -> 10.0.1.8/32:17/0 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: responding to Quick Mode proposal {msgid:183463cf} Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: us: 69.147.233.173<69.147.233.173>[+S=C]:17/1701 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: them: 93.36.127.12[10.0.1.8,+S=C]:17/64111===10.0.1.8/32 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R0 to state STATE_QUICK_R1 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R1: sent QR1, inbound IPsec SA installed, expecting QI2 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: transition from state STATE_QUICK_R1 to state STATE_QUICK_R2 Feb 18 06:11:08 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #8: STATE_QUICK_R2: IPsec SA established transport mode {ESP=>0x0b1cf725 <0x0b719671 xfrm=AES_128-HMAC_SHA1 NATOA=none NATD=93.36.127.12:36810 DPD=none} Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA(0x0b1cf725) payload: deleting IPSEC State #8 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy eroute_connection delete was too long: 100 > 36 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: netlink recvfrom() of response to our XFRM_MSG_DELPOLICY message for policy [email protected] was too long: 168 > 36 Feb 18 06:11:28 maverick pluto[6909]: | raw_eroute result=0 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received and ignored informational message Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12 #7: received Delete SA payload: deleting ISAKMP State #7 Feb 18 06:11:28 maverick pluto[6909]: "L2TP-PSK-NAT"[8] 93.36.127.12: deleting connection "L2TP-PSK-NAT" instance with peer 93.36.127.12 {isakmp=#0/ipsec=#0} Feb 18 06:11:28 maverick pluto[6909]: packet from 93.36.127.12:36810: received and ignored informational message and my system log on OSX (from where I'm connecting): Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: pppd 2.4.2 (Apple version 412.3) started by luca, uid 501 Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: L2TP connecting to server '69.147.233.173' (69.147.233.173)... Feb 18 13:11:09 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection started Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: Connecting. Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 1). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 2). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 3). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 4). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Main-Mode message 5). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 AUTH: success. 
(Initiator, Main-Mode Message 6). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Main-Mode message 6). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase1 Initiator: success. (Initiator, Main-Mode). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:09 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (ISAKMP-SA). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 1). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: receive success. (Initiator, Quick-Mode message 2). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Initiator, Quick-Mode message 3). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Phase2 Initiator: success. (Initiator, Quick-Mode). Feb 18 13:11:10 luca-ciorias-MacBook-Pro racoon[68453]: Connected. Feb 18 13:11:10 luca-ciorias-MacBook-Pro pppd[68656]: IPSec connection established Feb 18 13:11:30 luca-ciorias-MacBook-Pro pppd[68656]: L2TP cannot connect to the server Feb 18 13:11:30 luca-ciorias-MacBook-Pro configd[20]: SCNCController: Disconnecting. (Connection tried to negotiate for, 22 seconds). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete IPSEC-SA). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKE Packet: transmit success. (Information message). Feb 18 13:11:30 luca-ciorias-MacBook-Pro racoon[68453]: IKEv1 Information-Notice: transmit success. (Delete ISAKMP-SA). Feb 18 13:11:31 luca-ciorias-MacBook-Pro racoon[68453]: Disconnecting. (Connection was up for, 20.157953 seconds).

    Read the article

  • Load balancing with multiple gateways

    - by ttouch
    I have two different ISPs, each on its own network. The main one connects via Ethernet and the secondary via WiFi. The two networks have no relation at all; I just connect to them simultaneously. The reason I want to load balance between them is to achieve higher Internet speeds. Note: I have no advanced network hardware, just my PC and the two routers, to which I have no access... main network: if: eth0 gw: 192.168.178.1 my ip: 192.168.178.95 speed: 400 kbit/s secondary network: if: wlan0 gw: 192.168.1.1 my ip: 192.168.1.95 speed: 300 kbit/s A diagram to explain the situation: http://i.imgur.com/NZdsv.jpg I'm on Arch Linux x64. I use netcfg to configure the interfaces. Configs: # /etc/network.d/main CONNECTION='ethernet' DESCRIPTION='A basic static ethernet connection using iproute' INTERFACE='eth0' IP='static' ADDR='192.168.178.95' # /etc/network.d/second CONNECTION='wireless' DESCRIPTION='A simple WEP encrypted wireless connection' INTERFACE='wlan0' SECURITY='wep' ESSID='wifi_essid' KEY='the_password' IP="static" ADDR='192.168.1.95' And I use iptables to load balance; rules: #!/bin/bash /usr/sbin/ip route flush table ISP1 2>/dev/null /usr/sbin/ip rule del fwmark 101 table ISP1 2>/dev/null /usr/sbin/ip route add table ISP1 192.168.178.0/24 dev eth0 proto kernel scope link src 192.168.178.95 metric 202 /usr/sbin/ip route add table ISP1 default via 192.168.178.1 dev eth0 /usr/sbin/ip rule add fwmark 101 table ISP1 /usr/sbin/ip route flush table ISP2 2>/dev/null /usr/sbin/ip rule del fwmark 102 table ISP2 2>/dev/null /usr/sbin/ip route add table ISP2 192.168.1.0/24 dev wlan0 proto kernel scope link src 192.168.1.95 metric 202 /usr/sbin/ip route add table ISP2 default via 192.168.1.1 dev wlan0 /usr/sbin/ip rule add fwmark 102 table ISP2 /usr/sbin/iptables -t mangle -F /usr/sbin/iptables -t mangle -X /usr/sbin/iptables -t mangle -N MARK-gw1 /usr/sbin/iptables -t mangle -A MARK-gw1 -m comment --comment 'send via 192.168.178.1' -j MARK --set-mark 101 /usr/sbin/iptables -t mangle -A MARK-gw1 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw1 -j RETURN /usr/sbin/iptables -t mangle -N MARK-gw2 /usr/sbin/iptables -t mangle -A MARK-gw2 -m comment --comment 'send via 192.168.1.1' -j MARK --set-mark 102 /usr/sbin/iptables -t mangle -A MARK-gw2 -j CONNMARK --save-mark /usr/sbin/iptables -t mangle -A MARK-gw2 -j RETURN /usr/sbin/iptables -t mangle -A PREROUTING -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment "this stream is already marked; escape early" -m mark ! 
--mark 0 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i eth0 -m conntrack --ctstate NEW -j MARK-gw1 /usr/sbin/iptables -t mangle -A PREROUTING -m comment --comment 'prevent asynchronous routing' -i wlan0 -m conntrack --ctstate NEW -j MARK-gw2 /usr/sbin/iptables -t mangle -N DEF_POL /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p tcp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'default balancing' -p udp -m conntrack --ctstate ESTABLISHED,RELATED -j CONNMARK --restore-mark /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 tcp' -p tcp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j MARK-gw1 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw1 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 0 -j ACCEPT /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j MARK-gw2 /usr/sbin/iptables -t mangle -A DEF_POL -m comment --comment 'balance gw2 udp' -p udp -m conntrack --ctstate NEW -m statistic --mode nth --every 2 --packet 1 -j ACCEPT /usr/sbin/iptables -t mangle -A PREROUTING -j DEF_POL /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound eth0' -o eth0 -s 192.168.0.0/16 -m mark --mark 101 -j SNAT --to-source 192.168.178.95 /usr/sbin/iptables -t nat -A POSTROUTING -m comment --comment 'snat outbound wlan0' -o wlan0 -s 192.168.0.0/16 -m mark --mark 102 -j SNAT --to-source 192.168.1.95 /usr/sbin/ip route flush cache (this script was made by fukawi2, I don't know how to use iptables) but I have no Internet connection... output of iptables -t mangle -nvL Chain PREROUTING (policy ACCEPT 1254K packets, 1519M bytes) pkts bytes target prot opt in out source destination 1278K 1535M CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK restore 21532 15M ACCEPT all -- * * 0.0.0.0/0 0.0.0.0/0 /* this stream is already marked; escape early */ mark match ! 
0x0 582 72579 MARK-gw1 all -- eth0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 2376 696K MARK-gw2 all -- wlan0 * 0.0.0.0/0 0.0.0.0/0 /* prevent asynchronous routing */ ctstate NEW 1257K 1520M DEF_POL all -- * * 0.0.0.0/0 0.0.0.0/0 Chain INPUT (policy ACCEPT 1276K packets, 1535M bytes) pkts bytes target prot opt in out source destination Chain FORWARD (policy ACCEPT 0 packets, 0 bytes) pkts bytes target prot opt in out source destination Chain OUTPUT (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain POSTROUTING (policy ACCEPT 870K packets, 97M bytes) pkts bytes target prot opt in out source destination Chain DEF_POL (1 references) pkts bytes target prot opt in out source destination 1236K 1517M CONNMARK tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 15163 2041K CONNMARK udp -- * * 0.0.0.0/0 0.0.0.0/0 /* default balancing */ ctstate RELATED,ESTABLISHED CONNMARK restore 555 33176 MARK-gw1 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 555 33176 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 tcp */ ctstate NEW statistic mode nth every 2 277 16516 MARK-gw2 tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 277 16516 ACCEPT tcp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 tcp */ ctstate NEW statistic mode nth every 2 packet 1 1442 384K MARK-gw1 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 1442 384K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw1 udp */ ctstate NEW statistic mode nth every 2 720 189K MARK-gw2 udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 720 189K ACCEPT udp -- * * 0.0.0.0/0 0.0.0.0/0 /* balance gw2 udp */ ctstate NEW statistic mode nth every 2 packet 1 Chain MARK-gw1 (3 references) pkts bytes target prot opt in out source destination 2579 490K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.178.1 */ MARK set 0x65 2579 490K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 2579 490K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0 Chain MARK-gw2 (3 references) pkts bytes target prot opt in out source destination 3373 901K MARK all -- * * 0.0.0.0/0 0.0.0.0/0 /* send via 192.168.1.1 */ MARK set 0x66 3373 901K CONNMARK all -- * * 0.0.0.0/0 0.0.0.0/0 CONNMARK save 3373 901K RETURN all -- * * 0.0.0.0/0 0.0.0.0/0

    Read the article

  • Professional Scrum Developer (.NET) Training in London

    - by Martin Hinshelwood
    On the 26th - 30th July, in Microsoft’s offices in London, Adam Cogan from SSW will be presenting the first Professional Scrum Developer course in the UK. I will be teaching this course alongside Adam, and it is a fantastic experience. You are split into teams and go head-to-head to deliver units of potentially shippable work in four two-hour sprints. The Professional Scrum Developer course is the only course endorsed by both Microsoft and Ken Schwaber, and they have worked together very effectively in bringing this course to fruition. This course is the brainchild of Richard Hundhausen, a Microsoft Regional Director; both Adam and I attended the Trainer Prep in Sydney when he was there earlier this year. He is a fantastic trainer, and no matter where you do this course you can be safe in the knowledge that he has trained and vetted all of the teachers. A tools version of Ken, if you will.

    Find a course and register
    Download this syllabus
    Download the Scrum Guide

    What is the Professional Scrum Developer course all about?
    The Professional Scrum Developer course is a unique and intensive five-day experience for software developers. The course guides teams on how to turn product requirements into potentially shippable increments of software using the Scrum framework, Visual Studio 2010, and modern software engineering practices. Attendees will work in self-organizing, self-managing teams using a common instance of Team Foundation Server 2010.

    Who should attend this course?
    This course is suitable for any member of a software development team – architect, programmer, database developer, tester, etc. Entire teams are encouraged to attend and experience the course together, but individuals are welcome too. Attendees will self-organize to form cross-functional Scrum teams. These teams require an aggregate of skills specific to the selected case study. Please see the last page of this document for specific details. Product Owners, ScrumMasters, and other stakeholders are welcome too, but keep in mind that everyone who attends will be expected to commit to work and pull their weight on a Scrum team.

    What should you know by the end of the course?
    Scrum will be experienced through a combination of lecture, demonstration, discussion, and hands-on exercises. Attendees will learn how to do Scrum correctly while being coached and critiqued by the instructor, in the following topic areas:
    Form effective teams
    Explore and understand legacy “Brownfield” architecture
    Define quality attributes, acceptance criteria, and “done”
    Create automated builds
    How to handle software hotfixes
    Verify that bugs are identified and eliminated
    Plan releases and sprints
    Estimate product backlog items
    Create and manage a sprint backlog
    Hold an effective sprint review
    Improve your process by using retrospectives
    Use emergent architecture to avoid technical debt
    Use Test Driven Development as a design tool
    Set up and leverage continuous integration
    Use Test Impact Analysis to decrease testing times
    Manage SQL Server development in an Agile way
    Use .NET and T-SQL refactoring effectively
    Build, deploy, and test SQL Server databases
    Create and manage test plans and cases
    Create, run, record, and play back manual tests
    Set up a branching strategy and branch code
    Write more maintainable code
    Identify and eliminate people and process dysfunctions
    Inspect and improve your team’s software development process

    What does the week look like?
    This course is a mix of lecture, demonstration, group discussion, simulation, and hands-on software development. The bulk of the course will be spent working as a team on a case study application, delivering increments of new functionality in mini-sprints. Here is the week at a glance: Monday morning and most of the day Friday will be spent with the computers powered off, so you can focus on sharpening your game of Scrum and avoiding the common pitfalls when implementing it.
    The Sprints
    Timeboxing is a critical concept in Scrum as well as in this course. We expect each team and student to understand and obey all of the timeboxes. The timebox duration will always be clearly displayed during each activity. Expect the instructor to enforce it. Each of the ½ day sprints will roughly follow this schedule:
    - Instruction (60 minutes): Presentation and demonstration of new and relevant tools & practices
    - Sprint planning meeting (10 minutes): Product owner presents backlog; each team commits to delivering functionality
    - Sprint planning meeting (10 minutes): Each team determines how to build the functionality
    - The Sprint (120 minutes): The team self-organizes and self-manages to complete their tasks
    - Sprint Review meeting (30 minutes): Each team will present their increment of functionality to the other teams
    - Sprint Retrospective (10 minutes): A group retrospective meeting will be held to inspect and adapt
    Each team is expected to self-organize and manage their own work during the sprint. Pairing is highly encouraged. The instructor/product owner will be available if there are questions or impediments, but will be hands-off by default. You should be prepared to communicate and work with your team members in order to achieve your sprint goal. If you have development-related questions or get stuck, your partner or team should be your first level of support.
    Module 1: INTRODUCTION
    This module provides a chance for the attendees to get to know the instructors as well as each other. The Professional Scrum Developer program, as well as the day by day agenda, will be explained. Finally, the Scrum team will be selected and assembled so that the forming, storming, norming, and performing can begin.
    - Trainer and student introductions
    - Professional Scrum Developer program
    - Agenda
    - Logistics
    - Team formation
    - Retrospective
    Module 2: SCRUMDAMENTALS
    This module provides a level-setting understanding of the Scrum framework including the roles, timeboxes, and artifacts. The team will then experience Scrum firsthand by simulating a multi-day sprint of product development, including planning, review, and retrospective meetings.
    - Scrum overview
    - Scrum roles
    - Scrum timeboxes (ceremonies)
    - Scrum artifacts
    - Simulation
    - Retrospective
    It’s required that you read Ken Schwaber’s Scrum Guide in preparation for this module and course.
    Module 3: IMPLEMENTING SCRUM IN VISUAL STUDIO 2010
    This module demonstrates how to implement Scrum in Visual Studio 2010 using a Scrum process template*. The team will learn the mapping between the Scrum concepts and how they are implemented in the tool. After connecting to the shared Team Foundation Server, the team members will then return to the simulation – this time using Visual Studio to manage their product development.
    - Mapping Scrum to Visual Studio 2010
    - User Story work items
    - Task work items
    - Bug work items
    - Demonstration
    - Simulation
    - Retrospective
    Module 4: THE CASE STUDY
    In this module the team is introduced to their problem domain for the week.
    A kickoff meeting by the Product Owner (the instructor) will set the stage for the why and what of the work that will take place during the upcoming sprints. The team will then define the quality attributes of the project and their definition of “done.” The legacy application code will be downloaded, built, and explored, so that any bugs can be discovered and reported.
    - Introduction to the case study
    - Download the source code, build, and explore the application
    - Define the quality attributes for the project
    - Define “done”
    - How to file effective bugs in Visual Studio 2010
    - Retrospective
    Module 5: HOTFIX
    This module drops the team directly into a Brownfield (legacy) experience by forcing them to analyze the existing application’s architecture and code in order to locate and fix the Product Owner’s high-priority bug(s). The team will learn best practices around finding, testing, fixing, validating, and closing a bug.
    - How to use Architecture Explorer to visualize and explore
    - Create a unit test to validate the existence of a bug
    - Find and fix the bug
    - Validate and close the bug
    - Retrospective
    Module 6: PLANNING
    This short module introduces the team to release and sprint planning within Visual Studio 2010. The team will define and capture their goals as well as other important planning information.
    - Release vs. Sprint planning
    - Release planning and the Product Backlog
    - Product Backlog prioritization
    - Acceptance criteria and tests
    - Sprint planning and the Sprint Backlog
    - Creating and linking Sprint tasks
    - Retrospective
    At this point the team will have the knowledge of Scrum, Visual Studio 2010, and the case study application to begin developing increments of potentially shippable functionality that meet their definition of done.
    Module 7: EMERGENT ARCHITECTURE
    This module introduces the architectural practices and tools a team can use to develop a valid design on which to develop new functionality. The teams will learn how Scrum supports good architecture and design practices. After the discussion, the teams will be presented with the product owner’s prioritized backlog so that they may select and commit to the functionality they can deliver in this sprint.
    - Architecture and Scrum
    - Emergent architecture
    - Principles, patterns, and practices
    - Visual Studio 2010 modeling tools
    - UML and layer diagrams
    - SPRINT 1
    - Retrospective
    Module 8: TEST DRIVEN DEVELOPMENT
    This module introduces Test Driven Development as a design tool and how to implement it using Visual Studio 2010. To maximize productivity and quality, a Scrum team should set up Continuous Integration to regularly build every team member’s code changes and run regression tests. Refactoring will also be defined and demonstrated in combination with Visual Studio’s Test Impact Analysis to efficiently re-run just those tests which were impacted by refactoring.
    - Continuous integration
    - Team Foundation Build
    - Test Driven Development (TDD)
    - Refactoring
    - Test Impact Analysis
    - SPRINT 2
    - Retrospective
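    To give a flavour of the red/green cycle the TDD module covers, here is a minimal sketch in C# with MSTest – the class and scenario are illustrative only, not part of the course’s case study:

        using Microsoft.VisualStudio.TestTools.UnitTesting;

        namespace Sprint2.Tests // hypothetical namespace
        {
            [TestClass]
            public class ShippingCalculatorTests
            {
                // Red: write this test first and watch it fail before ShippingCalculator exists.
                [TestMethod]
                public void OrdersOfOneHundredDollarsOrMoreShipFree()
                {
                    var calculator = new ShippingCalculator();
                    Assert.AreEqual(0m, calculator.CostFor(orderTotal: 120m));
                }
            }

            // Green: the simplest implementation that makes the test pass; refactor afterwards.
            public class ShippingCalculator
            {
                public decimal CostFor(decimal orderTotal)
                {
                    return orderTotal >= 100m ? 0m : 5.95m;
                }
            }
        }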
    Module 9: AGILE DATABASE DEVELOPMENT
    This module lets the SQL Server database developers in on a little secret – they can be agile too. By using the database projects in Visual Studio 2010, the database developers can join the rest of the team. The students will see how to apply Agile database techniques within Visual Studio to support the SQL Server 2005/2008/2008R2 development lifecycle.
    - Agile database development
    - Visual Studio database projects
    - Importing schema and scripts
    - Building and deploying
    - Generating data
    - Unit testing
    - SPRINT 3
    - Retrospective
    Module 10: SHIP IT
    Teams need to know that just because they like the functionality doesn’t mean the Product Owner will. This module revisits acceptance criteria as it pertains to acceptance testing. By refining acceptance criteria into manual test steps, team members can execute the tests, recording the results and reporting bugs in a number of ways. Manual tests will be defined and executed using the Microsoft Test Manager tool. As the Sprint completes and an increment of functionality is delivered, the team will also learn why and when they should create a branch of the codeline.
    - Acceptance criteria
    - Testing in Visual Studio 2010
    - Microsoft Test Manager
    - Writing and running manual tests
    - Branching
    - SPRINT 4
    - Retrospective
    Module 11: OVERCOMING DYSFUNCTION
    This module introduces the many types of people, process, and tool dysfunctions that teams face in the real world. Many dysfunctions and scenarios will be identified, along with ideas and discussion for how a team might mitigate them. This module will enable you and your team to move toward independence and improve your game of Scrum when you depart class.
    - Scrum-butts and flaccid Scrum
    - Best practices working as a team
    - Team challenges
    - ScrumMaster challenges
    - Product Owner challenges
    - Stakeholder challenges
    - Course Retrospective
    What will be expected of you and your team?
    This is a unique course in that it’s technically-focused, team-based, and employs timeboxes. It demands that the members of the teams self-organize and self-manage their own work to collaboratively develop increments of software. All attendees must commit to:
    - Pay attention to all lectures and demonstrations
    - Participate in team and group discussions
    - Work collaboratively with other team members
    - Obey the timebox for each activity
    - Commit to work and do your best to deliver
    All teams should have these skills:
    - Understanding of Scrum
    - Familiarity with Visual Studio 2010
    - C#, .NET 4.0 & ASP.NET 4.0 experience*
    - SQL Server 2008 development experience
    - Software testing experience
    * Check with the instructor ahead of time for the exact technologies
    Self-organising teams
    Another unique attribute of this course is that it’s a technical training class being delivered to teams of developers, not pairs, and not individuals. Ideally, your actual software development team will attend the training to ensure that all necessary skills are covered. However, if you wish to attend an open enrolment course alone or with just a couple of colleagues, realize that you may be placed on a team with other attendees. The instructor will do his or her best to ensure that each team is cross-functional to tackle the case study, but there are no guarantees. You may be required to try a new role, learn a new skill, or pair with somebody unfamiliar to you. This is just good Scrum!
    Who should NOT take this course?
    Because of the nature of this course, as explained above, certain types of people should probably not attend:
    - Students requiring command and control style instruction – there are no prescriptive/step-by-step (think traditional Microsoft Learning) labs in this course
    - Students who are unwilling to work within a timebox
    - Students who are unwilling to work collaboratively on a team
    - Students who don’t have any skill in any of the software development disciplines
    - Students who are unable to commit fully to their team – not only will this diminish the student’s learning experience, but it will also impact their team’s learning experience
    Find a course and register
    Download this syllabus
    Download the Scrum Guide
    Technorati Tags: Scrum,SSW,Pro Scrum Dev

    Read the article

  • What functional language is most suited to create games with?

    - by Ricket
    I have had my eye on functional programming languages for a while, but am hesitating to actually get into them. I think it's about time I at least start glancing in that direction to make sure I'm ready for anything. I've seen talk of Haskell, F#, Scala, and so on, but I have no clue about the differences between the languages and their communities, nor do I particularly care; except in the context of game development. So, from a game development standpoint, which functional programming language has the most features suited to game programming? For example, are there any functional game development libraries/engines/frameworks or graphics engines for functional languages? Is there a language that better handles the data structures commonly used in game development? Bottom line: which functional programming language is best for functional game programming, and why? I believe/hope this question will declare a clear best language, so I haven't marked it CW despite its subjective tendency.

    Read the article

  • What does the BIOS setting XHCI Pre-Boot Mode do?

    - by Jamie Kitson
    I have a BIOS setting called XHCI Pre-Boot Mode. If I have this enabled, USB devices which aren't plugged in at boot are never recognised; if I set it to Disabled, then USB devices work normally. The brief BIOS description says "Enable this option if you need USB3.0 support in DOS," which I don't. But it also says "Please note that XHCI controller will be disabled if you set this item as Disabled." So does that mean that USB3 is disabled with this option? Here's a picture of the screen: http://www.flickr.com/photos/jamiekitson/8017563004/

    Read the article

  • Can the Dell PowerEdge T320 boot from a USB stick in hard drive emulation mode?

    - by Icydog
    I am trying to install FreeNAS on a Dell PowerEdge T320. I've followed the instructions here (http://doc.freenas.org/index.php/Burning_an_IMG_File) to write the .img file to four different USB sticks with dd if=FreeNAS-9.2.1.5-RELEASE-x64.img of=/dev/sde, but I can't boot off of any of them. When I try to boot, I get the following screen:

        F1 FreeBSD
        F2 FreeBSD
        F3 Drive 0

    Then it auto-selects F1 and either prints # about once a second forever, or it sits with a blinking cursor doing nothing. Forum posts and the wiki page linked above say to set the USB boot emulation mode to hard disk (as opposed to auto or CD/DVD/floppy), and older Dell PowerEdge models including the T310 have this option in the BIOS settings, but I cannot find it on the T320 with the latest BIOS 2.1.2. I have even tried writing the USB image to a USB hard disk, but with the same lack of success. Has anyone been able to find this USB boot setting on a T320, or been able to boot FreeNAS/FreeBSD from USB in some other way on a T320?
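    As a sanity check before blaming the BIOS, it may be worth verifying that the image actually made it onto the stick – a sketch assuming GNU coreutils and that the stick is still /dev/sde:

        # Compare the first <image-size> bytes of the device against the image file
        sudo cmp -n "$(stat -c%s FreeNAS-9.2.1.5-RELEASE-x64.img)" \
            FreeNAS-9.2.1.5-RELEASE-x64.img /dev/sde && echo "write verified"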

    Read the article

  • Is bonding mode=5 a solution against MAC flapping?

    - by Yuri
    There are two interconnected Cisco WS-2950T switches. The first NIC of the bonding interface is connected to a GBIC port on the first switch, and the second NIC is connected to a GBIC port on the second switch. Of course, the switches see the bonding MAC address on only one interface (e.g. the GBIC on the first switch), and all incoming traffic for the bonding interface passes through that GBIC. But in "mode=5" all outgoing traffic is distributed between all the interfaces that make up the bond. In this case, will the packets sent out through the second switch be dropped, so that everything ends up going through the first switch anyway? Or will the distribution work?
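    For reference, mode=5 is balance-tlb (adaptive transmit load balancing), which per the kernel bonding documentation requires no special switch support: outgoing traffic is balanced across the slaves, while incoming traffic is received only on the current active slave. A minimal configuration sketch (file location and interface names are illustrative; adjust for your distribution):

        # /etc/modprobe.d/bonding.conf
        alias bond0 bonding
        # mode=5 (balance-tlb): transmit load balancing, receive pinned to the active slave
        options bonding mode=5 miimon=100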

    Read the article

  • Mac OS X Server 10.6.6 DNS not responding properly, get a "Truncated, retrying in TCP mode" for subdomain

    - by Eric Arseneau
    If I do an nslookup on youtube.com, no problem; if I do one on www.youtube.com, failure. See details below.

        [~] nslookup youtube.com
        Server: 192.168.1.1
        Address: 192.168.1.1#53

        Non-authoritative answer:
        Name: youtube.com
        Address: 74.125.127.93
        Name: youtube.com
        Address: 74.125.47.93
        Name: youtube.com
        Address: 74.125.95.93

        [~] nslookup www.youtube.com
        ;; Truncated, retrying in TCP mode.
        ;; Connection to 192.168.1.1#53(192.168.1.1) for www.youtube.com failed: connection refused.

    If I do the same from a Windows machine it's fine; it's when I do it from a Mac workstation that I get the issue. I have rebooted both server and workstation, and I did a changeip, but nothing is working. Any recommendations?
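    One way to narrow this down from the Mac is to force a TCP query and probe TCP port 53 directly – both tools ship with OS X (a diagnostic sketch; the truncation message suggests the www.youtube.com reply is too large for UDP, so the resolver retries over TCP and is refused):

        # Force the same lookup over TCP
        dig @192.168.1.1 www.youtube.com +tcp
        # Probe whether anything is listening on TCP port 53 at all
        nc -z 192.168.1.1 53 && echo "TCP 53 open" || echo "TCP 53 refused"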

    Read the article

  • Silverlight DataGrid not stretching to accommodate all items in data source?

    - by bplus
    I'm having problems getting a Silverlight DataGrid to stretch to accommodate all the items in its data source. I've got a Grid that contains two DataGrids.
    - I've tried setting Height="Auto" on the Grid and the DataGrids.
    - I've tried setting HorizontalContentAlignment="Stretch" on the Grid and the DataGrids.
    - The object tag has height="100%".
    - I've set Height="*" on the RowDefinitions for the Grid.
    Any help would be very much appreciated! Here's the code listing:

        <UserControl x:Class="TimeSheet.SilverLight.MainPage"
            xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
            xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
            xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
            xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
            xmlns:local="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls.Data"
            mc:Ignorable="d">
          <Grid x:Name="LayoutRoot" Height="Auto" ShowGridLines="True" HorizontalAlignment="Stretch">
            <Grid.RowDefinitions>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
              <RowDefinition Height="*"/>
            </Grid.RowDefinitions>
            <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False"
                            VerticalAlignment="Top" x:Name="NonProjectGrid" Grid.Row="0">
              <local:DataGrid.Columns>
                <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" />
                <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" />
              </local:DataGrid.Columns>
            </local:DataGrid>
            <local:DataGrid BorderThickness="5" HorizontalContentAlignment="Stretch" AutoGenerateColumns="False"
                            VerticalAlignment="Top" x:Name="ProjectGrid" Grid.Row="2">
              <local:DataGrid.Columns>
                <local:DataGridTextColumn Header="Bug Number" Binding="{Binding BugNo}" />
                <local:DataGridTextColumn Header="Activity" Binding="{Binding TaskName}" />
                <local:DataGridTextColumn Header="Monday" Binding="{Binding Monday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Tuesday" Binding="{Binding Tuesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Wednesday" Binding="{Binding Wednesday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Thursday" Binding="{Binding Thursday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Friday" Binding="{Binding Friday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Saturday" Binding="{Binding Saturday, Mode=TwoWay}" />
                <local:DataGridTextColumn Header="Sunday" Binding="{Binding Sunday, Mode=TwoWay}" />
              </local:DataGrid.Columns>
            </local:DataGrid>
            <Button Name="AddBugBtn" Width="125" Height="25" Content="Add From Bugzilla" Click="AddBug_Click" Grid.Row="3" HorizontalAlignment="Right"></Button>
            <Button Name="SaveBtn" Width="125" Height="25" Content="Save" Click="Save_Click" Grid.Row="3" HorizontalAlignment="Left"></Button>
          </Grid>
        </UserControl>
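    For what it's worth, one direction to experiment with (a sketch, not a confirmed fix): star-sized rows divide the available height evenly and constrain each DataGrid to its share, whereas Auto-sized rows let each DataGrid measure to the full height of its items:

        <Grid.RowDefinitions>
            <!-- Auto sizes each row to its content, so the DataGrids can grow with their items -->
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>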

    Read the article

< Previous Page | 324 325 326 327 328 329 330 331 332 333 334 335  | Next Page >