Search Results

Search found 32625 results on 1305 pages for 'ui development'.


  • How to open multiple popup windows with jQuery on form submission (popups should be relative to the form)

    - by Ole Jak
    I have an HTML form like this:

        <form>
          <fieldset class="ui-widget-content ui-corner-all">
            <select id="Streams" class="multiselect ui-widget-content ui-corner-all" multiple="multiple" name="Streams[]">
              <option value="35">Example Name (35)</option>
              <option value="44">Example Name (44)</option>
              <option value="5698">Example Name (5698)</option>
              <option value="777">Example Name (777)</option>
              <option value="12">Example Name (12)</option>
            </select>
            <input type="submit" class="ui-state-default ui-corner-all" name="submitForm" id="submitForm" value="Play Stream from selected URL's!"/>
          </fieldset>
        </form>

    In my JS + HTML page I use jQuery. As you can see, I allow the user to select multiple items. On Submit button click, I want to open as many popup windows as there are items selected in the list. Each popup window should open a URL like www.example.com/test.php?id=OPTION_SELECTED. And by "popup window" I mean a real browser window, so for each of the selected options I'll get a popup with a different URL. Please help.
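
    For what it's worth, here is a minimal jQuery sketch of one way to do this (the test.php URL comes from the question; the window names and sizes are arbitrary):

        $('#submitForm').click(function (event) {
            event.preventDefault(); // stop the normal form submission

            // Open one window per selected <option>, passing its value as ?id=
            $('#Streams option:selected').each(function (index) {
                var url = 'http://www.example.com/test.php?id=' + $(this).val();
                // A unique window name per option forces a separate popup
                window.open(url, 'stream' + index, 'width=640,height=480');
            });
        });

    Because all the window.open calls happen inside a single user-initiated click handler, most browsers will allow them, but some popup blockers only permit the first window per gesture.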

    Read the article

  • "Subforms" associated with tree view in VB

    - by knomdeguerre
    I am using VB Express 2008 to demonstrate my ideas for an improved UI for an existing product for my colleagues at work. The current UI has a certain page with ten tabs, allowing the user to define up to ten "things". The available choices for each of the ten "things" are all the same. On each of the ten tabs, there is a checkbox to enable that definition. Generally, a user will never use more than 5 or 6 unique definitions; the rest will remain disabled. So far, my prototype has a tree view control with one branch to contain this list of definitions, plus Add and Delete buttons. My idea is: there is one sub-branch to start with (corresponding to the first tab in the current UI); if the user wants additional definitions, they click Add and other branches are added to the tree view, up to a maximum of ten. I think I should be able to create a "class" that has a sub-UI (like a sub-form in Access) along with behavior code, that can be instantiated with each press of the Add button; each instantiation's settings can be set independently and displayed in the main UI form (in a panel or frame) when selected in the tree view. For example, suppose the user Adds to make a total of three definitions: the tree view now has three sub-branches, each of which presents the same sub-UI with settings that can be set specific to the selected sub-branch. I'm sure it's possible but not sure how to do it. I know a comprehensive "answer" might be complicated and long, but I may just need some quick hints to get underway - don't be shy! Thanks in advance!

    Read the article

  • How to design app to be modular / support plugins

    - by Lee
    I'm currently in the process of refactoring my webplayer so that we'll be more easily able to run it on our other internet radio stations. Much of the setup between these players will be very similar; however, some will need different UI plugins / other plugins. Currently in the webplayer I do something like this in its init():

        _this.ui = new UI();
        _this.ui.playlist = new Playlist();
        _this.ui.channelDropdown = new ChannelDropdown();
        _this.ui.timecode = new Timecode();
        // etc.

    This works fine, but it locks me into requiring those objects at run time. What I'd like to do is add those based on the station's needs. Basically my question is: do I need to add some kind of "addPlugin()" functionality here? And if I do that, do I need to constantly check from my WebPlayer object whether that plugin exists before it attempts to use it? Like...

        if (_hasPlugin('playlist')) this.plugins.playlist.add(track);

    I apologize if some of this might not be clear... really trying to get my head wrapped around all of this. I feel I'm closer but I'm still stuck. Any advice on how I should proceed with this would be greatly appreciated. Thanks in advance, Lee
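
    As a rough sketch of the registry idea (method names like addPlugin and invoke are illustrative, not an established API), the player can route all plugin calls through one guard so callers never test for existence themselves:

        function WebPlayer() {
            this.plugins = {};
        }

        // Register a plugin instance under a name the player can look up later.
        WebPlayer.prototype.addPlugin = function (name, plugin) {
            this.plugins[name] = plugin;
            return this; // allow chaining
        };

        // Call a plugin method only if the plugin and method exist; otherwise no-op.
        WebPlayer.prototype.invoke = function (name, method) {
            var plugin = this.plugins[name];
            if (plugin && typeof plugin[method] === 'function') {
                return plugin[method].apply(plugin, Array.prototype.slice.call(arguments, 2));
            }
        };

        // Per-station setup registers only the plugins that station needs
        // (Playlist, ChannelDropdown and track are the question's own objects):
        var player = new WebPlayer();
        player.addPlugin('playlist', new Playlist());
        player.addPlugin('channelDropdown', new ChannelDropdown());

        // Callers no longer need an explicit _hasPlugin() check:
        player.invoke('playlist', 'add', track);

    With this shape, each station's config is just a list of plugin names to register, and invoke degrades to a no-op when a plugin isn't installed.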

    Read the article

  • System Error when running PyQt4's loadUi()

    - by user633804
    Hello, I'm pretty new to Qt, Python, and their combinations. I'm currently writing a QGIS plugin in Python (I used Qt Creator 2.1 (Qt Designer 4.7) to generate a .ui file and am now trying to use it for a Quantum GIS plugin that's written in Python 2.5 and running in the Quantum GIS Python 2.5 console). I am running into trouble loading the .ui file dynamically, when the program runs the loadUi() function. What throws me off is that the error occurs outside my script. Does that mean I'm passing something wrong into it? Where does the error come in? Any hints on what could be wrong?

        code_dir = os.path.dirname(os.path.abspath(__file__))
        self.ui = loadUi(os.path.join(code_dir, "Ui_myfile.ui"), self)

    This is the error I am getting (minus the first paragraph):

        File "C:/Dokumente und Einstellungen/name.name/.qgis/python/plugins\myfile\myfile_gui.py", line 42, in __init__
          self.ui = loadUi(os.path.join(code_dir, "Ui_myfile.ui"), self)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\__init__.py", line 112, in loadUi
          return DynamicUILoader().loadUi(uifile, baseinstance)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\Loader\loader.py", line 21, in loadUi
          return self.parse(filename)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 768, in parse
          actor(elem)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 616, in createUserInterface
          self.traverseWidgetTree(elem)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 594, in traverseWidgetTree
          handler(self, child)
        File "C:\PROGRA~1\QUANTU~1\apps\Python25\lib\site-packages\PyQt4\uic\uiparser.py", line 233, in createWidget
          topwidget.setCentralWidget(widget)
        SystemError: error return without exception set

    Read the article

  • JavaScript: access object variables from functions

    - by Parhs
    function init_exam_chooser(id, mode) {
        this.id = id;
        this.table = $("#" + this.id);
        this.htmlRowOrder = "<tr>" + $("#" + this.id + " tbody tr:eq(0)").html() + "</tr>";
        this.htmlRowNew = "<tr>" + $("#" + this.id + " tbody tr:eq(1)").html() + "</tr>";
        $("#" + this.id + " tbody tr").remove(); // Initialization

        var rowNew = $(this.htmlRowNew);

        rowNew.find("input[type='text']").eq(0).autocomplete({
            source: function (req, resp) {
                $.ajax({
                    url: "/medilab/prototypes/exams/searchQuick",
                    cache: false,
                    dataType: "json",
                    data: { codeName: req.term },
                    success: function (data) { resp(data); }
                });
            },
            focus: function (event, ui) { return false; },
            minLength: 2
        });

        rowNew.find("input[type='text']").eq(1).autocomplete({
            source: function (req, resp) {
                $.ajax({
                    url: "/medilab/prototypes/exams/searchQuick",
                    cache: false,
                    dataType: "json",
                    data: { name: req.term },
                    success: function (data) { resp(data); }
                });
            },
            focus: function (event, ui) { return false; },
            minLength: 2
        });

        rowNew.find("input[type='text']").bind("autocompleteselect", function (event, ui) {
            alert(htmlRowOrder);
            var row = $(htmlRowOrder);
            $(table).find("tbody tr:last").before(row);
            alert(ui.item.id);
        });

        rowNew.appendTo($(this.table).find("tbody"));
    }

    The problem is in the "autocompleteselect" handler at the end: how can I access htmlRowOrder there? I tried this.htmlRowOrder and it didn't work... Any ideas?
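
    For reference, the usual cause is that `this` inside a jQuery event handler refers to the DOM element, not your object. A self-contained sketch of the standard fix (ExamChooser is an illustrative rename, not the asker's code) captures the outer `this` in a local variable before the callback:

        function ExamChooser(id) {
            var self = this; // capture the instance; `this` is rebound inside callbacks

            this.table = $("#" + id);
            this.htmlRowOrder = "<tr>" + this.table.find("tbody tr:eq(0)").html() + "</tr>";

            this.table.find("input[type='text']").bind("autocompleteselect", function (event, ui) {
                // In here, `this` is the <input> element, so read the field via `self`
                var row = $(self.htmlRowOrder);
                self.table.find("tbody tr:last").before(row);
            });
        }

    The same applies to table: refer to self.table rather than a bare identifier, which JavaScript would otherwise resolve as a global.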

    Read the article

  • jQuery trigger function on change

    - by Michael Pasqualone
    I have the following two slider functions, which work well. I have diskAmount and transferAmount stored in global vars; what I am now trying to figure out is how to get the sum of the two to show initially as the monthly fee, and then update whenever either of the two sliders changes. In my case the initial state should be $24.75, and the fee should update as the sliders change. Can anyone point me in the right direction? I can get the initial fee to show, but can't figure out how to get one function to call another function within jQuery.

        $(function () {
            $("#disk").slider({
                value: 3,
                min: 1,
                max: 80,
                step: 1,
                slide: function (event, ui) {
                    $("#diskamount").val(ui.value + ' Gb');
                    $("#diskamountUnit").val('$' + parseFloat(ui.value * diskCost).toFixed(2));
                }
            });

            // Set initial diskamount state
            $("#diskamount").val($("#disk").slider("value") + ' Gb');

            // Set initial diskamountUnit state
            diskAmount = $("#disk").slider("value") * diskCost;
            $("#diskamountUnit").val('$' + diskAmount.toFixed(2));
        });

        $(function () {
            $("#data").slider({
                value: 25,
                min: 1,
                max: 200,
                step: 1,
                slide: function (event, ui) {
                    $("#dataamount").val(ui.value + ' Gb');
                    $("#dataamountUnit").val('$' + parseFloat(ui.value * transferCost).toFixed(2));
                }
            });

            // Set initial dataamount state
            $("#dataamount").val($("#data").slider("value") + ' Gb');

            // Set initial dataamountUnit state
            transferAmount = $("#data").slider("value") * transferCost;
            $("#dataamountUnit").val('$' + transferAmount.toFixed(2));
        });
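
    One way to structure this (a sketch; #monthlyfee is an assumed element id for the fee display, not from the question) is to pull the total into a single function and call it from both slide handlers and once on load:

        function updateTotal(diskGb, dataGb) {
            // Fall back to the sliders' current values when no override is passed
            var disk = (diskGb !== undefined ? diskGb : $("#disk").slider("value")) * diskCost;
            var data = (dataGb !== undefined ? dataGb : $("#data").slider("value")) * transferCost;
            $("#monthlyfee").val('$' + (disk + data).toFixed(2));
        }

        // In #disk's slide handler add:  updateTotal(ui.value, undefined);
        // In #data's slide handler add:  updateTotal(undefined, ui.value);
        // The ui.value override matters because slider("value") still returns the
        // pre-drag value while the slide event is firing.

        $(function () {
            updateTotal(); // show the initial combined fee (the $24.75 in the question)
        });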

    Read the article

  • Is a text file with names/pixel locations something a graphic artist can/should produce? [on hold]

    - by edA-qa mort-ora-y
    I have an artist working on 2D graphics for a game UI. He has no problem producing a screenshot showing all the bits, but we're having some trouble exporting this all into an easy-to-use format. For example, take the game HUD, which is a bunch of elements laid out around the screen. He exports the individual graphics for each one, but how should he communicate the positioning of each of them? My desire is to have a YAML file (or some other simple markup file) that contains the name of each asset and the pixel position of that element. For example:

        fire_icon:
          pos: 20, 30
        fire_bar:
          pos: 30, 80

    Is producing such files a common task for a graphic artist? Is it reasonable to ask them to produce such files as part of their graphic work?
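
    For what it's worth, consuming such a file on the engine side is cheap. A sketch using the js-yaml package under Node (the file name hud_layout.yaml and the "x, y" string format are assumptions taken from the example above; the game's actual toolchain may differ):

        var fs = require('fs');
        var yaml = require('js-yaml'); // npm install js-yaml

        // Parse the artist-supplied layout into { name: { pos: "x, y" } } entries.
        var layout = yaml.load(fs.readFileSync('hud_layout.yaml', 'utf8'));

        Object.keys(layout).forEach(function (name) {
            var parts = layout[name].pos.split(',');
            var x = parseInt(parts[0], 10);
            var y = parseInt(parts[1], 10);
            console.log('place', name, 'at', x, y); // hand off to the renderer here
        });

    If hand-editing YAML proves error-prone for the artist, the same data could be exported from the art tool as JSON or CSV instead; the engine-side cost is the same.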

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
    A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge. I got seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply try and push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that tests can be set up quickly and run on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here:

        West Wind WebSurge Product Page
        Walk through Video
        Download link (zip)
        Install from Chocolatey
        Source on GitHub

    For a more detailed discussion of the whys and hows and some background, continue reading.

    How did I get here?

    When I started out on this path, I wasn't planning on building a tool like this myself - but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available for Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, both of whom needed to be able to run the tests. It had been a while since I looked at tools and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing and they actually looked more promising, but those wouldn't work well for our scenario, as the application is running inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey - presumably because the bandwidth required to test over the open Web can be enormous.
    When I asked around on Twitter what people were using - I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, with people asking to let them know what I found - apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use. As for Visual Studio, the higher end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with that: First, it's tied to Visual Studio so it's not very portable - you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course on using the Visual Studio Web test from Steve Smith. And of course you need to have one of the high end Visual Studio SKUs, and those are mucho dinero ($$$) - just for load testing that's rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple - and also somewhat limited - but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily, and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as do so many of Microsoft's tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable and now it simply doesn't work anymore on higher end hardware.

    West Wind WebSurge - Making Load Testing Quick and Easy

    So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions either using the built-in capture tool (big props to Eric Lawrence, Telerik and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers to create requests. I've been using this tool for 4 months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content and test that URL and its results, along with the ability to easily save one or more of those URLs. A few weeks back I made a walk-through video that goes over most of the features of WebSurge in some detail. Note that the UI has slightly changed since then, so there are some UI improvements.
    Most notably, the test results screen has recently been updated to a different layout and to provide more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations. For the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

    Session Capturing

    As you would expect, WebSurge works with Sessions of URLs that are played back under load. You can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications using the built-in Capture tool. With this tool you can capture anything HTTP - SSL requests and content from Web pages, AJAX calls, SOAP or REST services - again, anything that uses Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge Session capture tool. Alternately you can actually use Fiddler as well, and then export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to actually install WebSurge, or for your customers to provide an exact playback scenario for a given set of URLs that cause a problem, perhaps. Note that not all applications work with Fiddler's proxy unless you configure a proxy. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default. For those .NET applications you can explicitly override proxy settings to capture those requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally you can provide URL filters in the configuration file - filter strings that, if contained in a URL, will cause requests to be ignored. Again this is useful if you don't filter by domain but you want to filter out things like static image, CSS and script files. Often you're not interested in the load characteristics of these static and usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

    SSL Captures require Fiddler

    Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler and therefore with WebSurge.

    Session Storage

    A group of URLs entered or captured makes up a Session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
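
    As a concrete illustration, a session file consistent with that description might look roughly like this - two GET entries with the full URL on the first line of each block (this is a hypothetical sketch based on the description above; the exact separator line WebSurge/Fiddler writes may differ):

        GET http://west-wind.com/websurge/ HTTP/1.1
        Accept: text/html
        User-Agent: WestWind.WebSurge

        ------------------------------------------------------------------

        GET http://west-wind.com/websurge/features.aspx HTTP/1.1
        Accept: text/html
        User-Agent: WestWind.WebSurge
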
    The headers are standard HTTP headers, except that the full URL instead of just the domain-relative path is stored as part of the first HTTP header line, for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create Sessions by hand manually or under program control, writing out to a simple text file. The raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header. The rest of each header block is just plain standard HTTP headers, with each individual URL isolated by a separator line. The format used here is also what Fiddler produces for exports, so it's easy to exchange or view data either in Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well. Again - it's just plain HTTP headers, so anything you can do with HTTP can be added here.

    Use it for single URL Testing

    Incidentally, I've also found this form an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's actually an easy setup for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

    Overriding Cookies and Domains

    Speaking of HTTP headers - you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can actually override the cookie, which replaces the cookie for all requests with the cookie value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field to replace the existing cookie with a new one that is now valid. Likewise you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

    Running Load Tests

    Once you've created a Session you can specify the length of the test in seconds, and specify the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One option in the options list above is that you can also randomize the URLs so each thread runs requests in a different order. This avoids bunching up URLs initially when tests start, as all threads otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there's a live view of requests displayed in a console-like window. On the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed, and what the requests-per-second count currently is for all requests.
    Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice to see that something is happening, and also gives you a slight idea of what's happening with actual requests, once a lot of requests are processed this UI updating actually adds a lot of CPU overhead to the application, which may cause the actual load generated to be reduced. If you are running 1,000 requests a second there's not much to see anyway, as requests roll by way too fast to see individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

    Test Results

    When the test is done you get a simple results display: an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired, so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a max of 1000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the Session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the Request header to go visit the actual page - not so useful for a POST, but definitely useful for GET requests. Finally you can also get a few charts. The most useful one is probably the requests-per-second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML and HTML. Keep in mind that these files can get very large rather quickly, so exports can end up taking a while to complete.

    Command Line Interface

    WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full Session file. By default, when it runs, WebSurgeCli shows progress every second: total request count, failures and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests-per-second count, etc. It's also nice to use as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests, using either manual editing or the WebSurge UI.
    Current Status

    Currently West Wind WebSurge is still in beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run. A relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site - there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is - well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface originally that later got a lot more complex - yeah, that never happens, right? Unless there's a lot of interest I don't foresee re-writing the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so - it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

    Microsoft MVPs and Insiders get a free License

    If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

    Resources

    For more info on WebSurge and to download it to try it out, use the following links:

        West Wind WebSurge Home
        Download West Wind WebSurge
        Getting Started with West Wind WebSurge Video

    © Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET.

    Read the article

  • SQL SERVER – 4 Tips for ETL Software IDE Developers

    - by pinaldave
    In a previous blog, I introduced the notion of Semantic Types. To an end user, a seamlessly integrated semantic typing engine significantly increases the ease of use of an ETL IDE (integrated development environment, or developer studio). This led me to think about other ease-of-use issues I have encountered while building ETL applications. When I get stumped while programming, I find myself asking variations on these questions: "How do I…?" "Now what?" "Why isn't this working?" "Why do I have to redo the work I just did?" It seems to me that a good ETL IDE will anticipate these questions and seek to answer them before they are even asked. So here are my tips to help software vendors build developer IDEs that actually make development easier.

    How do I…? While developing an ETL application, have you ever asked yourself: "How do I set up the connection to my SQL Server database?", "How do I import my table definitions from Access?", etc.? An easy answer might be "read the manual", but sometimes product manuals are not robust or easily accessible. So, integrating robust how-to instructions directly into your ETL studio would help users get the information they need at the time they need it.

    Now what? IDEs in general know where you last clicked or performed an action using an input device such as a keyboard, so they should be able to reasonably predict the design context you are in and suggest the next steps accordingly. Context-sensitive suggestions based on the state of the user's work will help users move forward in ETL application development.

    Why isn't this working? Or, why do I have to wait until I compile to be told about a critical design issue? If an ETL IDE is smart enough to signal to users what in their design structures is left to be completed or has been completed incorrectly, then the developer can spend much less time in the design → compile → error-correct loop. Just-in-time validation helps users detect and correct programming errors earlier in the ETL development life cycle.

    Why do I have to redo the work I just did? In ETL development, schemas, transformation rules, connectivity objects, etc., can be reused in various situations. Using mouse clicks to build and manage libraries of reusable design objects means that the application development effort should decrease over time as the library acquires more objects.

    I met a great company at SQL PASS that is trying to address many of these usability issues. Check them out at www.expressor-software.com. What other ease-of-use suggestions do you have for ETL software vendors? Please post your valuable comments.

    Reference: Pinal Dave (http://blog.SQLAuthority.com) Filed under: Best Practices, Pinal Dave, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: ETL

    Read the article

  • Building The Right SharePoint Team For Your Organization

    - by Mark Rackley
    I see the question posted fairly often asking what kind of SharePoint team an organization should have. How many people do I need? What roles do I need to fill? What is best for my organization? Well, just like every other answer in SharePoint, the correct answer is "it depends". Do you ever get sick of hearing that??? I know I do… So, let me give you my thoughts and opinions based upon my experience and what I've seen, and let you come to your own conclusions.

    What are the possible SharePoint roles?

    I guess the first thing you need to understand is the different roles that exist in SharePoint (and there are LOTS). Remember, SharePoint is a massive beast and you will NOT find one person who can do it all. If you are hoping to find that person you will be sorely disappointed. For the most part this is true in SharePoint 2007 and 2010. However, generally things are improved in 2010 and easier for junior individuals to grasp.

    SharePoint Administrator

    The absolutely, positively only role that you should not be without, no matter the size of your organization or SharePoint deployment, is a SharePoint administrator. These guys are essential to keeping things running and figuring out what's wrong when things aren't running well. These unsung heroes do more before 10 am than I do all day. The bad thing is, when these guys are awesome, you don't even know they exist because everything is running so smoothly. You should definitely invest some time and money here to make sure you have some competent if not rockstar help. You need an admin who truly loves SharePoint and will go that extra mile when necessary. Let me give you a real world example of what I'm talking about: We have a rockstar admin… and I'm sure she's sick of my throwing her name around, so she'll just have to live with remaining anonymous in this post… sorry Lori… Anyway! A couple of weeks ago our server team came to us and said: Hi Lori, I'm finalizing the MOSS servers and doing updates that require a restart; can I restart them? Seems like a harmless request from your server team, does it not? Sure, go ahead and apply the patches and reboot during our scheduled maintenance window. No problem, right? Sounded fair to me… but no… not to our fearless SharePoint admin… I need a complete list of patches that will be applied. There is an update out there that will break SharePoint… KB973917 is the patch that has been shown to cause issues. What? You mean Microsoft released a patch that would actually adversely affect SharePoint? If we did NOT have a rockstar admin, our server team would have applied these patches, and then when some problem occurred in SharePoint we'd have had to go through the fun task of tracking down exactly what caused the issue and resolving it. How much time would that have taken? If you have a junior SharePoint admin, or an admin who's not out there staying on top of what's going on, you could have spent days tracking down something as simple as applying a patch you should not have applied. I will even go as far as to say the only SharePoint rockstar you NEED in your organization is a SharePoint admin. You can always outsource really complicated development projects or bring in a rockstar contractor every now and then to make sure you aren't way off track in other areas. For your day-to-day sanity and to keep SharePoint running smoothly, you need an awesome admin. Some rockstars in this category are: Ben Curry, Mike Watson, Joel Oleson, Todd Klindt, Shane Young, John Ferringer, Sean McDonough, and of course Lori Gowin.

    SharePoint Developer

    Another essential role for your SharePoint deployment is a SharePoint developer. Things do start to get a little hazy here, and there are many flavors of "developers". Are you writing custom code? Using SharePoint Designer? What about SharePoint branding? Are all of these considered developers? I would say yes. Are they interchangeable? I'd say no. Development in SharePoint is such a large beast in itself. I would say that it's not so large that you can't know it all well, but it is so large that there are many people who specialize in one particular category. If you are lucky enough to have someone on staff who knows it all well, you better make sure they are well taken care of, because those guys are ready-made to move over to a consulting role and charge you 3 times what you are probably paying them. :) Some of the all-around rockstars are Eric Shupps, Andrew Connell (go Razorbacks), Rob Foster, Paul Schaeflein, and Todd Bleeker.

    SharePoint Power User / No-Code Solutions Developer

    These SharePoint Swiss Army knives are essential for quick wins in your organization. These people can twist the out-of-the-box functionality to make it do things you would not even imagine. Give these guys SharePoint Designer, jQuery, InfoPath, and a little time, and they will create views, dashboards, and KPIs that will blow your mind and give your execs the "wow" they are looking for. Not only can they deliver that wow factor, but they can mash up, merge, and really help make your SharePoint application usable and deliver an overall better user experience. Before you hand off a project to your SharePoint custom code developer, let one of these rockstars look at it and show you what they can do (in probably less time). I would say the second most important role you can fill in your organization is one of these guys. Rockstars in this category are Christina Wheeler, Laura Rogers, Jennifer Mason, and Mark Miller.

    SharePoint Developer - Custom Code

    If you want to really integrate SharePoint into your legacy systems, or really twist it and make it bend to your will, you are going to have to open up Visual Studio and write some custom code. Remember, SharePoint is essentially just a big, huge, ginormous .NET application, so you CAN write code to make it do ANYTHING, but do you really want to spend the time and effort to do so? At some point with every other form of SharePoint development you are going to run into SOME limitation (SPD workflows is the big one that comes to mind). If you truly want to knock down all the walls, then custom development is the way to go. PLEASE keep in mind when you are looking for a custom code developer that a .NET developer does NOT equal a SharePoint developer. Just SOME of the things these guys write are:

        Custom Workflows
        Custom Web Parts
        Web Service functionality
        Import data from legacy systems
        Export data to legacy systems
        Custom Actions
        Event Receivers
        Service Applications (2010)

    These guys are also the ones generally responsible for packaging everything up into solution packages (you are doing that, right?). Rockstars in this category are Phil Wicklund, Christina Wheeler, Geoff Varosky, and Brian Jackett.

    SharePoint Branding

    "But it LOOKS like SharePoint!" Somebody call the WAAAAAAAAAAAAHMbulance… Themes, master pages, page layouts, zones, and over 2000 styles in CSS… these guys not only have to be comfortable with all of SharePoint's quirks and pain points when branding, but they have to know it TWICE, for publishing and non-publishing sites. Not only that, but these guys really need to have an eye for graphic design and be able to translate the ramblings of business into something visually stunning. They also have to be comfortable with XSLT and XML, and be able to hand off what they do to your custom developers for them to package as solutions (which you are doing, right?). These rockstars include Heather Waterman, Cathy Dew, and Marcy Kellar.

    SharePoint Architect

    SharePoint architects are generally SharePoint admins or developers who have moved into more of a BA role - is that fair to say? These guys really have a grasp and understanding of what SharePoint IS and what it can do. These guys help you structure your farms to meet your needs and help you design your applications the correct way. It's always a good idea to bring in a rockstar SharePoint architect to do a sanity check and make sure you aren't doing anything stupid. Most organizations probably do not have a rockstar architect on staff. These guys are generally brought in at the deployment of a farm, the upgrade of a farm, or for large development projects. I personally also find architects very useful for sitting down with the business to translate their needs into what SharePoint can do. A good architect will be able to pick out what can be done out of the box and what has to be custom built, and hand those requirements to the development staff. Architects can generally fill in as an admin or a developer when needed. Some rockstar architects are Rick Taylor, Dan Usher, Bill English, Spence Harbar, Neil Hodgkins, Eric Harlan, and Bjørn Furuknap.

    Other Roles / Specialties

    On top of all these other roles you also get people who specialize in things like Reporting, BDC (BCS in 2010), Search, Performance, Security, Project Management, etc. Again, most organizations will not have one of these gurus on staff; they'll just pay out the nose for them when they need them. :)

    SharePoint End User

    Everyone else in your organization who touches SharePoint falls into this category. What they actually DO in SharePoint is determined by your governance and what permissions you give these guys. Hopefully you have these guys on a fairly short leash and are NOT giving them access to tools like SharePoint Designer. Sadly, end users are the ones who truly make your deployment a success by using it, but they are also your biggest enemy in breaking it. :) We love you guys… really!!!

    Okay, all that's fine and dandy, but what should MY SharePoint team look like?

    It depends! Okay… Are you just doing out-of-the-box team sites with no custom development? Then you are probably fine with a great admin team and a great no-code solution development team. How many people do you need? Depends on how busy you can keep them. Sorry, I can't answer the question about numbers without knowing your specific needs. I can just tell you who you MIGHT need and what they will do for you. I'll leave you with what my ideal SharePoint team would look like for a particular scenario:

    Farm / Organization Structure

        Dev, QA, and 2 production farms
        5,000 - 10,000 users
        Custom development and integration with legacy systems
        Team sites, My Sites, intranet, document libraries and overall company collaboration

    Team

        Rockstar SharePoint administrator
        2-3 junior SharePoint administrators
        SharePoint architect / lead developer
        2 power user / no-code solution developers
        2-3 custom code developers
        Branding expert

    With a team of that size and skill set, they should be able to keep a substantial SharePoint deployment running smoothly and meet your business needs. This does NOT mean that you would not need to bring in contract help from time to time when you need an uber specialist in one area. Also, this team assumes there will be ongoing development for the life of your SharePoint farm. If you are just going to be doing sporadic custom development, it might make sense to partner with an awesome firm that specializes in that sort of work (I can give you the names of a couple if you are interested). Again though, the size of your team depends on the number of requests you are receiving and how much active deployment you are doing. So, don't bring in a team that looks like this and then yell at me because they are sitting around with nothing to do, or are so overwhelmed that nothing is getting done. I do URGE you to take the proper time to assess your needs and determine what team is BEST for your organization. Also, PLEASE PLEASE PLEASE do not skimp on the talent. When it comes to SharePoint you really do get what you pay for when it comes to employees, contractors, and software. SharePoint can become absolutely critical to your business, and because you skimped on hiring a developer he created a web part that brings down the farm because he doesn't know what he's doing, or you hired an admin who thinks it's fine to stick everything in the same content database and then can't figure out why people are complaining. SharePoint can be an enormous blessing to an organization or its biggest curse. Spend the time and money to do it right, or be prepared to spend even more time and money later to fix it.

    Read the article

  • What are the strategies to become a good open source developer?

    - by u3050
    I always hear that getting involved with open source projects is good for your career, and that the more (good) open source you release, the closer you will be to getting your dream job before you've even had an interview. I am an expert in Java and I am trying to become fluent in Scala. I always think about getting involved in open source development in Java/Scala, but the following questions stop me from doing so: How/where do I start with open source development projects on GitHub etc.? How do I find active/busy open source projects? How do I find an area in such projects where improvement or enhancement is required? It looks too complex on first analysis, or it's pretty hard to find such opportunities. What are the common strategies to follow if I want to become a hobbyist/free-time open source developer? People who have experience in open source development, please share your learnings/expertise from scratch. Thanks in advance.

    Read the article

  • Tulsa Azure Boot Camp

    - by dmccollough
    Windows Azure Boot Camp presented by HyperVize & TulsaTech

    When: Thursday, July 1st and Friday, July 2nd
    Registration: Click here.
    Where: TulsaTech Riverside Campus, 801 East 91st Street, Tulsa, OK 74132-4008. Click here for a map.

    Summary: Tulsa Windows Azure Boot Camp is a comprehensive 2-day training program for members of the development community in Tulsa, Oklahoma. At the conclusion of this program, the attendees should have a deep understanding of Azure, BPOS, and advanced development techniques for both platforms.

    Who should attend: Web developers, backend developers, SQL DBAs, consultants, & IT leaders who are interested in using Azure for development, data storage, or processing. Attending both days is suggested, but if you can't attend both days, contact us for a special one-day pass.

    Schedule: Day one of the training will be held July 1st, 2010 between 9 AM and 4:30 PM. Topics covered on day 1: Azure basics, Web development, & data storage. Day two of the training will be held July 2nd, 2010 between 9 AM and 4:30 PM. Topics covered on day 2: architecture, business value, SOA development, SQL Azure, & advanced development.

    Pre-requisites: If you want to stay up to speed on the Windows Azure labs, you will need to install the tools and updates listed on the Windows Azure Boot Camp website: http://windowsazurebootcamp.com/whattobring

    Boot Camp Agenda

    Day 1 - July 1st, 2010:
    · 8:30 - 9:00 - Registration
    · 9:00 - 10:00 - Module 1: Intro to Azure & Cloud Computing
    · 10:00 - 11:00 - Module 2: Using Web Roles
    · 11:00 - Noon - Lab 1 & workstation configuration
    · Noon - 1:00 - Lunch
    · 1:00 - 2:00 - Module 3: Blobs
    · 2:00 - 3:00 - Module 4: Tables
    · 3:00 - 4:00 - Module 5: Queues
    · 4:00 - ? - Q&A / Open Discussion

    Day 2 - July 2nd, 2010:
    · 9:00 - 10:00 - Module 6: Building a business with Azure
    · 10:00 - 11:00 - Module 7: Cloud Scenarios
    · 11:00 - Noon - Module 8: SQL Azure
    · Noon - 1:00 - Lunch
    · 1:00 - 2:00 - Module 9: Basic Worker Roles
    · 2:00 - 3:00 - Module 10: Advanced Worker Roles
    · 3:00 - 4:00 - Module 11: Azure Diagnostics
    · 4:00 - ??? - Module 12: App Fabric

    Read the article

  • Kansas City .NET UG March Meeting – Tonight!!!!!

    - by John Alexander
    Meeting tonight!!! Food! Great giveaways, including a full license of Infragistics for a year! See you there!!

    Meeting for March 23rd, 2010
    WHERE: Centriq Training, 8700 State Line Road, Leawood, KS
    WHEN: 6:00 PM
    TOPIC: Microsoft's Security Development Lifecycle for Agile development

    Microsoft recently added secure development guidance for agile methodologies within their SDL. During this presentation, Nick will summarize the new guidance and discuss what makes this guidance successful for Agile development.

    SPEAKER: Nick Coblentz
    Nick Coblentz is a senior consultant within AT&T Consulting Services' Application Security Practice. He focuses on helping organizations build mature application security programs and secure development processes. Nick has provided consulting services to Fortune 500 companies within the retail, financial services, banking, and health care sectors.

    SPONSOR: TekSystems
    TEKsystems® is the leading IT staffing and services company. Our capabilities span a wide range of services: from technical staff augmentation and direct placement services, to full management of IT projects and comprehensive workforce management solutions. With over 25 years of experience, we are experts at connecting technical professionals. Whether you are looking for the best IT talent, an experienced IT outsourcing partner, or a career in the IT industry, TEKsystems delivers.

    Read the article

  • Can we create desktop applications with Ruby?

    - by RAJ ...
    I know the Ruby on Rails framework is only for web development and not suitable for desktop application development. But if a Ruby programmer wants to develop a desktop application, is it suitable and preferable to do it with plain Ruby (not JRuby, as most of the tutorials are for JRuby)? If yes, please provide some good tutorials. I want to use Linux as the OS for development. Please suggest something, as I am a Ruby developer and want to develop desktop applications.

    Read the article

  • ASP.NET MVC, Web API, Razor and Open Source

    - by ScottGu
    Microsoft has made the source code of ASP.NET MVC available under an open source license since the first V1 release. We’ve also integrated a number of great open source technologies into the product, and now ship jQuery, jQuery UI, jQuery Mobile, jQuery Validation, Modernizr.js, NuGet, Knockout.js and JSON.NET as part of it. I’m very excited to announce today that we will also release the source code for ASP.NET Web API and ASP.NET Web Pages (aka Razor) under an open source license (Apache 2.0), and that we will increase the development transparency of all three projects by hosting their code repositories on CodePlex (using the new Git support announced last week). Doing so will enable a more open development model where everyone in the community will be able to engage and provide feedback on code checkins, bug-fixes, new feature development, and build and test the products on a daily basis using the most up-to-date version of the source code and tests. We will also for the first time allow developers outside of Microsoft to submit patches and code contributions that the Microsoft development team will review for potential inclusion in the products. We announced a similar open development approach with the Windows Azure SDK last December, and have found it to be a great way to build an even tighter feedback loop with developers – and ultimately deliver even better products as a result. Very importantly - ASP.NET MVC, Web API and Razor will continue to be fully supported Microsoft products that ship both standalone as well as part of Visual Studio (the same as they do today). They will also continue to be staffed by the same Microsoft developers that build them today (in fact, we have more Microsoft developers working on the ASP.NET team now than ever before). Our goal with today’s announcement is to increase the feedback loop on the products even more, and allow us to deliver even better products.  We are really excited about the improvements this will bring. Learn More You can now browse, sync and build the source tree of ASP.NET MVC, Web API, and Razor on the http://aspnetwebstack.codeplex.com web-site.  The Git repository on the site is the live RC milestone development tree that the team has been working on the last several weeks, and the tree contains both the runtime sources + tests, and is buildable and testable by anyone.  Because the binaries produced are bin-deployable, this allows you to compile your own builds and try product updates out as soon as they are checked-in. You can also now contribute directly to the development of the products by reviewing and sending feedback on code checkins, submitting bugs and helping us verify fixes as they are checked in, suggesting and giving feedback on new features as they are implemented, as well as by submitting code fixes or code contributions of your own. Note that all code submissions will be rigorously reviewed and tested by the ASP.NET MVC Team, and only those that meet an extremely high bar for both quality and design/roadmap appropriateness will be merged into the source. Summary All of us on the team are really excited about today’s announcement – it has been something we’ve been working toward for many years.  The tighter feedback loop is going to enable us to build even better products, and take ASP.NET to the next level in terms of innovation and customer focus. Thanks, Scott P.S. In addition to blogging, I use Twitter to-do quick posts and share links. My Twitter handle is: @scottgu

    Read the article

  • SQL SERVER – An Efficiency Tool to Compare and Synchronize SQL Server Databases

    - by Pinal Dave
    There is no need to reinvent the wheel if it is already invented and if the wheel is already available at ease, there is no need to wait to grab it. Here is the similar situation. I came across a very interesting situation and I had to look for an efficient tool which can make my life easier and solve my business problem. Here is the scenario. One of the developers had deleted few rows from the very important mapping table of our development server (thankfully, it was not the production server). Though it was a development server, the entire development team had to stop working as the application started to crash on every page. Think about the lost of manpower and efficiency which we started to loose.  Pretty much every department had to stop working as our internal development application stopped working. Thankfully, we even take a backup of our development server and we had access to full backup of the entire database at 6 AM morning. We do not take as a frequent backup of development server as production server (naturally!). Even though we had a full backup, the solution was not to restore the database. Think about it, there were plenty of the other operations since the last good full backup and if we restore a full backup, we will pretty much overwrite on the top of the work done by developers since morning. Now, as restoring the full backup was not an option we decided to restore the same database on another server. Once we had restored our database to another server, the challenge was to compare the table from where the database was deleted. The mapping table from where the data were deleted contained over 5000 rows and it was humanly impossible to compare both the tables manually. Finally we decided to use efficiency tool dbForge Data Compare for SQL Server from DevArt. dbForge Data Compare for SQL Server is a powerful, fast and easy to use SQL compare tool, capable of using native SQL Server backups as metadata source. (FYI we Downloaded dbForge Data Compare) Once we discovered the product, we immediately downloaded the product and installed on our development server. After we installed the product, we were greeted with the following screen. We clicked on the New Data Comparision to start our new comparison project. It brought up following screen. Here is the best part of the product, we just had to enter our database connection username and password along with source and destination details and we are done. The entire process is very simple and self intuiting. The best part was that for the source, we can either select database or even backup. This was indeed fantastic feature. Think about this, if you have a very big database, it will take long time to restore on the server. Once it is restored, you will be able to work with it. However, when you are working with dbForge Data Compare it will accept database backup as your source or destination. Once I click on the execute it brought up following screen where it displayed an excellent summary of the data compare. It has dedicated tabs for the what is changing in what table as well had details of the changed data. The best part is that, once we had reviewed the change. We click on the Synchronize button in the menu bar and it brought up following screen. You can see that the screen has very simple straight forward but very powerful features. You can generate a script to synchronize from target to source or even from source to target. 
    Additionally, the database world is very complicated, and the next screen offers extensive options to configure various database settings. We also have the option to either generate the synchronization script or execute it directly against the target server. I like to play it safe, so I generated the script for my synchronization and, after review, deployed it to the server. My team and I were able to recover from our disaster in less than 10 minutes. A few people on our team were indeed disappointed, as they had been thinking of going home early that day, but in less than 10 minutes they had to get back to work. There are so many other features in dbForge Data Compare for SQL Server that I am already planning to make it our company-wide recommended data compare tool. Hats off to the team who built this product.

    Here are a few of the salient features of dbForge Data Compare for SQL Server:

    - Perform SQL Server database comparison to detect changes
    - Compare SQL Server backups with live databases
    - Analyze data differences between two databases
    - Synchronize two databases that went out of sync
    - Restore data of a particular table from the backup
    - Generate data comparison reports in Excel and HTML formats
    - Copy look-up data from a development database to production
    - Automate routine data synchronization tasks with the command-line interface

    Go ahead and download dbForge Data Compare for SQL Server right away. It is always a good idea to get familiar with important tools beforehand instead of learning them under the pressure of a disaster.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, SQL Utility, T SQL, Technology
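    For reference, the missing rows could also have been identified by hand with plain T-SQL. A minimal sketch, assuming hypothetical names and that the backup copy has been restored side by side on the same instance (in the article it was a different server, where a linked server or a copied table would be needed): the restored copy is attached as DevDB_Restored and the damaged table is dbo.Mapping.

    -- Rows present in the restored backup copy but missing from the live table,
    -- i.e. the rows that were deleted and need to be re-inserted
    SELECT *
    FROM   DevDB_Restored.dbo.Mapping
    EXCEPT
    SELECT *
    FROM   DevDB.dbo.Mapping;

    This is exactly the kind of comparison the tool automates, along with generating the INSERT statements to put the missing rows back.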

    Read the article

  • Mono for Android Book has been Released!!!!!

    - by Wallym
    If I understand things correctly, and I make no guarantees that I do, our Mono for Android book has been RELEASED!  I'm not quite sure what this means, but my guess is that it has been printed and is being shipped to various booksellers. So, if you have pre-ordered a copy, it's now up to Amazon to send it to you. It's fully out of my control, Wrox, Wiley, as well as everyone but Amazon. If you haven't bought a copy already, why? Seriously, go order 8-10 copies for the ones you love. They'll make great romantic gifts. Just think of the look on the other person's face when you give them a copy of our book.

    Here's a little about the book:

    The wait is over! For the millions of .NET/C# developers who have been eagerly awaiting the book that will guide them through the white-hot field of Android application programming, this is the book. As the first guide to focus on Mono for Android, this must-have resource dives into writing applications against Mono with C# and compiling executables that run on the Android family of devices.

    Putting the proven Wrox Professional format into practice, the authors provide you with the knowledge you need to become a successful Android application developer without having to learn another programming language. You'll explore screen controls, UI development, tables and layouts, and MonoDevelop as you become adept at developing Android applications with Mono for Android.

    Develop Android apps using tools you already know: C# and .NET. Aimed at providing readers with a thorough, reliable resource that guides them through the field of Android application programming, this must-have book shows how to write applications using Mono with C# that run on the Android family of devices. A team of authors provides you with the knowledge you need to become a successful Android application developer without having to learn another programming language. You'll explore screen controls, UI development, tables and layouts, and MonoDevelop as you become adept at planning, building, and developing Android applications with Mono for Android.

    Professional Android Programming with Mono for Android and .NET/C#:

    - Shows you how to use your existing C# and .NET skills to build Android apps
    - Details optimal ways to work with data and bind data to controls
    - Explains how to program with Android device hardware
    - Dives into working with the file system and application preferences
    - Discusses how to share code between Mono for Android, MonoTouch, and Windows® Phone 7
    - Reveals tips for globalizing your apps with internationalization and localization support
    - Covers development of tablet apps with Android 4

    Wrox Professional guides are planned and written by working programmers to meet the real-world needs of programmers, developers, and IT professionals. Focused and relevant, they address the issues technology professionals face every day. They provide examples, practical solutions, and expert education in new technologies, all designed to help programmers do a better job.

    Now, go buy a bunch of copies!!!!!

    If you are interested in iPhone and Android and would like to get a little more knowledgeable in the area of development, you can purchase the 3-pack of books from Wrox on Mobile Development with Mono. This will cover MonoTouch, Mono for Android, and cross-platform methods for using both tools. A great package in and of itself. The name of that package is: Wrox Cross Platform Android and iOS Mobile Development Three-Pack

    Read the article

  • Configure Nagios To Alert Depending On Host Group That Service Alert Originates From

    - by StrangeWill
    So my setup: services are shared between all hosts (CPU/RAM/Disk/Services), and hosts are split into two main groups, "Production" and "Development". We have two contact groups: "Production" and "Development". Let's say my development SQL server runs low on RAM; I want it to alert only those in the "Development" contact group (this service is of course assigned to a host in the "Development" host group, using the shared RAM monitoring service). I'm pretty much stumped on this... I can't configure it at the service level (the services are shared there), and I can't seem to get escalations to do it for me either... Do I need to use service groups along with escalations and bite the bullet on building that list? Or am I missing something stupidly simple? I'm using Centreon for configuration, if that helps.
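    For context, one way to express this kind of split in plain Nagios object configuration is to keep the check itself in a shared, unregistered template and register one copy of the service per host group, each with its own contact group. A minimal sketch; all names, the base template, and the check command are assumptions, not taken from the setup above:

    # Shared, unregistered template holding the common check definition
    define service {
        name                    shared-ram-check
        use                     generic-service    # assumed base template with timing defaults
        service_description     RAM
        check_command           check_nrpe!check_ram
        register                0
    }

    # One registered copy per host group, each notifying its own contacts
    define service {
        use                     shared-ram-check
        hostgroup_name          production
        contact_groups          production
    }

    define service {
        use                     shared-ram-check
        hostgroup_name          development
        contact_groups          development
    }

    This keeps the check logic in one place while letting notifications follow the host group, without building escalation lists.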

    Read the article

  • What Agile Model do you use at Work?

    - by Kyle Rozendo
    I am looking to start pushing for more Agile processes to be brought into play in the workplace, and to do my best to outlaw cowboy coding as much as possible. I understand many of the different models and am just looking to see which model has the highest uptake (or which parts of a model), and in which industries it is being used.

    - Extreme Programming (XP)
    - Adaptive Software Development (ASD)
    - Scrum
    - Dynamic Systems Development Method (DSDM)
    - Crystal
    - Feature Driven Development (FDD)
    - Lean Software Development (LSD)
    - Agile Modelling (AM)
    - Agile Unified Process (AUP)
    - Kanban

    If you care to add comments to your answer about what you don't like, do like, or have tried that didn't work, that would also be appreciated.

    Read the article

  • Fixing up Configurations in BizTalk Solution Files

    - by Elton Stoneman
    Just a quick one this, but useful for mature BizTalk solutions, where over time the configuration settings can get confused, meaning Debug configurations building in Release mode, or Deployment configurations building in Development mode. That can cause issues in the build which aren't obvious, so it's good to fix up the configurations. Doing it in VS or in a text editor is time-consuming, so this bit of PowerShell may come in useful - just substitute your own solution path in the $path variable:

    # Back up the solution file before changing anything
    $path = 'C:\x\y\z\x.y.z.Integration.sln'
    $backupPath = [System.String]::Format('{0}.bak', $path)
    [System.IO.File]::Copy($path, $backupPath, $True)
    $sln = [System.IO.File]::ReadAllText($path)

    # Debug configurations should map to Development, not Deployment
    $sln = $sln.Replace('.Debug|.NET.Build.0 = Deployment|.NET', '.Debug|.NET.Build.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|.NET.Deploy.0 = Deployment|.NET', '.Debug|.NET.Deploy.0 = Development|.NET')
    $sln = $sln.Replace('.Debug|.NET.ActiveCfg = Deployment|.NET', '.Debug|.NET.ActiveCfg = Development|.NET')
    $sln = $sln.Replace('.Debug|Any CPU.ActiveCfg = Deployment|.NET', '.Debug|Any CPU.ActiveCfg = Development|.NET')

    # Deployment configurations should map to Release, not Debug
    $sln = $sln.Replace('.Deployment|.NET.ActiveCfg = Debug|Any CPU', '.Deployment|.NET.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.ActiveCfg = Debug|Any CPU', '.Deployment|Any CPU.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Any CPU.Build.0 = Debug|Any CPU', '.Deployment|Any CPU.Build.0 = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.ActiveCfg = Debug|Any CPU', '.Deployment|Mixed Platforms.ActiveCfg = Release|Any CPU')
    $sln = $sln.Replace('.Deployment|Mixed Platforms.Build.0 = Debug|Any CPU', '.Deployment|Mixed Platforms.Build.0 = Release|Any CPU')

    # Write the fixed-up solution file back
    [System.IO.File]::WriteAllText($path, $sln)

    The script creates a backup of the solution file first, and then fixes up all the configs to use the correct builds. It's a simple search-and-replace list, so if there are any patterns that need to be added let me know and I'll update the script. A RegEx replace would be neater, but when it comes to hacking solution files, I prefer the conservative approach of knowing exactly what you're changing.
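    If there are several solutions to fix, the same logic can be wrapped in a loop. A sketch, assuming the code above has been saved as a script named Fix-SolutionConfigs.ps1 that takes the solution path as a parameter (a hypothetical wrapper, not part of the original post):

    # Run the fix-up over every solution file under a root folder
    Get-ChildItem -Path 'C:\x\y\z' -Filter '*.sln' -Recurse |
        ForEach-Object { .\Fix-SolutionConfigs.ps1 -Path $_.FullName }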

    Read the article

  • The Incremental Architect's Napkin - #2 - Balancing the forces

    - by Ralf Westphal
    Originally posted on: http://geekswithblogs.net/theArchitectsNapkin/archive/2014/06/02/the-incremental-architectacutes-napkin---2---balancing-the-forces.aspx

    Categorizing requirements is the prerequisite for economic architectural decisions. Not all requirements are created equal. However, to truly understand and describe the requirement forces pulling on software development, I think further examination of the requirement aspects is warranted.

    Aspects of Functionality

    There are two sides to Functionality requirements. It's about what software should do. I call that the Operations it implements. Operations are defined by expressions and control structures or calls to frameworks of some sort, i.e. (business) logic statements. Operations calculate, transform, aggregate, validate, send, receive, load, store etc. Operations are about behavior; they take input and produce output by considering state. I'm not using the term "function" here, because functions - or methods or sub-programs - are not necessary to implement Operations. Functions belong to a different sub-aspect of requirements (see below).

    Operations alone are not enough, though, to make a customer happy with regard to his/her Functionality requirements. Only correctly implemented Operations provide full value. This should make clear why testing is so important. And not just manual tests during development of some operational feature, but automated tests, because only automated tests scale as the number of operations increases over time. Without automated tests there is no guarantee formerly correct operations are still correct after more are added. Retesting all previous operations manually is infeasible. So whoever relies just on manual tests is not really balancing the two forces Operations and Correctness. With manual tests more weight is put on the Operations side of the scale. That might be ok for a short period of time - but in the long run it will bite you. You need to plan for Correctness in the long run from the first day of your project.

    Aspects of Quality

    As important as Functionality is, it's not the driver for software development. No software has ever been written just to implement some operation in code. We don't need computers just to do something. Everything computers can do with software we could do without them. Well, at least given enough time and resources. We could calculate the most complex formulas without computers. We could run auctions with millions of people without computers. The only reason we want computers to help us with this and a million other Operations is: we don't want to wait very long for the results. Or we want fewer errors. Or we want easier access to complicated solutions.

    So the main reason for customers to buy/order software is some Quality. They want some Functionality with a higher Quality (e.g. performance, scalability, usability, security…) than without the software. But Qualities come in at least two flavors. Most important are Primary Qualities. Those are the Qualities software truly is written for. Take an online auction website for example. Its Primary Qualities are performance, scalability, and usability, I'd say. Auctions should come within reach of millions of people; setting up an auction should be very easy; finding a suitable auction and bidding on it should be as fast as possible. Only if those Qualities have been implemented does security become relevant. A secure auction website is important - but not as important as a fast auction website.
    Nobody would want to use the most secure auction website if it was unbearably slow. But there would be people willing to use the fastest auction website even if it was lacking security. That's why security - with regard to online auction software - is not a Primary Quality, but just a Secondary Quality. It's a supporting quality, so to speak; it does not deliver value by itself. With password manager software this might be different: there, security might be a Primary Quality.

    Please get me right: I don't want to denigrate any Quality. There's a long list of non-functional requirements at Wikipedia. They are all created equal - but that does not mean they are equally important for all software projects. When confronted with Quality requirements, check with the customer which are primary and which are secondary. That will help to make good economic decisions when in a crunch. Resources are always limited - but requirements are a bottomless ocean.

    Aspects of Security of Investment

    Functionality and Quality are traditionally the requirement aspects cared for most - by customers and developers alike. Even today, when pressure rises in a project, tunnel vision will focus on them. Any measures to create and hold up Security of Investment (SoI) will be out of the window pretty quickly. Resistance to customers and/or management is futile. As long as SoI is not placed on equal footing with Functionality and Quality, it's bound to suffer under pressure. Looking closer at what SoI means will help us become more conscious about it and make customers and management aware of the risks of neglecting it. SoI to me has two facets:

    Production Efficiency (PE) is about speed of delivering value. Customers like short response times. Short response times mean less money spent. So whatever makes software development faster supports this requirement. This must not lead to duct-tape programming and banging out features by the dozen, though, because customers don't just want Operations and Quality, but also Correctness. So if Correctness gets compromised by focusing too much on Production Efficiency, it will backfire. Customers want PE not just today, but over the whole course of a software's lifecycle. That means it's not just about coding speed, but equally about code quality. If low code quality leads to rework, PE is at an unsatisfactory level. Also, if code production leads to waste, it's unsatisfactory, because the effort which went into waste could have been used to produce value. Rework and waste cost money. Rework and waste abound, however, as long as PE is not addressed explicitly with management and customers. Thanks to the Agile and Lean movements that's increasingly the case. Nevertheless more could and should be done in many teams. Each and every developer should keep in mind that Production Efficiency is as important to the customer as Functionality and Quality - whether he/she states it or not.

    Making software development more efficient is important - but sooner or later even agile projects are going to hit a glass ceiling, at least as long as they neglect the second SoI facet: Evolvability. Delivering correct, high-quality functionality in short cycles today is good. But not just any software structure will allow this to happen for an indefinite amount of time.[1] The less explicitly software was designed, the sooner it's going to get stuck.
    Big ball of mud, monolith, brownfield, legacy code, technical debt… there are many names for software structures that have lost the ability to evolve, to be easily changed to accommodate new requirements. An evolvable code base is the opposite of a brownfield. It's code which can be easily understood (by developers with sufficient domain expertise) and then easily changed to accommodate new requirements. Ideally the cost of adding feature X to an evolvable code base is independent of when it is requested - or at least the cost should only increase linearly, not exponentially.[2]

    Clean Code, Agile Architecture, and even traditional Software Engineering are concerned with Evolvability. However, it seems no systematic way of achieving it has been laid out yet. TDD + SOLID help - but still… when I look at the reality of design ability in teams, I see much room for improvement. As stated previously, SoI - or to be more precise: Evolvability - can hardly be measured. Plus, the customer rarely states an explicit expectation with regard to it. That's why I think special care must be taken not to neglect it. Postponing it to some large refactoring should not be an option. Rather, Evolvability needs to be a core concern of every single developer day. This does not mean Evolvability is more important than any of the other requirement aspects. But neither is it less important. That's why more effort needs to be invested in it, to bring it on par with the other aspects, which usually are much more in focus.

    In closing

    As you see, requirements are of quite different kinds. Not taking that into account makes it harder to understand the customer and to make economic decisions. Those sub-aspects of requirements are forces pulling in different directions. Improving performance might have an impact on Evolvability. Increasing Production Efficiency might have an impact on security, etc. No requirement aspect should go unchecked when deciding how to allocate resources. Balancing should be explicit. And it should be possible to trace each decision back to a requirement. Why is there a null-check on parameters at the start of the method? Why are there 5000 LOC in this method? Why are there interfaces on those classes? Why is this functionality running on the threadpool? Why is this function defined on that class? Why is this class depending on three other classes? These and a thousand more questions do not mean anything should be different in a code base. But it's important to know the reason behind all of these decisions, because not knowing the reason possibly means waste and having decided suboptimally.

    And how do we ensure that all requirement aspects are balanced? That needs practices and transparency. Practices means doing things a certain way and not another, even though that might be possible. We're dealing with dangerous tools here, like a knife is a dangerous tool. Harm can be done if we use our tools in just any way at the whim of the moment. Over the centuries rules and practices have been established for how to use knives. You don't put them in people's legs just because you feel like it. You hand over a knife with the handle towards the receiver. You might not even be allowed to cut round food like potatoes or eggs with it. The same should be the case for dangerous tools like object-orientation, remote communication, threads etc. We need practices to use them in a way so that requirements are balanced almost automatically.
    In addition, to be able to work on software as a team we need transparency. We need means to share our thoughts, to work jointly on mental models. So far our tools are focused on working with code: testing frameworks, build servers, DI containers, IntelliSense, refactoring support… That's all nice and well, and I don't want to miss any of it. But I think it's not enough. We're missing mental tools, tools for making thinking and talking about software (independently of code) easier. You might think enough such tools already exist, like all those UML diagram types or flow charts. But then, isn't it strange that hardly any team is using them to design software? Or is that just due to a lack of education? I don't think so. It's a matter of value/weight ratio: the current mental tools are too heavyweight compared to the value they deliver.

    So my conclusion is, we need lightweight tools to really be able to balance requirements. Software development is complex. We need guidance not to forget important aspects. That's like flying an airplane. Pilots don't just jump in and take off for their destination. Yes, there are times when they are "flying by the seats of their pants", when they are just experts doing things intuitively. But most of the time they are going through honed practices called checklists. See "The Checklist Manifesto" for very enlightening details on this. Maybe then I should say it like this: we need more checklists for the complex business of software development.[3]

    [1] But that's what software development mostly is about: changing software over an unknown period of time. It needs to be corrected in order to finally provide promised operations. It needs to be enhanced to provide ever more operations and qualities. All this without knowing when it's going to stop. Probably never - until "maintainability" hits a wall when the technical debt is too large, the brownfield too deep. Software development is not a sprint, is not a marathon, not even an ultra marathon, because to all of those there is a foreseeable end. Software development is like continuously and forever running.

    [2] And sometimes I dare to think that costs could even decrease over time. Think of it: with each feature a software becomes richer in functionality. So with each additional feature, the chance increases that there is already functionality helping its implementation. That should lead to lower costs for feature X if it's requested later rather than sooner. X requested later could stand on the shoulders of previous features. Alas, reality seems to be far from this despite 20+ years of admonishing developers to think in terms of reusability.

    [3] Please don't get me wrong: I don't want to bog down the "art" of software development with heavyweight practices and heaps of rules to follow. The framework we need should be lightweight. It should not stand in the way of delivering value to the customer. Its purpose is even to make that easier by helping us to focus and decreasing waste and rework.

    Read the article

  • How to replace the SharePoint date calendar control with a more user-friendly jQuery calendar control

    - by ybbest
    When you use the SharePoint date and time type for a date of birth field, you will notice that the calendar control is extremely non-user-friendly: you can only navigate month by month, as shown below. To resolve the issue, you can customize the list form page using SharePoint Designer and replace the OOB calendar control with the popular jQuery UI datepicker control. The solution works for SharePoint 2010, 2013, and Office 365. Here are the steps to achieve this.

    1. Open SharePoint Designer, create a new list form called customNew, and set it as the default form for the selected type.

    2. Open the Style Library in File Explorer and copy the jQuery and jQuery UI files into the Style Library of the SharePoint site. You can download jQuery and jQuery UI from the web; the content of contactPersonCustomNewForm.js is as below. I use the dd/mm/yy format, as my locale in Regional Settings is English (New Zealand). You need to change this if you live in a country with a different date format.

    $(document).ready(function() {
        $("img#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDateDatePickerImage").parent().hide();
        $("img#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDateDatePickerImage").hide();
        $("input#ctl00_m_g_540b9a50_52dc_4400_a58d_1db99555fddf_ff41_ctl00_ctl00_DateTimeField_DateTimeFieldDate").datepicker({
            changeMonth: true,
            changeYear: true,
            showOn: "button",
            buttonImage: "/_layouts/images/calendar.gif",
            buttonImageOnly: true,
            defaultDate: "01/01/1970",
            yearRange: "c-20:c+20",
            dateFormat: "dd/mm/yy"
        });
    });

    In order to get the image and textbox selectors above, you can open the IE developer toolbar (press F12) and find the control IDs as shown below.

    3. Open SharePoint Designer and edit the newly created list form customNew.aspx in advanced mode. Then copy and paste the following links into the PlaceHolderAdditionalPageHead.

    <SharePoint:CssRegistration name="<%$SPUrl:~SiteCollection/Style Library/themes/ui-lightness/jquery-ui.css%>" runat="server"/>
    <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/jquery-1.10.2.js" Defer="false" runat="server"/>
    <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/jquery-ui-1.10.4.custom.min.js" Defer="false" runat="server"/>
    <SharePoint:ScriptLink language="javascript" name="~sitecollection/Style Library/contactPersonCustomNewForm.js" Defer="false" runat="server"/>

    4. Now go to the list and click Add; you will see the new calendar control as shown below.
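    As a side note, the long auto-generated control IDs above can change if the form or web part is recreated, which would break the script. A less brittle alternative is jQuery's "ends with" attribute selector. A minimal sketch, assuming there is only one date field on the form (an assumption, not from the original post):

    $(document).ready(function() {
        // Match on the stable suffix of the auto-generated IDs instead of the full ID
        $("img[id$='DateTimeFieldDateDatePickerImage']").hide().parent().hide();
        $("input[id$='DateTimeFieldDate']").datepicker({
            changeMonth: true,
            changeYear: true,
            dateFormat: "dd/mm/yy"
        });
    });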

    Read the article

< Previous Page | 389 390 391 392 393 394 395 396 397 398 399 400  | Next Page >