Search Results

Search found 26285 results on 1052 pages for 'grant back'.


  • Data Driven MSTest: DataRow is always null

    - by David Back
    I am having a problem using Visual Studio data-driven testing. I have tried to reduce this to the simplest example. I am using Visual Studio 2012. I created a new unit test project and am referencing System.Data. My code looks like this:

        namespace UnitTestProject1
        {
            [TestClass]
            public class UnitTest1
            {
                [DeploymentItem(@"OrderService.csv")]
                [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
                    "OrderService.csv", "OrderService#csv", DataAccessMethod.Sequential)]
                [TestMethod]
                public void TestMethod1()
                {
                    try
                    {
                        Debug.WriteLine(TestContext.DataRow["ID"]);
                    }
                    catch (Exception ex)
                    {
                        Assert.Fail();
                    }
                }

                public TestContext TestContext { get; set; }
            }
        }

    I have a very small CSV file whose Build Options I have set to 'Content' and 'Copy Always'. I have added a .testsettings file to the solution, enabled deployment, and added the CSV file. I have tried this with and without |DataDirectory|, and with/without a full path specified (the same path that I get with Environment.CurrentDirectory). I've tried variations of "../" and "../../" just in case. Right now the CSV is at the project root level, same as the .cs test code file. I have tried variations with XML as well as CSV. TestContext is not null, but DataRow always is. I have not gotten this to work despite a lot of fiddling with it, and I'm not sure what I'm doing wrong. Does MSTest create a log anywhere that would tell me if it is failing to find the CSV file, or what specific error might be causing DataRow to fail to populate? I have tried the following CSV files:

        ID
        1
        2
        3
        4

    and

        ID, Whatever
        1,0
        2,1
        3,2
        4,3

    So far, no dice.
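
    A possibly useful cross-check, not a confirmed fix: the documented shape of the CSV data source points the connection string at the deployed copy via |DataDirectory|, with the table named "<filename>#csv". A minimal sketch of that canonical form:

        // Sketch: canonical CSV DataSource usage. The connection string refers to
        // the deployed copy via |DataDirectory|, and the third argument names the
        // table as "<filename>#csv".
        [DeploymentItem("OrderService.csv")]
        [DataSource("Microsoft.VisualStudio.TestTools.DataSource.CSV",
            "|DataDirectory|\\OrderService.csv", "OrderService#csv",
            DataAccessMethod.Sequential)]
        [TestMethod]
        public void TestMethod1()
        {
            Debug.WriteLine(TestContext.DataRow["ID"]);
        }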


  • Access Control Lists basics

    - by vtortola
    Hi, I'm going to add authorization, user, and group management to my application. Basically, you will be able to define a set of permissions for a specific user or group; for example, you could specify who can use a particular resource. So I want to make sure my assumptions about ACLs are right:

    - A basic rule could be "Grant", "Deny", or "NotSet".
    - User permissions have priority over group permissions.
    - A "Deny" statement has priority over "Grant".

    For example, user "u1" belongs to group "A", and resource "X" has the ACL "u1:grant,A:deny". User "u1" should be able to access the resource, shouldn't it? If a resource has no ACL set, does that mean anyone can access it? Should I provide a default ACL? Any documents about ACLs in a general way? Cheers.
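
    For what it's worth, a minimal sketch of the evaluation order those assumptions imply, with all types hypothetical: a user-specific entry wins outright, Deny beats Grant among equally specific entries, and nothing matching falls back to a default:

        using System.Linq;

        public static bool CanAccess(User user, Acl acl)
        {
            // A user-specific entry is the most specific, so it wins outright.
            AclEntry userEntry = acl.FindForUser(user);
            if (userEntry != null)
                return userEntry.Permission == Permission.Grant;

            // Among group entries, Deny beats Grant.
            var groupEntries = acl.FindForGroups(user.Groups).ToList();
            if (groupEntries.Any(e => e.Permission == Permission.Deny))
                return false;
            if (groupEntries.Any(e => e.Permission == Permission.Grant))
                return true;

            // No matching entry: default deny is the safer convention.
            return false;
        }

    Under this ordering, "u1:grant,A:deny" does give u1 access, because the user entry outranks the group entry.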


  • $.getJSON > $.each returns undefined

    - by Der Sep
        function getData(d) {
            Back = new Object();
            $.getJSON('../do.php?', function (response) {
                if (response.type == 'success') {
                    Back = { "type": "success", "content": "" };
                    $.each(response.data, function (data) {
                        Back.content += '<div class="article"><h5>' + data.title + '</h5>';
                        Back.content += '<div class="article-content">' + data.content + '</div></div>';
                    });
                } else {
                    Back = { "type": "error" };
                }
                return Back;
            });
        }

        console.log(getData());

    is returning undefined! Why?
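
    The likely explanation: $.getJSON is asynchronous, so getData returns before the success callback runs, and the return Back inside the callback returns from the callback, not from getData. A sketch of the callback-style fix, under that assumption; note also that $.each passes (index, element) to its callback, so the original function(data) was actually receiving the index:

        function getData(callback) {
            $.getJSON('../do.php?', function (response) {
                var back;
                if (response.type == 'success') {
                    back = { type: 'success', content: '' };
                    $.each(response.data, function (i, item) { // (index, element)
                        back.content += '<div class="article"><h5>' + item.title + '</h5>';
                        back.content += '<div class="article-content">' + item.content + '</div></div>';
                    });
                } else {
                    back = { type: 'error' };
                }
                callback(back); // deliver the result once it actually exists
            });
        }

        getData(function (back) { console.log(back); });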


  • Synchronizing an ERWin model with a Visual Studio 2008 GDR 2/2010 db project

    - by Grant Back
    I am looking for options to get our vast collection of DB objects across many DBs into source control (TFS 2010). Once we succeed there, we will work toward generating our alter scripts for a particular DB change via TFS build. The problem is that our data architecture group is responsible for maintaining the DB objects (excluding SPs), and they work within a model-centric process via ERWin. What this means is that they maintain the DBs via ERWin models and generate alters from them that are used to release changes. In order to achieve our goal of getting the DB objects (not just the ERWin models) into TFS, I believe the best option is to do this via Visual Studio DB projects. From what I can tell, there is very little urgency for CA to continue supporting an integration between ERWin and Visual Studio, which no longer works as of Visual Studio 2008 DB Ed. GDR. If I have been misled in this regard, please feel free to set me straight. One potential solution is to:

    1. Perform changes in the ERWin model.
    2. Take the alter script generated from ERWin, and import the script into the appropriate Visual Studio DB project, updating the objects in the DB project.
    3. Check the changed objects in the DB project into TFS.
    4. TFS Build executes to generate the alter scripts that will be used to push the changes through our release process.

    My question is, is this solution viable, or are there any other options?


  • dojo.io.iframe erroring when uploading a file

    - by Grant Collins
    Hi, I hit an interesting problem today when trying to upload an image file < 2MB using dojo.io.iframe. My function to process the form is called, but before the form is posted to the server I get the following error:

        TypeError: ifd.getElementsByTagName("textarea")[0] is undefined

    The function used to post the form is:

        function uploadnewlogo() {
            var logoDiv = dojo.byId('userlogo');
            var logoMsg = dojo.byId('uploadmesg');
            // prep the io frame to send logo data
            dojo.io.iframe.send({
                url: "/users/profile/changelogo/",
                method: "post",
                handleAs: "text",
                form: dojo.byId('logoUploadFrm'),
                handle: function (data, ioArgs) {
                    var response = dojo.fromJson(data);
                    if (response.status == 'success') {
                        // first clear the image
                        //dojo.style(logoDiv, "display", "none");
                        logoDiv.innerHTML = "";
                        // then we update the image
                        logoDiv.innerHTML = response.image;
                    } else if (response.status == 'error') {
                        logoMsg.innerHTML = data.mesg;
                    } else {
                        logoMsg.innerHTML = '<div class="error">Whoops! We can not process your image.</div>';
                    }
                },
                error: function (data, ioArgs) {
                    logoMsg.innerHTML = '<div class="error">' + data + '</div>';
                }
            });
        }

    The form is very basic, with just a file input component and a simple button that calls this bit of JavaScript and Dojo. I've got very similar code in my application that uploads Word/PDF documents, and that doesn't error, but for some reason this does. Any ideas or pointers on what I should try to get this to work without errors? I'm using PHP and Zend Framework for the backend, if that has anything to do with it, but I doubt it, as it's not even hitting the server before it fails. Many thanks, Grant
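
    A documented quirk of dojo.io.iframe that produces exactly this TypeError: the response is loaded into a hidden iframe, so for text or JSON handleAs values the server must wrap its payload in a <textarea> inside an HTML document. Even though the error here appears to fire before the post, this is worth ruling out. A hedged PHP-side sketch, with variable names assumed:

        <?php
        // Hypothetical Zend controller action response: wrap the JSON payload
        // in a <textarea> so dojo.io.iframe can read it back out of the iframe.
        $json = Zend_Json::encode(array('status' => 'success', 'image' => $imageHtml));
        echo '<html><body><textarea>' . $json . '</textarea></body></html>';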


  • Add Zend_Navigation to the View with old legacy bootstrap

    - by Grant Collins
    Hi, I've been struggling with Zend_Navigation all weekend, and now I have another problem, which I believe has been the cause of a lot of my issues. I am trying to add Zend_Navigation to a legacy 1.7.6 Zend Framework application. I've updated the Zend library to 1.9.0 and updated the bootstrap to allow this library update. The problem is that the examples show the new bootstrap method for adding the Navigation object to the view, and I don't know how to do it with the old one. I've tried this:

        // initialise the application layouts with the MVC helpers
        $layout = Zend_Layout::startMvc(array('layoutPath' => '../application/layouts'));
        $view = $layout->getView();

        $configNav = new Zend_Config_Xml('../application/config/navigation.xml', 'navigation');
        $navigation = new Zend_Navigation($configNav);
        $view->navigation($navigation);

        $viewRenderer = new Zend_Controller_Action_Helper_ViewRenderer();
        $viewRenderer->setView($view);

    This seems to run through fine, but when I go to use the breadcrumb view helper in my layout, it errors with:

        Strict Standards: Creating default object from empty value in C:\www\moobia\development\website\application\modules\employers\controllers\IndexController.php on line 27

    This is caused by the following code in the init() function of my controller:

        $uri = $this->_request->getPathInfo();
        $activeNav = $this->view->navigation()->findByUri($uri); // <- this is null when called
        $activeNav->active = true;

    I believe it's because the Zend_Navigation object is not in the view. I would look at migrating the bootstrap to the current method, but at present I am running out of time for a release. Thanks, Grant
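
    A possible workaround, assuming ZF 1.9's documented behaviour: the navigation() view helper falls back to a Zend_Navigation instance registered in Zend_Registry under the key 'Zend_Navigation', which sidesteps the question of which view object the ViewRenderer ends up using. A one-line sketch:

        // Register the container globally; the navigation view helper will
        // find it even if the ViewRenderer swaps in a different view object.
        Zend_Registry::set('Zend_Navigation', $navigation);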


  • NAnt errors when generating assembly info after project is upgraded to VS2010

    - by Grant Palin
    I have a project I recently upgraded to VS2010 - the project/solution files are updated, but I'm still targeting .NET 3.5. Until now, my standard NAnt build script has not given me any trouble. However, it appears that after updating the project, and updating the NAnt config to be aware of the new tooling, I am now receiving an error when autogenerating assembly information, which fails the build. The relevant build task is below:

        <asminfo output="${dir.src}\${file.commonAssemblyInfo}" language="${project.codeLanguage}">
            <imports>
                <import namespace="System.Reflection" />
            </imports>
            <attributes>
                <attribute type="AssemblyVersionAttribute" value="${project.fullversion}" />
                <attribute type="AssemblyFileVersionAttribute" value="${project.fullversion}" />
                <attribute type="AssemblyInformationalVersionAttribute" value="${project.fullversion}" />
                <attribute type="AssemblyCopyrightAttribute" value="${assembly.copyright}" />
                <attribute type="AssemblyCompanyAttribute" value="${assembly.company}" />
                <attribute type="AssemblyConfigurationAttribute" value="${project.config}" />
                <attribute type="AssemblyTrademarkAttribute" value="${assembly.trademark}" />
                <attribute type="AssemblyProductAttribute" value="${assembly.product}" />
            </attributes>
        </asminfo>

    The error is highlighted for the first line of the asminfo task. It reads:

        AssemblyInfo file 'C:\Users\Grant\Projects\VisualStudio\Checklist\src\CommonAssemblyInfo.cs' could not be generated. This method implicitly uses CAS policy, which has been obsoleted by the .NET Framework. In order to enable CAS policy for compatibility reasons, please use the NetFx40_LegacySecurityPolicy configuration switch. Please see http://go.microsoft.com/fwlink/?LinkID=155570 for more information.

    I've gathered so far that this is something new to .NET 4. Has anyone had to address this error before? Does anyone know what it is about asminfo that may be triggering the error?
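
    The error message itself names the fix: opting the hosting process back into legacy CAS policy. A sketch of the corresponding runtime switch, under the assumption that it belongs in NAnt.exe.config so it takes effect when NAnt runs on the .NET 4 CLR:

        <!-- Hypothetical excerpt of NAnt.exe.config -->
        <configuration>
          <runtime>
            <NetFx40_LegacySecurityPolicy enabled="true" />
          </runtime>
        </configuration>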


  • How to change Arrow Keys Behavior?

    - by SO give me back my rep
    Hi, I am building a cool menu (sort of an XMB) to give a fresh touch to my app. I add all of the elements of the menu programmatically via the DB. The menu is designed for easy use with the arrow keys, but I have encountered a major problem: by default, when I press the arrow keys they only change the focus based on the tab index, and what I need is to change focus based on the position of the controls, not on their tab index. Hope it is clear... see pic. So, is there any way to do this?
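
    Assuming WinForms: one way is to intercept the arrow keys before tab-order focus handling runs and pick the next control by its screen position. A sketch, where FindNearestControl is a hypothetical helper that scans Control.Location geometrically:

        // In the form (or a base form) class:
        protected override bool ProcessCmdKey(ref Message msg, Keys keyData)
        {
            if (keyData == Keys.Up || keyData == Keys.Down ||
                keyData == Keys.Left || keyData == Keys.Right)
            {
                Control next = FindNearestControl(ActiveControl, keyData); // hypothetical
                if (next != null)
                    next.Focus();
                return true; // swallow the key so tab-index navigation never runs
            }
            return base.ProcessCmdKey(ref msg, keyData);
        }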


  • Why is the SELECT value not changing?

    - by I'll-Be-Back
    When the page loads, I expected the <option value="B">B</option> value to change to red. It didn't work. Why?

    jQuery:

        $(document).ready(function () {
            $('[name=HeaderFields] option[value="B"]').val('red');
        }

    Dropdown:

        <select name="HeaderFields" style="width:60px">
            <option value="A">A</option>
            <option value="B">B</option>
            <option value="C">C</option>
        </select>
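
    Two likely causes, offered as suspicions rather than certainties: the snippet above never closes the ready() call (it is missing a final `);`), so the script would fail with a syntax error before running at all; and .val('red') rewrites the option's value attribute, which is invisible in the rendered dropdown, so if "change to red" meant the colour, that is a CSS change. A sketch of both:

        $(document).ready(function () {
            // change the value attribute (what the original code was doing)
            $('select[name="HeaderFields"] option[value="B"]').val('red');
            // ...or, if the goal was a red option, style it instead
            $('select[name="HeaderFields"] option[value="B"]').css('color', 'red');
        }); // note the closing ); that the original snippet lacks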


  • How can I change what happens when the "enter" key is pressed on a DataGridView?

    - by SO give me back my rep
    When I am editing a cell and press Enter, the next row is automatically selected; I want to stay on the current row. I want nothing to happen except the EndEdit. I have this:

        private void dtgProductos_CellEndEdit(object sender, DataGridViewCellEventArgs e)
        {
            dtgProductos[e.ColumnIndex, e.RowIndex].Selected = true; // this line is not working
            var index = dtgProductos.SelectedRows[0].Cells.IndexOf(dtgProductos.SelectedRows[0].Cells[e.ColumnIndex]);
            switch (index)
            {
                case 2:
                    dtgProductos.SelectedRows[0].Cells[4].Selected = true;
                    dtgProductos.BeginEdit(true);
                    break;
                case 4:
                    dtgProductos.SelectedRows[0].Cells[5].Selected = true;
                    dtgProductos.BeginEdit(true);
                    break;
                case 5:
                    btnAddProduct.Focus();
                    break;
                default:
                    break;
            }
        }

    So when I edit a row that is not the last one, I get this error:

        Operation is not valid because it results in a reentrant call to the SetCurrentCellAddressCore function.
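
    A common pattern for this, sketched under the assumption of standard WinForms: subclass the grid and swallow the Enter key, so the edit is committed without the built-in move-to-next-row. Separately, the reentrancy error usually comes from changing the selection inside CellEndEdit; deferring that work with BeginInvoke is the usual workaround.

        public class StayPutDataGridView : DataGridView
        {
            protected override bool ProcessDialogKey(Keys keyData)
            {
                if ((keyData & Keys.KeyCode) == Keys.Enter)
                {
                    EndEdit();   // commit the cell edit in place
                    return true; // handled: the current row stays selected
                }
                return base.ProcessDialogKey(keyData);
            }

            protected override bool ProcessDataGridViewKey(KeyEventArgs e)
            {
                if (e.KeyCode == Keys.Enter)
                    return true; // swallow Enter when not editing, too
                return base.ProcessDataGridViewKey(e);
            }
        }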


  • Removing phone numbers from a document

    - by Grant Collins
    Hi, I've got a challenge that I am hoping the SO community is able to help me with. I'm trying to parse a lot of HTML documents in my PHP application to remove personal details such as names, addresses, and phone numbers. I can remove most of these details without too much trouble; however, the phone numbers are a real problem for me. My idea is to take the text from these documents and then use a regex to identify the phone numbers and replace them with another value such as 'xxxx'. I've got two regexes, one for UK landline numbers and one for UK cell/mobile numbers. However, when I try to run them against the text it just returns an empty string. I am using the following preg_replace code:

        $pattens = array(
            '/^(((\+44\s?\d{4}|\(?0\d{4}\)?)\s?\d{3}\s?\d{3})|((\+44\s?\d{3}|\(?0\d{3}\)?)\s?\d{3}\s?\d{4})|((\+44\s?\d{2}|\(?0\d{2}\)?)\s?\d{4}\s?\d{4}))(\s?\#(\d{4}|\d{3}))?$/',
            '/^(\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}$/'
        );
        $replace = array('xxxxx', 'xxxxx');

        // do the search for the numbers
        $updatedContents = preg_replace($pattens, $replace, $htmlContents);

    This is causing me a lot of head scratching, as I thought I had this nailed, but I can't see what's wrong. I am sure that it is something really simple. Thanks, Grant
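
    A likely culprit, offered as a strong suspicion rather than a certainty: both patterns are anchored with ^ and $, so they only match when the entire subject string is a phone number; inside a larger document they match nothing. Dropping the anchors, or swapping them for word boundaries, lets them match embedded numbers. A sketch with the mobile pattern:

        // Anchored: matches only if the whole string is the number.
        $anchored = '/^(\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}$/';

        // Unanchored: matches phone numbers embedded in running text.
        $embedded = '/(\+44\s?7\d{3}|\(?07\d{3}\)?)\s?\d{3}\s?\d{3}/';

        $updatedContents = preg_replace($embedded, 'xxxxx', $htmlContents);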


  • Create database in Shell Script - convert from PHP

    - by snaken
    I have the following PHP code that I use to create a database and grant permissions to a user:

        $con = mysql_connect("IP.ADDRESS", "user", "pass");
        mysql_query("CREATE DATABASE " . $dbuser . "", $con) or die(mysql_error());
        mysql_query("grant all on " . $dbuser . ".* to " . $dbname . " identified by '" . $dbpass . "'", $con) or die(mysql_error());

    I want to perform these same actions but from within a shell script. Is it just something like this?

        MyUSER="user"
        MyPASS="pass"
        MYSQL -u $MyUSER -h -p$MyPASS -Bse "CREATE DATABASE $dbuser;"
        MYSQL -u $MyUSER -h -p$MyPASS -Bse "GRANT ALL ON ${DBUSER}.* to $DBNAME identified by $DBPASS;"
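
    Roughly, yes, with a few corrections, sketched here under the assumption that the mysql client binary is on the PATH: the command is lowercase mysql, and -h needs a hostname argument. The grant's user/database naming is kept exactly as in the PHP version:

        #!/bin/sh
        MYUSER="user"
        MYPASS="pass"
        MYHOST="IP.ADDRESS"

        # create the database, then grant the same privileges as the PHP code
        mysql -u "$MYUSER" -h "$MYHOST" -p"$MYPASS" -Bse "CREATE DATABASE $DBUSER;"
        mysql -u "$MYUSER" -h "$MYHOST" -p"$MYPASS" \
              -Bse "GRANT ALL ON $DBUSER.* TO $DBNAME IDENTIFIED BY '$DBPASS';"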


  • Converting to Visual Studio 2008 and .NET 3.5

    - by Grant Back
    The process of converting from Visual Studio .NET 2003 to Visual Studio 2008 is satisfyingly straightforward. I thought it would be worth asking a couple of questions though:

    1) Are there any 'gotchas' with this conversion process that we should be aware of?
    2) The same question goes for upgrading the .NET Framework from 1.1 to 3.5.

    Thanks.


  • Stop writing blank line at the end of CSV file (using MATLAB)

    - by Grant M.
    Hello all ... I'm using MATLAB to open a batch of CSV files containing column headers and data (using the 'importdata' function), then I manipulate the data a bit and write the headers and data to new CSV files using the 'dlmwrite' function. I'm using the '-append' and 'newline' attributes of 'dlmwrite' to add each line of text/data on a new line. Each of my new CSV files has a blank line at the end, whereas this blank line was not there before when I read in the data ... and I'm not using 'newline' on my final call of 'dlmwrite'. Does anyone know how I can keep from writing this blank line to the end of my CSV files? Thanks for your help, Grant

    EDITED 5/18/10 1:35PM CST - Added information about code and text file per request ... you'll notice after performing the procedure below that there appears to be a carriage return at the end of the last line in the new text file. Consider a text file named 'textfile.txt' that looks like this:

        Column1, Column2, Column3, Column4, Column 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5
        1, 2, 3, 4, 5

    Here's a sample of the code I am using:

        % import data
        importedData = importdata('textfile.txt');

        % manipulate data
        importedData.data(:,1) = 100;

        % store column headers into single comma-delimited
        % character array (for easy writing later)
        columnHeaders = importedData.textdata{1};
        for counter = 2:size(importedData.textdata,2)
            columnHeaders = horzcat(columnHeaders,',',importedData.textdata{counter});
        end

        % write column headers to new file
        dlmwrite('textfile_updated.txt',columnHeaders,'Delimiter','','newline','pc')

        % append all but last line of data to new file
        for dataCounter = 1:(size(importedData.data,2)-1)
            dlmwrite('textfile_updated.txt',importedData.data(dataCounter,:),'Delimiter',',','newline','pc','-append')
        end

        % append last line of data to new file, not
        % creating new line at end
        dlmwrite('textfile_updated.txt',importedData.data(end,:),'Delimiter',',','-append')
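
    One avenue, sketched as a suggestion: dlmwrite offers little control over the final line terminator, so write the last row manually with fprintf, which appends exactly what the format string says and nothing more. The filename and the five-column format are taken from the example above:

        % append the last row without any trailing newline
        fid = fopen('textfile_updated.txt', 'a');
        fprintf(fid, '%g,%g,%g,%g,%g', importedData.data(end,:));
        fclose(fid);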


  • LINQ to Entities: strange deployment behavior

    - by SO give me back my rep
    Hi, I started building apps with this technology and I am facing a weird problem: on some machines I need to add these lines to the app.config to get it to work:

        <system.data>
            <DbProviderFactories>
                <add name="MySQL Data Provider"
                     invariant="MySql.Data.MySqlClient"
                     description=".Net Framework Data Provider for MySQL"
                     type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.3.0.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
            </DbProviderFactories>
        </system.data>

    while on other machines it runs well without these lines. The thing is that when I add these lines, the app won't run on machines that did not need these lines in the first place, and I would like not to publish two versions of the app. Is there a way to solve this?
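
    This symptom is consistent with a duplicate provider registration: on machines where MySQL Connector/Net is installed, the MySql.Data.MySqlClient factory is already registered in machine.config, so registering it again in app.config fails because the invariant name is no longer unique. The usual remedy, offered here as a hedged suggestion, is to remove any existing registration before adding your own, which lets one config work on both kinds of machine:

        <system.data>
            <DbProviderFactories>
                <!-- clear any machine-wide registration first, then re-add -->
                <remove invariant="MySql.Data.MySqlClient" />
                <add name="MySQL Data Provider"
                     invariant="MySql.Data.MySqlClient"
                     description=".Net Framework Data Provider for MySQL"
                     type="MySql.Data.MySqlClient.MySqlClientFactory, MySql.Data, Version=6.3.0.0, Culture=neutral, PublicKeyToken=c5687fc88969c44d" />
            </DbProviderFactories>
        </system.data>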


  • Simple question about javascript history.go

    - by Camran
    I have a classifieds website. In every classified there is a back link, which simply takes the browser back one step. This is because when users search the classifieds and click on one to view it, they can easily go back with a link as well (instead of only the browser back button). Here is the problem: if the classified's address is entered directly into the address bar of a browser, or if somebody bookmarked a classified, then this back link would take them someplace else. Is there any way of making sure that the previous page is a certain page (index.php in my case)? That way I would only display the back link if the previous page was index.php. Thanks
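
    One way, assuming the goal is exactly as described: check document.referrer, which holds the URL of the page the visitor came from and is empty for bookmarks and typed-in addresses. A sketch, with the element id assumed:

        // Show the back link only when the visitor arrived from index.php.
        if (document.referrer.indexOf('index.php') !== -1) {
            document.getElementById('back-link').style.display = 'inline';
        }

    Worth noting: the referrer can be stripped by browsers or proxies, so hiding the link by default is the safer arrangement.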


  • Computer will not boot - disk read error - cannot boot from HD or DVD

    - by Grant Palin
    This is a 3-year-old system: HP a1640n. There have been no issues with it in the past. I added a video card two years ago, and more memory one year ago, both without issues. There haven't been any recent hardware changes. I did install Win7 in October, but there were no issues with that either. I used the computer fine two nights ago and turned it off. Yesterday, I tried to turn it on and got the error:

        A Disk Read Error Occurred. Press CTRL ALT DEL to restart

    So I restart, see the initial start screen (HP), and enter the BIOS. The hard drive and DVD drive appear to be listed, but the names are gibberish text. I tried putting a Windows disc in the DVD drive and continued with the boot, but the disc did not get recognized, even though the BIOS was set to check for optical media before the hard drive. Back to the error screen. If the computer would boot from a CD or DVD, I would just figure the hard drive needed replacing, but both being problematic worries me. Is this a matter of replacing both the hard drive and DVD drive, or might it be an indication of a bigger problem? Thanks for any advice.


  • My server's been hacked EMERGENCY

    - by Grant unwin
    I'm on my way into work at 9.30 p.m. on a Sunday because our server has been compromised somehow and was resulting in a DOS attack on our provider. The server's access to the Internet has been shut down, which means over 500-600 of our clients' sites are now down. Now this could be an FTP hack, or some weakness in code somewhere; I'm not sure till I get there. How can I track this down quickly? We're in for a whole lot of litigation if I don't get the server back up ASAP. Any help is appreciated.

    UPDATE: Thanks to everyone for your help. Luckily I WASN'T the only person responsible for this server, just the nearest. We managed to resolve this problem, although it may not apply to many others in a different situation, so I'll detail what we did.

    We unplugged the server from the net. It was performing (attempting to perform) a denial-of-service attack on another server in Indonesia, and the guilty party was also based there. We first tried to identify where on the server this was coming from; considering we have over 500 sites on the server, we expected to be at it for some time. However, with SSH access still available, we ran a command to find all files edited or created around the time the attacks started. Luckily, the offending file was created over the winter holidays, which meant that not many other files were created on the server at that time. We were then able to identify the offending file, which was inside the uploaded-images folder within a ZenCart website.

    After a short cigarette break, we concluded that, due to the file's location, it must have been uploaded via a file upload facility that was inadequately secured. After some googling, we found that there was a security vulnerability that allowed files to be uploaded within the ZenCart admin panel, as a picture for a record company (a section that was never really even used). Posting this form just uploaded any file; it did not check the extension of the file, and didn't even check whether the user was logged in. This meant that any file could be uploaded, including a PHP file for the attack. We secured the vulnerability with ZenCart on the infected site and removed the offending files. The job was done, and I was home for 2 a.m.

    The moral:
    - Always apply security patches for ZenCart, or any other CMS for that matter. When security updates are released, the whole world is made aware of the vulnerability.
    - Always do backups, and back up your backups.
    - Employ or arrange for someone who will be there in times like these, to prevent anyone from relying on a panicky post on Server Fault.

    Happy servering!
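
    For the curious, the file hunt described above can be done with a single find invocation. This is a sketch assuming GNU find, with an illustrative web root and date window rather than the exact command used:

        # List files created/modified inside the suspected window, with details.
        find /var/www -type f -newermt '2010-12-20' ! -newermt '2011-01-04' -ls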


  • Implementing history support using jQuery for AJAX websites built on ASP.NET AJAX

    - by anil.kasalanati
    Problem Statement: Most modern-day websites use AJAX for page navigation, and the days of full HTTP redirects are gone, so it is imperative that we support the browser's back and forward buttons so that end users' navigation is not broken. In this article we discuss the solutions that are already available and the problems with them.

    Microsoft History Support: As of .NET 3.5 SP1, Microsoft's ScriptManager supports history for websites using UpdatePanels. This is achieved by enabling the EnableHistory property on the ScriptManager and then handling the "Page_Browser_Navigate" event. Whenever the browser buttons are clicked, the event is fired and the application can write code to do the navigation. The following articles provide good tutorials on how to do that:

    http://www.asp.net/aspnet-in-net-35-sp1/videos/introduction-to-aspnet-ajax-history
    http://www.codeproject.com/KB/aspnet/ajaxhistorymanagement.aspx

    The Microsoft API internally creates an IFrame and changes the bookmark of the URL. Unfortunately this has a bug: it does not work in IE6 and IE7, which are the major browsers, although it works in IE8 and Firefox. Microsoft has apparently fixed this bug in .NET 4.0; the following blog post covers it:

    http://weblogs.asp.net/joshclose/archive/2008/11/11/asp-net-ajax-addhistorypoint-bug.aspx

    For solutions still running on .NET 3.5 SP1 there is no fix that Microsoft offers, so there are two ways to solve this:

    - Disable the back button.
    - Develop a custom solution.

    Disable back button: Even though this might look like a very simple thing to do, there are issues around it, because there is no event which can be manipulated from JavaScript; the browser does not provide an API for this. So most of the technical solutions on the internet offer workarounds, like doing a history.forward(1) so that even if the user clicks the back button, the destination page redirects the user to the original page. This is not a good customer experience, and it does not work for an ASP.NET website where there are different views on the same page. There are other ways around it, based on detecting the window unload events and writing code there. There are two events, onbeforeunload and onunload, and we can write code to show a confirmation message to the user. If we write code in onunload, we can only show a message; it is too late to stop the navigation. If we write it in onbeforeunload, we can stop the navigation if the user clicks cancel, but this event is triggered for all AJAX calls and hyperlinks where the href is anything other than #. We can do this, but the website has to be checked properly to ensure there are no links where the href is not #; otherwise the user will see a popup message saying "you are leaving the website". Believe me, after doing a lot of research on disabling the back button, I found it easier to support it than to disable it. So I am going to discuss a solution which works, using jQuery with some tweaking.

    Custom Solution: jQuery already provides an API to manage the history of an AJAX website - http://plugins.jquery.com/project/history. We need to integrate this with the Microsoft page request manager so that the two work in tandem. The page state is maintained in a cookie so that it can be passed to the server; I used the jQuery cookie plugin for that - http://plugins.jquery.com/node/1386/release. First, when the page loads, we need to hook up all the events on the page which should create browser history. Following is the code for that:

        jQuery(document).ready(function() {
            // Initialize history plugin.
            // The callback is called at once by present location.hash.
            jQuery.history.init(pageload);

            // set onclick event for buttons
            jQuery("a[@rel='history']").click(function() {
                var hash = this.page;
                hash = hash.replace(/^.*#/, '');
                isAsyncPostBack = true;
                // moves to a new page.
                // pageload is called at once.
                jQuery.history.load(hash);
                return true;
            });
        });

    The above script gets all the DOM objects which have the attribute rel="history" and adds the event. In our test page we have link buttons which have the attribute rel set to history:

        <asp:LinkButton ID="Previous" rel="history" runat="server" onclick="PreviousOnClick">Previous</asp:LinkButton>
        <asp:LinkButton ID="AsyncPostBack" rel="history" runat="server" onclick="NextOnClick">Next</asp:LinkButton>
        <asp:LinkButton ID="HistoryLinkButton" runat="server" style="display:none" onclick="HistoryOnClick"></asp:LinkButton>

    Note the hidden HistoryLinkButton, which is used to raise a server-side postback when the browser's back or forward button is pressed. Also note that we use style="display:none" and not Visible="false", because ASP.NET AJAX disallows postbacks from controls with Visible="false". In general, the pageload event is executed on the client side when back or forward is pressed; the function is shown below:

        function pageload(hash) {
            if (hash) {
                if (!isAsyncPostBack) {
                    jQuery.cookie("page", hash);
                    __doPostBack("HistoryLinkButton", "");
                }
                isAsyncPostBack = false;
            } else {
                // start page
                jQuery("#load").empty();
            }
        }

    As you can see, when there is a hash in the URL we do an ASP.NET AJAX postback using the statement __doPostBack("HistoryLinkButton", ""). So whenever the user clicks back or forward, the postback happens via the event statement we provide, and the Previous event code is invoked in the code-behind. We then need code that uses the pageId present in the URL to change the page content. There is one important thing to note: because the hash is worked out using the pageIds, the hash must be recalculated after every AJAX postback, so the following code is plugged in:

        function ReWorkHash() {
            jQuery("a[@rel='history']").unbind("click");
            jQuery("a[@rel='history']").click(function() {
                var hash = jQuery(this).attr("page");
                hash = hash.replace(/^.*#/, '');
                jQuery.cookie("page", hash);
                isAsyncPostBack = true;
                // moves to a new page.
                // pageload is called at once.
                jQuery.history.load(hash);
                return true;
            });
        }

    This code is executed from the code-behind using ScriptManager.RegisterClientScriptBlock, as shown below:

        ScriptManager.RegisterClientScriptBlock(this, typeof(_Default), "Recalculater", "ReWorkHash();", true);

    A sample application is available to download at http://techconsulting.vpscustomer.com/Source/HistoryTest.zip, and a working sample is available at http://techconsulting.vpscustomer.com/Samples/Default.aspx.


  • Ask the Readers: Backing Your Files Up – Local Storage versus the Cloud

    - by Asian Angel
    Backing up important files is something that all of us should do on a regular basis, but may not have given as much thought as we should. This week we would like to know if you use local storage, cloud storage, or a combination of both to back your files up. Photo by camknows.

    For some people, local storage media may be the most convenient and/or affordable way to back up their files. Having those files stored on media under your control can also provide a sense of security and peace of mind. But storing your files locally may also have drawbacks if something happens to your storage media. So how do you know whether the benefits outweigh the disadvantages or not? Here are some possible pros and cons that may affect your decision to use local storage to back up your files:

    Local Storage

    Pros:
    - You are in control of your data
    - Your files are portable and can go with you when needed if using external or flash drives
    - Files are accessible without an internet connection
    - You can easily add more storage capacity as needed (additional drives, etc.)

    Cons:
    - You need to arrange room for your storage media (if you have multiple external drives, etc.)
    - Possible hardware failure
    - No access to your files if you forget to bring your storage media with you or it is too bulky to bring along
    - Theft and/or loss of home with all contents due to circumstances like fire

    If you are someone who is always on the go and needs to travel as lightly as possible, cloud storage may be the perfect way for you to back up and access your files. Perhaps your laptop has a hard-drive failure or gets stolen... unhappy events to be sure, but you will still have a copy of your files available. Perhaps a company wants to make sure their records, files, and other information are backed up off site in case of a major hardware or system failure... expensive and/or frustrating to fix if it happens, but once again there is a nice backup ready to go once things are fixed. As with local storage, here are some possible pros and cons that may influence your choice of cloud storage to back up your files:

    Cloud Storage

    Pros:
    - No need to carry around flash or bulky external drives
    - All of your files are accessible wherever there is an internet connection
    - No need to deal with local storage media (or its upkeep)
    - Your files are still safe if your home is broken into or other unfortunate circumstances occur

    Cons:
    - Your files and data are not 100% under your control
    - Possible hardware failure or loss of files on the part of your cloud storage provider (this could include a disgruntled employee wreaking havoc)
    - No access to your files if you do not have an internet connection
    - The cloud storage provider may eventually shut down due to financial hardship or other unforeseen circumstances
    - The possibility of your files and data being stolen by hackers due to a security breach on the part of your cloud storage provider

    You may also prefer to try and cover all of the possibilities by using both local and cloud storage to back up your files. If something happens to one, you always have the other to fall back on. Need access to those files at or away from home? As long as you have access to either your storage media or an internet connection, you are good to go. Maybe you are getting ready to choose a backup solution but are not sure which one would work better for you. Here is your chance to ask your fellow HTG readers which one they would recommend. Got a great backup solution already in place? Then be sure to share it with your fellow readers!


  • Using NServiceBus behind a custom web service

    - by Michael Stephenson
    In this post I'd like to talk about an architecture scenario we had recently and how we were able to utilise NServiceBus to help us address the problem.

    Scenario: Cognos is a reporting system used by one of my clients. A while back we developed a web service façade to allow line-of-business applications to access reports from Cognos to support their various functions. The service was intended to provide access to quick-running or pre-generated reports, which could be accessed in real time on demand. One of the key aims of the web service was to provide a simple, generic interface so that applications could get any report without needing to worry about the complex .NET SDK for Cognos. The web service also supported multi-hop Kerberos delegation so that report data could be accessed under the context of the end user. This service worked well for a period of time.

    The Problem: The problem we encountered was that reports were now also required by batch processes. The original design was optimised for low latency so users would enjoy a positive experience, but when the batch processes started to request 250+ concurrent reports over an extended period of time, you can begin to imagine the sorts of problems that come into play. The key problems this new scenario caused are:

    - Users may be affected, and the latency of on-demand reports became significantly worse
    - The Cognos infrastructure was not scaled sufficiently to be able to cope with these long peaks of load
    - From a cost perspective, it just isn't feasible to scale the Cognos infrastructure to handle a load that occurs only in a couple-of-hours window each night

    We really needed to introduce a second pattern for accessing this service which would support high-throughput scenarios. We also had little control over the batch process in terms of being able to throttle its load; we could, however, make some changes to the way it accessed the reports.

    The Approach: My idea was to introduce a throttling mechanism between the web service façade and Cognos. This would allow the batch processes to push report requests hard at the web service, which we were confident the web service could handle. The web service would queue these requests, process them behind the scenes, and make a call back to the batch application to provide the report once it had been retrieved. In terms of technology we had some limitations, because we were not able to use WCF or IIS7, where MSMQ-activated WCF services could have helped; but we did have MSMQ as an option, and I thought NServiceBus could do just the job to help us here.
    The flow of how this would work was as follows:

    1. The batch application sends a request for a report to the web service.
    2. The web service uses NServiceBus to send the message to a queue.
    3. The NServiceBus generic host runs as a Windows service with a message handler which subscribes to these messages.
    4. The message handler gets the message and accesses the report from Cognos.
    5. The message handler calls back to the original batch application; this is decoupled because the calling application provides a callback URL.
    6. The report arrives at the batch application and is processed as normal.

    This approach looks something like the below diagram. The key point is that an application wanting to take advantage of the batch-driven reports needs to do the following:

    - Implement our callback contract
    - Make a call to the service providing a callback URL
    - Provide a correlation ID so it knows how to tie each response back to its request

    What does NServiceBus offer in this solution? This scenario is not the typical messaging service-bus solution people implement with NServiceBus, but it did offer the following:

    - Simplified interaction with MSMQ
    - The ability to configure the number of processes working through the queue, so we could find a balance between the load on Cognos and the applications' end-to-end processing time
    - Retries and a way to manage failed messages
    - A high-availability setup

    The simple thing is that NServiceBus gave us the platform to build the solution on. We just implemented a message handler which functionally processed a message (see the sketch below), and we could rely on NServiceBus to do all of the hard work around managing the queues and all of the lower-level things that would have taken ages to write to any kind of robust level.

    Conclusion: With this approach we were able to deal with a fairly significant performance issue without too much rework. Hopefully this write-up gives people some insight into how to leverage the excellent NServiceBus framework to help solve integration and high-throughput scenarios.
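
    To make the shape of that handler concrete, here is a minimal sketch; the message type, the Cognos client, and the callback helper are all hypothetical stand-ins for whatever the real project used:

        using NServiceBus;

        public class ReportRequestHandler : IHandleMessages<ReportRequestMessage>
        {
            public void Handle(ReportRequestMessage message)
            {
                // Pull the report from Cognos (CognosReportClient is hypothetical).
                byte[] report = CognosReportClient.GetReport(message.ReportId);

                // Post the result to the caller-supplied callback URL; the
                // correlation id lets the caller match response to request.
                CallbackClient.Post(message.CallbackUrl, message.CorrelationId, report);
            }
        }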

