Search Results

Search found 19625 results on 785 pages for 'local groups'.


  • Multiple Notifications Not Firing

    - by motionpotion
    I'm scheduling two notifications as shown below. The app is a long-lived app. One local notification is scheduled to repeat every hour; the other is scheduled to repeat once per day. Only the second scheduled notification (the hourly notification) fires.

    ```objc
    - (void)scheduleNotification {
        LogInfo(@"IN scheduleNotification - DELETEYESTERDAY NOTIFICATION SCHEDULED.");
        UILocalNotification *notif = [[UILocalNotification alloc] init];
        NSDictionary *deleteDict = [NSDictionary dictionaryWithObject:@"DeleteYesterday" forKey:@"DeleteYesterday"];
        NSCalendar *calendar = [NSCalendar currentCalendar];
        NSDateComponents *components = [[NSDateComponents alloc] init];
        components = [[NSCalendar currentCalendar] components:NSDayCalendarUnit | NSMonthCalendarUnit | NSYearCalendarUnit
                                                     fromDate:[NSDate date]];
        NSInteger day = [components day];
        NSInteger month = [components month];
        NSInteger year = [components year];
        [components setDay:day];
        [components setMonth:month];
        [components setYear:year];
        [components setHour:00];
        [components setMinute:45];
        [components setSecond:0];
        [calendar setTimeZone:[NSTimeZone systemTimeZone]];
        NSDate *dateToFire = [calendar dateFromComponents:components];
        notif.fireDate = dateToFire;
        notif.timeZone = [NSTimeZone systemTimeZone];
        notif.repeatInterval = NSDayCalendarUnit;
        notif.userInfo = deleteDict;
        [[UIApplication sharedApplication] scheduleLocalNotification:notif];
    }
    ```

    and then I schedule this after the above:

    ```objc
    - (void)scheduleHeartBeat {
        LogInfo(@"IN scheduleHeartBeat - HEARTBEAT NOTIFICATION SCHEDULED.");
        UILocalNotification *heartbeat = [[UILocalNotification alloc] init];
        NSDictionary *heartbeatDict = [NSDictionary dictionaryWithObject:@"HeartBeat" forKey:@"HeartBeat"];
        heartbeat.userInfo = heartbeatDict;
        NSCalendar *calendar = [NSCalendar currentCalendar];
        NSDateComponents *components = [[NSDateComponents alloc] init];
        components = [[NSCalendar currentCalendar] components:NSDayCalendarUnit | NSMonthCalendarUnit | NSYearCalendarUnit
                                                     fromDate:[NSDate date]];
        NSInteger day = [components day];
        NSInteger month = [components month];
        NSInteger year = [components year];
        [components setDay:day];
        [components setMonth:month];
        [components setYear:year];
        [components setHour:00];
        [components setMinute:50];
        [components setSecond:0];
        [calendar setTimeZone:[NSTimeZone systemTimeZone]];
        NSDate *dateToFire = [calendar dateFromComponents:components];
        heartbeat.fireDate = dateToFire;
        heartbeat.timeZone = [NSTimeZone systemTimeZone];
        heartbeat.repeatInterval = NSHourCalendarUnit;
        [[UIApplication sharedApplication] scheduleLocalNotification:heartbeat];
    }
    ```

    Both are scheduled when the app launches, in the viewDidLoad of the main view controller.
    ```objc
    - (void)viewDidLoad {
        [self scheduleNotification];
        [self scheduleHeartBeat];
        [super viewDidLoad];
        //OTHER CODE HERE
    }
    ```

    Then in the app delegate I have the following:

    ```objc
    - (void)application:(UIApplication *)application didReceiveLocalNotification:(UILocalNotification *)notification {
        LogInfo(@"IN didReceiveLocalNotification NOTIFICATION RECEIVED.");
        NSString *notificationHeartBeat = nil;
        NSString *notificationDeleteYesterday = nil;
        application.applicationIconBadgeNumber = 0;
        if (notification) {
            notificationHeartBeat = [notification.userInfo objectForKey:@"HeartBeat"];
            notificationDeleteYesterday = [notification.userInfo objectForKey:@"DeleteYesterday"];
            LogInfo(@"IN didReceiveLocalNotification HEARTBEAT NOTIFICATION TYPE: %@", notificationHeartBeat);
            LogInfo(@"IN didReceiveLocalNotification DELETEYESTERDAY NOTIFICATION TYPE: %@", notificationDeleteYesterday);
        }
        if ([notificationHeartBeat isEqualToString:@"HeartBeat"]) {
            //CREATE THE HEARTBEAT
            LogInfo(@"CREATING THE HEARTBEAT.");
            //CALL THE FUNCTIONALITY HERE THAT CREATES HEARTBEAT.
        }
        if ([notificationDeleteYesterday isEqualToString:@"DeleteYesterday"]) {
            //DELETE YESTERDAYS RECORDS
            LogInfo(@"DELETING YESTERDAYS RECORDS.");
        }
    }
    ```

    The notification that is scheduled last (scheduleHeartBeat) is the only one that fires. Could somebody help me figure out why this is happening?
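    A quick way to narrow this down is to dump what iOS has actually registered right after both calls. This is a hedged sketch, not the poster's code; `scheduledLocalNotifications` is the standard UIApplication property of that SDK era:

    ```objc
    // Hedged sketch: log every notification the system has accepted, so a
    // rejected or overwritten registration shows up immediately.
    NSArray *scheduled = [[UIApplication sharedApplication] scheduledLocalNotifications];
    NSLog(@"%lu notifications scheduled", (unsigned long)[scheduled count]);
    for (UILocalNotification *n in scheduled) {
        NSLog(@"fireDate=%@ repeat=%lu userInfo=%@",
              n.fireDate, (unsigned long)n.repeatInterval, n.userInfo);
    }
    ```

    If both entries appear with the expected fire dates, the scheduling itself is fine and the problem is in delivery or handling.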

    Read the article

  • Make wix installation set upgrade to same folder

    - by Magnus Akselvoll
    How can I make a major upgrade to an installation set (MSI) built with WiX install into the same folder as the original installation? The installation is correctly detected as an upgrade, but the directory-selection screen is still shown, and with the default value (not necessarily the current installation folder). Do I have to do manual work like saving the installation folder in a registry key upon first install and then reading this key upon upgrade? If so, is there an example? Or is there some easier way to achieve this in MSI / WiX? For reference, my current WiX file is below:

    ```xml
    <?xml version="1.0" encoding="utf-8"?>
    <!-- Package information -->
    <Package Keywords="Installer" Id="e85e6190-1cd4-49f5-8924-9da5fcb8aee8"
             Description="Installs MyCompany Integration Framework 1.0.0"
             Comments="Installs MyCompany Integration Framework 1.0.0"
             InstallerVersion="100" Compressed="yes" />

    <Upgrade Id='9071eacc-9b5a-48e3-bb90-8064d2b2c45d'>
      <UpgradeVersion Property="PATCHFOUND" OnlyDetect="no" Minimum="0.0.1"
                      IncludeMinimum="yes" Maximum="1.0.0" IncludeMaximum="yes"/>
    </Upgrade>

    <!-- Useless but necessary... -->
    <Media Id="1" Cabinet="MyCompany.cab" EmbedCab="yes" />

    <!-- Precondition: .Net 2 must be installed -->
    <Condition Message='This setup requires the .NET Framework 2 or higher.'>
      <![CDATA[MsiNetAssemblySupport >= "2.0.50727"]]>
    </Condition>

    <Directory Id="TARGETDIR" Name="SourceDir">
      <Directory Id="MyCompany" Name="MyCompany">
        <Directory Id="INSTALLDIR" Name="Integrat" LongName="MyCompany Integration Framework">
          <Component Id="MyCompanyDllComponent" Guid="4f362043-03a0-472d-a84f-896522ce7d2b" DiskId="1">
            <File Id="MyCompanyIntegrationDll" Name="IbIntegr.dll" src="..\Build\MyCompany.Integration.dll"
                  Vital="yes" LongName="MyCompany.Integration.dll" />
            <File Id="MyCompanyServiceModelDll" Name="IbSerMod.dll" src="..\Build\MyCompany.ServiceModel.dll"
                  Vital="yes" LongName="MyCompany.ServiceModel.dll" />
          </Component>
          <!-- More components -->
        </Directory>
      </Directory>
    </Directory>

    <Feature Id="MyCompanyProductFeature" Title='MyCompany Integration Framework'
             Description='The complete package' Display='expand' Level="1"
             InstallDefault='local' ConfigurableDirectory="INSTALLDIR">
      <ComponentRef Id="MyCompanyDllComponent" />
    </Feature>

    <!-- Task scheduler application. It has to be used as a property -->
    <Property Id="finaltaskexe" Value="MyCompany.Integration.Host.exe" />
    <Property Id="WIXUI_INSTALLDIR" Value="INSTALLDIR" />

    <InstallExecuteSequence>
      <!-- command must be executed: MyCompany.Integration.Host.exe /INITIALCONFIG parameters.xml -->
      <Custom Action='PropertyAssign' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <Custom Action='LaunchFile' After='InstallFinalize'>NOT Installed AND NOT PATCHFOUND</Custom>
      <RemoveExistingProducts Before='CostInitialize' />
    </InstallExecuteSequence>

    <!-- execute command -->
    <CustomAction Id='PropertyAssign' Property='PathProperty' Value='[INSTALLDIR][finaltaskexe]' />
    <CustomAction Id='LaunchFile' Property='PathProperty' ExeCommand='/INITIALCONFIG "[INSTALLDIR]parameters.xml"' Return='asyncNoWait' />

    <!-- User interface information -->
    <UIRef Id="WixUI_InstallDir" />
    <UIRef Id="WixUI_ErrorProgressText" />
    ```
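    For reference, the widely used workaround is exactly the "remember property" pattern the question suspects: persist INSTALLDIR in the registry at first install and search for it on upgrade. A hedged sketch in the WiX 2-style syntax the question uses (the registry key, component Id, and GUID are illustrative, not from the original file):

    ```xml
    <!-- Hedged sketch: read a previously saved folder back into INSTALLDIR, if present. -->
    <Property Id="INSTALLDIR">
      <RegistrySearch Id="RememberedInstallDir" Root="HKLM"
                      Key="SOFTWARE\MyCompany\IntegrationFramework"
                      Name="InstallDir" Type="raw" />
    </Property>

    <!-- In an always-installed component: save whatever folder the user chose. -->
    <Component Id="RememberInstallDirComponent" Guid="PUT-GUID-HERE" DiskId="1">
      <Registry Root="HKLM" Key="SOFTWARE\MyCompany\IntegrationFramework"
                Name="InstallDir" Type="string" Value="[INSTALLDIR]" />
    </Component>
    ```

    On upgrade the RegistrySearch runs before the directory dialog, so INSTALLDIR is pre-populated with the original folder instead of the default.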

    Read the article

  • PHP Form - Empty input enter this text - Validation

    - by James Skelton
    No doubt a very simple question for someone with PHP knowledge. I have a form with a datepicker. All is fine when a user has selected a date; the email is sent with: Date: 2012 04 10. But if the user has skipped this and left it blank (as I have not made it required), I would like it sent as: Date: Not Entered (or something). Instead, at the minute, it of course reads just: Date:

    Form input:

    ```html
    <input type="text" class="form-control" id="datepicker" name="datepicker" size="50" value="Date Of Wedding" />
    ```

    This is the validator:

    ```javascript
    $(document).ready(function(){
        //validation contact form
        $('#submit').click(function(event){
            event.preventDefault();
            var fname = $('#name').val();
            var validInput = new RegExp(/^[a-zA-Z0-9\s]+$/);
            var email = $('#email').val();
            var validEmail = new RegExp(/^([a-zA-Z0-9_\.\-])+\@(([a-zA-Z0-9\-])+\.)+([a-zA-Z0-9]{2,4})+$/);
            var message = $('#message').val();
            if(fname==''){
                showError('<div class="alert alert-danger">Please enter your name.</div>', $('#name'));
                $('#name').addClass('required');
                return;
            }
            if(!validInput.test(fname)){
                showError('<div class="alert alert-danger">Please enter a valid name.</div>', $('#name'));
                $('#name').addClass('required');
                return;
            }
            if(email==''){
                showError('<div class="alert alert-danger">Please enter an email address.</div>', $('#email'));
                $('#email').addClass('required');
                return;
            }
            if(!validEmail.test(email)){
                showError('<div class="alert alert-danger">Please enter a valid email.</div>', $('#email'));
                $('#email').addClass('required');
                return;
            }
            if(message==''){
                showError('<div class="alert alert-danger">Please enter a message.</div>', $('#message'));
                $('#message').addClass('required');
                return;
            }

            // setup some local variables
            var request;
            var form = $(this).closest('form');
            // serialize the data in the form
            var serializedData = form.serialize();
            // fire off the request to /contact.php
            request = $.ajax({
                url: "contact.php",
                type: "post",
                data: serializedData
            });
            // callback handler that will be called on success
            request.done(function (response, textStatus, jqXHR){
                $('.contactWrap').show('slow').fadeIn("slow").html(' <div class="alert alert-success centered"><h3>Thank you! Your message has been sent.</h3></div> ');
            });
            // callback handler that will be called on failure
            request.fail(function (jqXHR, textStatus, errorThrown){
                // log the error to the console
                console.error("The following error occured: " + textStatus, errorThrown);
            });
        });

        //remove 'required' class and hide error
        $('input, textarea').keyup(function(event){
            if($(this).hasClass('required')){
                $(this).removeClass('required');
                $('.error').hide("slow").fadeOut("slow");
            }
        });

        // show error
        showError = function (error, target){
            $('.error').removeClass('hidden').show("slow").fadeIn("slow").html(error);
            $('.error').data('target', target);
            $(target).focus();
            console.log(target);
            console.log(error);
            return;
        }
    });
    ```
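    Since the substitution happens where the mail body is built, the PHP side (contact.php in the question's AJAX call) is the natural place for the default. A hedged sketch; the `$body` variable is an assumption, and note the form's placeholder value should also count as "not entered":

    ```php
    <?php
    // Hedged sketch: default the date when the field is blank or still holds
    // the placeholder text from the form ("Date Of Wedding").
    $date = isset($_POST['datepicker']) ? trim($_POST['datepicker']) : '';
    if ($date === '' || $date === 'Date Of Wedding') {
        $date = 'Not Entered';
    }
    $body .= "Date: " . $date . "\n";   // $body is the assumed mail-body string
    ```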

    Read the article

  • Opening of the spWeb.ContentTypes gives SOAP Exception 0x80004004

    - by mdi
    Hi everybody! I have code that goes through the SharePoint content types and changes the display names of the fields that need it. On my local server everything works fine, but on the client's side it gives me an error:

    ```text
    Microsoft.SharePoint.SPException: Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))
    ---> System.Runtime.InteropServices.COMException (0x80004004): Operation aborted (Exception from HRESULT: 0x80004004 (E_ABORT))
       at Microsoft.SharePoint.Library.SPRequestInternalClass.OpenWebInternal(String bstrUrl, Guid& pguidID, String& pbstrRequestAccessEmail, UInt32& pwebVersion, String& pbstrServerRelativeUrl, UInt32& pnLanguage, UInt32& pnLocale, String& pbstrDefaultTheme, String& pbstrDefaultThemeCSSUrl, String& pbstrAlternateCSSUrl, String& pbstrCustomizedCssFileList, String& pbstrCustomJSUrl, String& pbstrAlternateHeaderUrl, String& pbstrMasterUrl, String& pbstrCustomMasterUrl, String& pbstrSiteLogoUrl, String& pbstrSiteLogoDescription, Object& pvarUser, Boolean& pvarIsAuditor, Int32& plSiteFlags)
       at Microsoft.SharePoint.Library.SPRequest.OpenWebInternal(String bstrUrl, Guid& pguidID, String& pbstrRequestAccessEmail, UInt32& pwebVersion, String& pbstrServerRelativeUrl, UInt32& pnLanguage, UInt32& pnLocale, String& pbstrDefaultTheme, String& pbstrDefaultThemeCSSUrl, String& pbstrAlternateCSSUrl, String& pbstrCustomizedCssFileList, String& pbstrCustomJSUrl, String& pbstrAlternateHeaderUrl, String& pbstrMasterUrl, String& pbstrCustomMasterUrl, String& pbstrSiteLogoUrl, String& pbstrSiteLogoDescription, Object& pvarUser, Boolean& pvarIsAuditor, Int32& plSiteFlags)
       --- End of inner exception stack trace ---
       at Microsoft.SharePoint.Library.SPRequest.OpenWebInternal(...)
       at Microsoft.SharePoint.SPWeb.InitWebPublic()
       at Microsoft.SharePoint.SPWeb.get_ServerRelativeUrl()
       at Microsoft.SharePoint.SPWeb.get_Url()
       at Microsoft.SharePoint.SPContentTypeCollection.FetchCollection()
       at Microsoft.SharePoint.SPContentTypeCollection..ctor(SPWeb web, Boolean bAll)
       at Microsoft.SharePoint.SPWeb.get_ContentTypes()
    ```

    The code is below:

    ```csharp
    SPWebApplication webApp = SPWebService.ContentService.WebApplications[someGuid];
    foreach (SPSite spSite in webApp.Sites)
    {
        using (SPWeb spWeb = spSite.RootWeb)
        {
            try
            {
                foreach (SPContentType spContentType in spWeb.ContentTypes)
                {
                    ...
                }
            }
            ...
        }
    }
    ```

    Could anybody provide me with a workaround, or with the reason for the problem?
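    Two things worth checking here (hedged suggestions, not a confirmed diagnosis): `SPSite.RootWeb` is generally not meant to be disposed via `using`, and an E_ABORT from OpenWebInternal on someone else's farm frequently comes down to the executing account's permissions. A sketch that addresses both, using the standard SharePoint API (`siteUrl` is an illustrative variable):

    ```csharp
    // Hedged sketch: run the loop under elevated privileges and open/dispose
    // the web ourselves instead of wrapping RootWeb in a using block.
    SPSecurity.RunWithElevatedPrivileges(delegate()
    {
        using (SPSite spSite = new SPSite(siteUrl))   // siteUrl is illustrative
        using (SPWeb spWeb = spSite.OpenWeb())
        {
            foreach (SPContentType spContentType in spWeb.ContentTypes)
            {
                // ... adjust field display names here ...
            }
        }
    });
    ```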

    Read the article

  • Perl Script to search and replace in .SQL query file with user inputs

    - by T.Mount
    I have a .SQL file containing a large number of queries. They are being run against a database containing data for multiple states over multiple years. The machine I am running this on can only handle running the queries for one state, in one year, at a time. I am trying to create a Perl script that takes user input for the state abbreviation, the state id number, and the year. It then creates a directory for that state and year. Then it opens the "base" .SQL file, searches and replaces the base state id and year with the user input, and saves this new .SQL file to the created directory.

    The current script I have (below) stops at open(IN,'<$infile') with "Can't open [filename]". It seems that it is having difficulty finding or opening the .SQL file. I have quadruple-checked to make sure the paths are correct, and I have even tried replacing the $path with an absolute path for the base file. If it was having trouble with creating the new file I'd have more direction, but since it can't find/open the base file I do not know how to proceed.

    ```perl
    #!/usr/local/bin/perl
    use Cwd;
    $path = getcwd();

    #Cleans up the path
    $path =~ s/\\/\//sg;

    #User inputs
    print "What is the 2 letter state abbreviation for the state? Ex. 'GA'\n";
    $stlet = <>;
    print "What is the 2 digit state abbreviation for the state? Ex. '13'\n";
    $stdig = <>;
    print "What four-digit year are you doing the calculations for? Ex. '2008'\n";
    $year = <>;
    chomp $stlet;
    chomp $stdig;
    chomp $year;

    #Creates the directory
    mkdir($stlet);
    $new = $path."\/".$stlet;
    mkdir("$new/$year");

    $infile = '$path/Base/TABLE_1-26.sql';
    $outfile = '$path/$stlet/$year/TABLE_1-26.sql';
    open(IN,'<$infile') or die "Can't open $infile: $!\n";
    open(OUT,">$infile2") or die "Can't open $outfile: $!\n";
    print "Working...";
    while (my $search = <IN>) {
        chomp $search;
        $search =~ s/WHERE pop.grp = 132008/WHERE pop.grp = $stdig$year/g;
        print OUT "$search\n";
    }
    close(IN);
    close(OUT);
    ```

    I know I also probably need to tweak the regular expression some, but I'm trying to take things one at a time. This is my first Perl script, and I haven't really been able to find anything that handles .SQL files like this that I can understand. Thank you!
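    The failure is almost certainly quoting: in Perl, single-quoted strings don't interpolate, so `$infile` literally contains the text `$path/Base/TABLE_1-26.sql`, and `open(IN,'<$infile')` looks for a file whose name starts with a dollar sign. There is also a stray `$infile2` where `$outfile` was meant. A hedged sketch of the corrected lines:

    ```perl
    # Hedged sketch: double quotes interpolate the variables, and the
    # three-argument open keeps the mode out of the filename string.
    my $infile  = "$path/Base/TABLE_1-26.sql";
    my $outfile = "$path/$stlet/$year/TABLE_1-26.sql";
    open(IN,  '<', $infile)  or die "Can't open $infile: $!\n";
    open(OUT, '>', $outfile) or die "Can't open $outfile: $!\n";
    ```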

    Read the article

  • "Undefined reference to"

    - by user1332364
    I know that there are a lot of questions somewhat related to this one, but their answers are a bit hard for me to make sense of. I'm receiving the following error for a few different lines of code:

    ```text
    C:\Users\Jeff\AppData\Local\Temp\ccAixtmT.o:football.cpp:(.text+0x6f0): undefined reference to `Player::set_values(int, std::string, float)'
    ```

    From these blocks of code:

    ```cpp
    class Player {
        int playerNum;
        string playerPos;
        float playerRank;
    public:
        void set_values(int, string, float);
        float get_rank(){ return playerRank; };
        bool operator == (const Player &p1/*, const Player &p2*/) const {
            if(&p1.playerNum == &playerNum &&
               &p1.playerPos == &playerPos &&
               &p1.playerRank == &playerRank)
                return true;
            else
                return false;
        };
    };
    ```

    And this is the main function referencing the class:

    ```cpp
    int main() {
        ifstream infile;
        infile.open("input.txt", ifstream::in);
        int numTeams;
        string command;
        while(!infile.fail() && !infile.eof()){
            infile >> numTeams;
            string name;
            Player p;
            int playNum;
            string playPos;
            float playRank;
            Player all[11];
            float ranks[11];
            Team allTeams[numTeams];
            for(int i=0; i<numTeams; i++){
                infile >> name;
                for(int j=0; j<11; j++){
                    infile >> playNum;
                    infile >> playPos;
                    infile >> playRank;
                    if(playPos == "QB") p.set_values(playNum, playPos, (playRank*2.0));
                    else if(playPos == "RB") p.set_values(playNum, playPos, (playRank*1.5));
                    else if(playPos == "WR") p.set_values(playNum, playPos, (playRank/1.8));
                    else if(playPos == "TE") p.set_values(playNum, playPos, (playRank*1.1));
                    else if(playPos == "GD") p.set_values(playNum, playPos, (playRank/2.0));
                    else if(playPos == "TC") p.set_values(playNum, playPos, (playRank/2.2));
                    else if(playPos == "CR") p.set_values(playNum, playPos, (playRank/1.2));
                    all[j] = p;
                    allTeams[i].set_values(all, name);
                }
            }
            infile >> command;
            if (command == "play"){
                int t1;
                int t2;
                infile >> t1;
                infile >> t2;
                play(allTeams[t1], allTeams[t2]);
            } else {
                int t1; int p1; int t2; int p2;
                swap(allTeams[t1], allTeams[t1].get_player(p1), allTeams[t2], allTeams[t2].get_player(p2));
            }
        }
    }
    ```
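    An "undefined reference" from the linker means a function was declared but never defined in any of the objects being linked. Here `set_values` is declared inside the class but no body for it appears anywhere. A hedged sketch of the missing definition (parameter names are illustrative):

    ```cpp
    // Hedged sketch: an out-of-class definition satisfies the linker.
    void Player::set_values(int num, string pos, float rank) {
        playerNum  = num;
        playerPos  = pos;
        playerRank = rank;
    }
    ```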

    Read the article

  • How can I make three partials into just one in rails where the :collection is the same?

    - by Angela
    I have three partials that I'd like to consolidate into one. They share the same collection, but each gets passed its own :local variable. Those variables are used for specific Models, so as a result I have three different calls to the partial and three different partials. Here's the repetitive code:

    ```erb
    <% for email in campaign.emails %>
      <h4><%= link_to email.title, email %> <%= email.days %> days</h4>
      <% @contacts = campaign.contacts.find(:all, :order => "date_entered ASC") %> <!--contacts collection-->
      <!-- render the information for each contact -->
      <%= render :partial => "contact_email", :collection => @contacts, :locals => {:email => email} %>
    <% end %>
    ```

    Calls in this Campaign:

    ```erb
    <% for call in campaign.calls %>
      <h4><%= link_to call.title, call %> <%= call.days %> days</h4>
      <% @contacts = campaign.contacts.find(:all, :order => "date_entered ASC") %> <!--contacts collection-->
      <!-- render the information for each contact -->
      <%= render :partial => "contact_call", :collection => @contacts, :locals => {:call => call} %>
    <% end %>
    ```

    Letters in this Campaign:

    ```erb
    <% for letter in campaign.letters %>
      <h4><%= link_to letter.title, letter %> <%= letter.days %> days</h4>
      <% @contacts = campaign.contacts.find(:all, :order => "date_entered ASC") %> <!--contacts collection-->
      <!-- render the information for each contact -->
      <%= render :partial => "contact_letter", :collection => @contacts, :locals => {:letter => letter} %>
    <% end %>
    ```

    An example of one of the partials is as follows:

    ```erb
    <div id="contact_email_partial">
      <% if from_today(contact_email, email.days) < 0 %>
        <% if show_status(contact_email, email) == 'no status' %>
          <p>
            <%= full_name(contact_email) %>
            <% unless contact_email.statuses.empty? %>
              (<%= contact_email.statuses.find(:last).status %>)
            <% end %>
            is <%= from_today(contact_email, email.days).abs %> days overdue:
            <%= do_event(contact_email, email) %>
            <%= link_to_remote "Skip Email Remote",
                  :url => skip_contact_email_url(contact_email, email),
                  :update => "update-area-#{contact_email.id}-#{email.id}" %>
            <span id='update-area-<%= "#{contact_email.id}-#{email.id}" %>'> </span>
        <% end %>
      <% end %>
    </div>
    ```
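    One way to collapse the three is to pass each item under a generic local name plus a type string, so a single partial can build its helper calls and DOM ids from both. A hedged sketch in the Rails 2-era syntax the question uses; the partial name `contact_item` and the local names are illustrative:

    ```erb
    <% [['email', campaign.emails], ['call', campaign.calls], ['letter', campaign.letters]].each do |kind, items| %>
      <% items.each do |item| %>
        <h4><%= link_to item.title, item %> <%= item.days %> days</h4>
        <% @contacts = campaign.contacts.find(:all, :order => "date_entered ASC") %>
        <%= render :partial => "contact_item", :collection => @contacts,
                   :locals => {:item => item, :kind => kind} %>
      <% end %>
    <% end %>
    ```

    Inside the single partial, `item` stands in for `email`/`call`/`letter`, and `kind` drives anything type-specific, for example the wrapper id `contact_#{kind}_partial` or which url helper to call.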

    Read the article

  • Opening a file with a variable as name and checking for undefined values

    - by Harm De Weirdt
    I'm having some problems writing data into a file using Perl.

    ```perl
    sub startNewOrder{
        my $name = makeUniqueFileName();
        open (ORDER, ">$name.txt") or die "can't open file: $!\n";

        format ORDER_TOP =
    PRODUCT<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<CODE<<<<<<<<AANTAL<<<<EENHEIDSPRIJS<<<<<<TOTAAL<<<<<<<
    .
        format ORDER =
    @<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<< @<<<<<<<< @<<<< @<<<<<< @<<<<<
    $title, $code, $amount, $price, $total
    .
        close (ORDER);
    }
    ```

    This is the sub I use to make the file. (I translated most of it.) The makeUniqueFileName method makes a file name based upon the current time ("minuteshoursdayOrder"). The problem now is that I have to write to this file in another sub:

    ```perl
    sub addToOrder{
        print "give productcode:";
        $code = <STDIN>;
        chop $code;
        print "Give amount:";
        $amount = <STDIN>;
        chop $amount;
        if($inventory{$code} eq undef){ #Does the product exist?
            print "This product does not exist";
        }elsif($inventory{$code}[2] < $amount && !defined($inventaris{$code}[2]) ){ #Is there enough in the inventory?
            print "There is not enough in stock"
        }else{
            $inventory{$code}[2] -= $amount;
            #write in order file
            open (ORDER ">>$naam.txt") or die "can't open file: $!\n";
            $title = $inventory{$code}[0];
            $code = $code;
            $amount = $inventory{$code}[2];
            $price = $inventory{$code}[1];
            $total = $inventory{$code}[1];
            write;
            close(ORDER);
        }
    }
    ```

    %inventory is a hashtable that has the product code as key and an array with the title, price and amount as value. There are two problems here: when I enter an invalid product number, I still have to enter an amount, even though my code says it should print the error directly after checking if there is a product with the given code. The second problem is that the writing doesn't seem to work; it always gives a "No such file or directory" error. Is there a way to open the ORDER file I made in the first sub without having to make $name not local? Or just a way to write to this file? I really don't know how to start here. I can't really find much info on writing to a file that has been closed before, and in a different sub. Any help is appreciated, Harm
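    On the second problem: `my $name` is lexical to `startNewOrder`, so the `$naam` used in `addToOrder` is empty (and that `open` is also missing a comma after `ORDER`), which is why the append fails. On the first problem: both `<STDIN>` reads happen before any validation runs, so the amount prompt always appears; checking the code right after reading it fixes that. A hedged sketch of one way to share the file name (a package variable; returning it from the sub and passing it in would work too):

    ```perl
    # Hedged sketch: keep the order file name where both subs can see it.
    our $order_name;

    sub startNewOrder {
        $order_name = makeUniqueFileName();
        open(ORDER, '>', "$order_name.txt") or die "can't open file: $!\n";
        # ... formats as before ...
        close(ORDER);
    }

    sub addToOrder {
        print "give productcode:";
        my $code = <STDIN>;
        chomp $code;
        unless (exists $inventory{$code}) {   # validate before asking for an amount
            print "This product does not exist\n";
            return;
        }
        print "Give amount:";
        my $amount = <STDIN>;
        chomp $amount;
        open(ORDER, '>>', "$order_name.txt") or die "can't open file: $!\n";
        # ... set $title/$code/$amount/$price/$total and write as before ...
        close(ORDER);
    }
    ```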

    Read the article

  • jQuery .find() doesn't return data in IE but does in Firefox and Chrome

    - by Steve Hiner
    I helped a friend out by doing a little web work for him. Part of what he needed was an easy way to change a couple pieces of text on his site. Rather than having him edit the HTML, I decided to provide an XML file with the messages in it, and I used jQuery to pull them out of the file and insert them into the page. It works great... in Firefox and Chrome; not so great in IE7. I was hoping one of you could tell me why. I did a fair bit of googling but couldn't find what I'm looking for. Here's the XML:

    ```xml
    <?xml version="1.0" encoding="utf-8" ?>
    <messages>
      <message type="HeaderMessage">
        This message is put up in the header area.
      </message>
      <message type="FooterMessage">
        This message is put in the lower left cell.
      </message>
    </messages>
    ```

    And here's my jQuery call:

    ```javascript
    <script type="text/javascript">
    $(document).ready(function() {
        $.get('messages.xml', function(d) {
            //I have confirmed that it gets to here in IE
            //and it has the xml loaded.
            //alert(d); gives me a message box with the xml text in it
            //alert($(d).find('message')); gives me "[object Object]"
            //alert($(d).find('message')[0]); gives me "undefined"
            //alert($(d).find('message').Length); gives me "undefined"
            $(d).find('message').each(function() {
                //But it never gets to here in IE
                var $msg = $(this);
                var type = $msg.attr("type");
                var message = $msg.text();
                switch (type) {
                    case "HeaderMessage":
                        $("#HeaderMessageDiv").html(message);
                        break;
                    case "FooterMessage":
                        $("#footermessagecell").html(message);
                        break;
                    default:
                }
            });
        });
    });
    </script>
    ```

    Is there something I need to do differently in IE? Based on the message box with [object Object], I assumed that .find was working in IE, but since I can't index into the array with [0] or check its Length, I'm guessing that means .find isn't returning any results. Any reason why that would work perfectly in Firefox and Chrome but fail in IE? I'm a total newbie with jQuery, so I hope I haven't just done something stupid. The code above was scraped out of a forum and modified to suit my needs. Since jQuery is cross-platform I figured I wouldn't have to deal with this mess.

    Edit: I've found that if I load the page in Visual Studio 2008 and run it, then it will work in IE; it turns out it always works when run through the development web server. Now I'm thinking IE just doesn't like doing .find in XML loaded off of my local drive, so maybe when this is on an actual web server it will work OK. I have confirmed that it works fine when browsed from a web server. Must be a peculiarity with IE. I'm guessing it's because the web server sets the mime type for the xml data file transfer, and without that IE doesn't parse the xml correctly.
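    The self-diagnosis at the end matches the classic fix: tell jQuery the response is XML instead of relying on the server's Content-Type header. A hedged sketch using the standard `$.ajax` options, where `dataType: 'xml'` forces XML parsing:

    ```javascript
    // Hedged sketch: equivalent of the $.get call above, but with the response
    // explicitly parsed as XML even when the mime type is missing or wrong.
    $.ajax({
        url: 'messages.xml',
        dataType: 'xml',
        success: function (d) {
            $(d).find('message').each(function () {
                // same handling as above
            });
        }
    });
    ```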

    Read the article

  • .NET WebRequest.PreAuthenticate not quite what it sounds like

    - by Rick Strahl
    I’ve run into this problem a few times now: how to pre-authenticate .NET WebRequest calls doing an HTTP call to the server – essentially send authentication credentials on the very first request instead of waiting for a server challenge first? At first glance this sounds like it should be easy: the .NET WebRequest object has a PreAuthenticate property which sounds like it should force authentication credentials to be sent on the first request. Looking at the MSDN example certainly looks like it does: http://msdn.microsoft.com/en-us/library/system.net.webrequest.preauthenticate.aspx

    Unfortunately the MSDN sample is wrong. As is the text of the Help topic, which incorrectly leads you to believe that PreAuthenticate… wait for it… pre-authenticates. But it doesn’t allow you to set credentials that are sent on the first request. What this property actually does is quite different: it doesn’t send credentials on the first request but rather caches the credentials ONCE you have already authenticated once.

    HTTP authentication is based on a challenge-response mechanism, typically where the client sends a request and the server responds with a 401 header requesting authentication. So the client sends a request like this:

    ```
    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    ```

    and the server responds with:

    ```
    HTTP/1.1 401 Unauthorized
    Cache-Control: private
    Content-Type: text/html; charset=utf-8
    Server: Microsoft-IIS/7.5
    WWW-Authenticate: basic realm=rasnote"
    X-AspNet-Version: 2.0.50727
    WWW-Authenticate: Negotiate
    WWW-Authenticate: NTLM
    WWW-Authenticate: Basic realm="rasnote"
    X-Powered-By: ASP.NET
    Date: Tue, 27 Oct 2009 00:58:20 GMT
    Content-Length: 5163
    ```

    plus the actual error message body. The client then is responsible for re-sending the current request with the authentication token information provided (in this case Basic Auth):

    ```
    GET /wconnect/admin/wc.wc?_maintain~ShowStatus HTTP/1.1
    Host: rasnote
    User-Agent: Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)
    Accept: text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8
    Accept-Language: en,de;q=0.7,en-us;q=0.3
    Accept-Encoding: gzip,deflate
    Accept-Charset: ISO-8859-1,utf-8;q=0.7,*;q=0.7
    Keep-Alive: 300
    Connection: keep-alive
    Cookie: TimeTrakker=2HJ1998WH06696; WebLogCommentUser=Rick Strahl|http://www.west-wind.com/|[email protected]; WebStoreUser=b8bd0ed9
    Authorization: Basic cgsf12aDpkc2ZhZG1zMA==
    ```

    Once the authorization info is sent, the server responds with the actual page result. Now, if you use WebRequest (or WebClient) the default behavior is to re-authenticate on every request that requires authorization. This means if you look in Fiddler or some other HTTP client proxy that captures requests, you'll see each request re-authenticate: for two requests fired back to back, you can see the 401 challenge and the 200 response for both. If you watch this same conversation between a browser and a server, you'll notice that the first 401 is also there, but the subsequent 401 requests are not present.
    WebRequest.PreAuthenticate

    And this is precisely what the WebRequest.PreAuthenticate property does: it’s a caching mechanism that caches the connection credentials for a given domain in the active process and resends them on subsequent requests. It does not send credentials on the first request, but it will cache credentials on subsequent requests after authentication has succeeded:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rick", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential("rstrahl", "secret", "rasnote");
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    ```

    which results in the desired sequence, where only the first request doesn’t send credentials. This is quite useful as it saves quite a few round trips to the server – basically it saves one auth request for every authenticated request you make. In most scenarios I think you’d want to send credentials this way, but one downside is that there’s no way to log out the client. Since the client always sends the credentials once authenticated, only an explicit operation ON THE SERVER can undo the credentials by forcing another login explicitly (i.e. re-challenging with a forced 401 request).

    Forcing Basic Authentication Credentials on the first Request

    On a few occasions I’ve needed to send credentials on a first request – mainly to some oddball third-party web services (why you’d want to use Basic Auth on a web service is beyond me – don’t ask, but it’s not uncommon in my experience). This is true of certain services that are using Basic Authentication (especially some Apache-based web services) and REQUIRE that the authentication is sent right from the first request. No challenge first. Ugly, but there it is. Now, the following works only with Basic Authentication, because it’s pretty straightforward to create the Basic Authorization ‘token’ in code: it’s just an unencrypted encoding of the user name and password into base64. As you might guess, this is totally insecure and should only be used over HTTPS/SSL connections (I’m not in this example so I can capture the Fiddler trace, and my local machine doesn’t have a cert installed, but for production apps ALWAYS use SSL with Basic Auth).
    The idea is that you simply add the required Authorization header to the request on your own, along with the authorization string that encodes the username and password:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;

    string user = "rick";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.AuthenticationLevel = System.Net.Security.AuthenticationLevel.MutualAuthRequested;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();
    ```

    This works and causes the request to immediately send auth information to the server. However, this only works with Basic Auth, because you can actually create the authentication credentials easily on the client – they’re essentially clear text. The same doesn’t work for Windows or Digest authentication, since you can’t easily create the authentication token on the client and send it to the server. Another issue with this approach is that PreAuthenticate has no effect when you manually force the authentication: as far as WebRequest is concerned it never sent the authentication information, so it’s not actually caching the value any longer. If you run three requests in a row like this:

    ```csharp
    string url = "http://rasnote/wconnect/admin/wc.wc?_maintain~ShowStatus";

    HttpWebRequest req = HttpWebRequest.Create(url) as HttpWebRequest;
    string user = "ricks";
    string pwd = "secret";
    string domain = "www.west-wind.com";

    string auth = "Basic " + Convert.ToBase64String(System.Text.Encoding.Default.GetBytes(user + ":" + pwd));
    req.PreAuthenticate = true;
    req.Headers.Add("Authorization", auth);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    WebResponse resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    resp.Close();

    req = HttpWebRequest.Create(url) as HttpWebRequest;
    req.PreAuthenticate = true;
    req.Credentials = new NetworkCredential(user, pwd, domain);
    req.UserAgent = ": Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.3) Gecko/20090824 Firefox/3.5.3 (.NET CLR 4.0.20506)";
    resp = req.GetResponse();
    ```

    you’ll find the first request (the one we explicitly add the header to) authenticates, the second challenges, and any subsequent ones then use the PreAuthenticate credential caching. In effect you’ll end up with one extra 401 request in this scenario, which is still better than 401 challenges on each request.

    Getting Access to WebRequest in Classic .NET Web Service Clients

    If you’re running a classic .NET web service client (non-WCF), one issue with the above is how to get access to the WebRequest to actually add the custom headers that do the custom authentication described above.
    One easy way is to implement a partial class that allows you to add headers, with something like this:

    ```csharp
    public partial class TaxService
    {
        protected NameValueCollection Headers = new NameValueCollection();

        public void AddHttpHeader(string key, string value)
        {
            this.Headers.Add(key, value);
        }

        public void ClearHttpHeaders()
        {
            this.Headers.Clear();
        }

        protected override WebRequest GetWebRequest(Uri uri)
        {
            HttpWebRequest request = (HttpWebRequest) base.GetWebRequest(uri);
            request.Headers.Add(this.Headers);
            return request;
        }
    }
    ```

    where TaxService is the name of the .NET-generated proxy class. In code you can then call AddHttpHeader() anywhere to add additional headers, which are sent as part of the GetWebRequest override. Nice and simple once you know where to hook it. For WCF there’s a bit more work involved: you create a message extension as described here: http://weblogs.asp.net/avnerk/archive/2006/04/26/Adding-custom-headers-to-every-WCF-call-_2D00_-a-solution.aspx. FWIW, I think that HTTP header manipulation should be readily available on any HTTP-based web service client DIRECTLY, without having to subclass or implement a special interface hook. But alas, a little extra work is required in .NET to make this happen.

    Not a Common Problem, but when it happens…

    This has been one of those issues that is really rare, but it’s bitten me on several occasions when dealing with oddball web services – a couple of times in my own work interacting with various web services, and a few times on customer projects that required interaction with credentials-first services. Since the servers determine the protocol, we don’t have a choice but to follow the protocol. Lovely following standards that implementers decide to ignore, isn’t it? :-}

    © Rick Strahl, West Wind Technologies, 2005-2010. Posted in .NET, CSharp, Web Services

    Read the article

  • How to reduce iOS AVPlayer start delay

    - by Bernt Habermeier
    Note, for the below question: all assets are local on the device -- no network streaming is taking place. The videos contain audio tracks.

    I'm working on an iOS application that requires playing video files with minimum delay to start the video clip in question. Unfortunately we do not know which specific video clip is next until we actually need to start it. Specifically: when one video clip is playing, we will know what the next set of (roughly) 10 video clips is, but we don't know which one exactly until it comes time to 'immediately' play the next clip.

    What I've done to look at actual start delays is to call addBoundaryTimeObserverForTimes on the video player, with a time period of one millisecond, to see when the video actually started to play, and I take the difference of that time stamp with the first place in the code that indicates which asset to start playing. From what I've seen thus far, I have found that the combination of AVAsset loading, then creating an AVPlayerItem from that once it's ready, then waiting for AVPlayerStatusReadyToPlay before I call play, tends to take between 1 and 3 seconds to start the clip. I've since switched to what I think is roughly equivalent: calling [AVPlayerItem playerItemWithURL:] and waiting for AVPlayerItemStatusReadyToPlay to play. Roughly the same performance.

    One thing I'm observing is that the first AVPlayer item load is slower than the rest. It seems one idea is to pre-flight the AVPlayer with a short / empty asset before trying to play the first video; that might be good general practice. [http://stackoverflow.com/questions/900461/slow-start-for-avaudioplayer-the-first-time-a-sound-is-played]

    I'd love to get the video start times down as much as possible, and I have some ideas of things to experiment with, but would like some guidance from anyone who might be able to help.

    Update: idea 7, below, as implemented yields switching times of around 500 ms. This is an improvement, but it'd be nice to get this even faster.

    Idea 1: Use N AVPlayers (won't work)
    Use ~10 AVPlayer objects, start-and-pause all ~10 clips, and once we know which one we really need, switch to and un-pause the correct AVPlayer, and start all over again for the next cycle. I don't think this works, because I've read there is roughly a limit of 4 active AVPlayers in iOS. There was someone asking about this on StackOverflow who found out about the 4 AVPlayer limit: fast-switching-between-videos-using-avfoundation

    Idea 2: Use AVQueuePlayer (won't work)
    I don't believe that shoving 10 AVPlayerItems into an AVQueuePlayer would pre-load them all for seamless start. AVQueuePlayer is a queue, and I think it really only makes the next video in the queue ready for immediate playback. I don't know which one out of ~10 videos we want to play back until it's time to start that one. ios-avplayer-video-preloading

    Idea 3: Load, play, and retain AVPlayerItems in the background (not 100% sure yet -- but not looking good)
    I'm looking at whether there is any benefit to loading and playing the first second of each video clip in the background (suppressing video and audio output), keeping a reference to each AVPlayerItem, and, when we know which item needs to be played for real, swapping that one in and swapping the background AVPlayer with the active one. Rinse and repeat. The theory is that a recently played AVPlayer/AVPlayerItem may still hold some prepared resources which would make subsequent playback faster. So far, I have not seen benefits from this, but I might not have the AVPlayerLayer set up correctly for the background. I doubt this will really improve things from what I've seen.

    Idea 4: Use a different file format -- maybe one that is faster to load?
    I'm currently using .m4v (video-MPEG4) in H.264 format. I have not played around with other formats, but it may well be that some formats are faster to decode / get ready than others. Possibly still video-MPEG4 but with a different codec, or maybe QuickTime? Maybe a lossless video format where decoding / setup is faster?

    Idea 5: Combination of a lossless video format + AVQueuePlayer
    If there is a video format that is fast to load but where the file size may be insane, one idea might be to pre-prepare the first 10 seconds of each video clip with a version that is bloated but faster to load, backed by an asset encoded in H.264. Use an AVQueuePlayer: add the first 10 seconds in the uncompressed file format, and follow that with one in H.264, which gets up to 10 seconds of prepare/preload time. So I'd get 'the best' of both worlds: fast start times, but also the benefits of a more compact format.

    Idea 6: Use a non-standard AVPlayer / write my own / use someone else's
    Given my needs, maybe I can't use AVPlayer but have to resort to AVAssetReader: decode the first few seconds (possibly writing a raw file to disk), and when it comes to playback, make use of the raw format to play it back fast. Seems like a huge project to me, and if I go about it in a naive way it's unclear / unlikely to even work better. Each decoded and uncompressed video frame is 2.25 MB. Naively speaking -- if we go with ~30 fps for the video, I'd end up with a ~60 MB/s read-from-disk requirement, which is probably impossible / pushing it. Obviously we'd have to do some level of image compression (perhaps native OpenGL ES compression formats via PVRTC)... but that's kind of crazy. Maybe there is a library out there that I can use?

    Idea 7: Combine everything into a single movie asset, and seekToTime
    One idea that might be easier than some of the above is to combine everything into a single movie and use seekToTime. The thing is that we'd be jumping all around the place -- essentially random access into the movie. I think this may actually work out okay: avplayer-movie-playing-lag-in-ios5

    Which approach do you think would be best? So far, I've not made that much progress in terms of reducing the lag.
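    Since idea 7 is the one that got to ~500 ms, here is a hedged sketch of its core (variable names are illustrative; `seekToTime:toleranceBefore:toleranceAfter:completionHandler:` is the standard AVPlayer API, available from iOS 5, for a precise jump):

    ```objc
    // Hedged sketch: jump to a clip's offset inside the combined movie and
    // start playback only once the seek has actually completed.
    CMTime target = CMTimeMakeWithSeconds(clipStartSeconds, 600); // 600 = common video timescale
    [player seekToTime:target
       toleranceBefore:kCMTimeZero
        toleranceAfter:kCMTimeZero
     completionHandler:^(BOOL finished) {
         if (finished) {
             [player play];
         }
     }];
    ```

    The zero tolerances force a frame-accurate seek; allowing a small tolerance can land on a nearby keyframe and start faster, which may be an acceptable trade-off here.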

    Read the article

  • Quick guide to Oracle IRM 11g: Server configuration

    - by Simon Thorpe
    Quick guide to Oracle IRM 11g index

    Welcome to the second article in this quick guide to Oracle IRM 11g. Hopefully you've just finished the first article, which takes you through deploying the software onto a Linux server. This article walks you through the configuration of this new service; it contains a subset of information from the official documentation and is focused on installing the server on Oracle Enterprise Linux. If you are planning to deploy on a non-Linux platform, you will need to reference the documentation for platform-specific information.

    Contents: Introduction | Create IRM WebLogic Domain | Starting the Admin Server and initial configuration

    Introduction

    In the previous article the database was prepared, the WebLogic Application Server installed, and the files required for an IRM server installed. But we don't actually have a configured system yet. We need to create a WebLogic domain in which the IRM server will run, then configure some of the settings and cryptography so that we can create a context and be ready to seal some content and test that it all works. This article doesn't cover the configuration of SSL communication from client to server; that is quite a big topic and a separate article has been dedicated to it. In these articles I use the hostname irm.company.internal to reference the IRM server, and later on use the hostname irm.company.com in reference to the public-facing service.

    Create IRM WebLogic Domain

    The first step is creating the WebLogic domain. In a console, switch to the newly created IRM installation folder as shown below, and run the domain configuration wizard:

    ```console
    [oracle@irm /]$ cd /oracle/middleware/Oracle_IRM/common/bin
    [oracle@irm bin]$ ./config.sh
    ```

    The first thing the wizard will ask is whether you wish to create a new domain or extend an existing one. This guide is creating a standalone system, so you should select to create a new domain. The next step is to choose which technologies from the Oracle ECM Suite you wish this domain to host. You are only interested in selecting the option "Oracle Information Rights Management". When you select this check box you will notice that it also selects "Oracle Enterprise Manager" and "Oracle JRF", as these are dependencies of the IRM server. You then need to specify where you wish to place the domain files. I usually just change the domain name from base_domain to irm_domain and leave the others at their defaults.

    The domain will initially have a single user, called "weblogic" by default. I usually change this account name to "sysadmin" or "administrator", but in this guide let's just accept the default. With respect to the next dialog, again for eval or dev purposes, leave the server startup mode as development. The JDK should also be automatically detected. We now need to provide details of the database. This guide is using the Oracle 11gR2 database, and the settings I used can be seen in the image to the right. There is a lot of configuration that can now be done for the admin server, any managed servers, and where the deployments reside. In this guide I am leaving all of these at their defaults, so do not check any of the boxes. However, I will later detail on this blog how you can go back and set up things such as automated startup of an IRM server, which requires changes to these default settings. But for now, let's leave it all alone and just click next. Now we are ready to install.
    Note that from this dialog you can scroll the left window and see that two servers will be created from the defaults: the AdminServer, which is where you modify settings for the WebLogic Server and which also hosts the Oracle Enterprise Manager for IRM (allowing you to monitor IRM service performance and make service-related settings, which we do shortly below), and IRM_server1, which hosts the actual IRM services themselves. So go right ahead and hit create; the process is pretty quick, usually under 10 minutes. When the domain creation ends, it will give you the URL to the admin server. It's worth noting this down; the URL is usually:

    http://irm.company.internal:7001

    Starting the Admin Server and initial configuration

    The first thing to do is start the WebLogic Admin server and review the initial IRM server settings. In this guide we are going to run the Admin server and IRM server in console windows; in another article I will discuss running these as background services. So for now, start a console and run the Admin server by doing the following:

    ```console
    cd /oracle/middleware/user_projects/domains/irm_domain/
    ./startWebLogic.sh
    ```

    Wait for the server to start; you are looking for the following line to be reported in the console window:

    ```
    <BEA-00360><Server started in RUNNING mode>
    ```

    The first step is configuring the IRM service via Enterprise Manager. Now that the Admin server is running, you can point a browser at http://irm.company.internal:7001/em. Log in with the username and password you supplied when you created the domain. In Enterprise Manager the IRM service administrator is able to make server-wide configuration, but finding the pages with these settings can be a bit of a challenge. After logging in, on the left you'll see a tree containing elements of the Enterprise Manager farm, Farm_irm_domain. Open up Content Management, then Information Rights Management, and finally select the IRM node. On the right, select the IRM menu item, navigate to the Administration section, and you'll find four options; for now, we are just going to look at General Settings.

    The General Settings page allows you to set the cryptographic algorithms used for protecting sealed content. Unless you have a burning need to increase the key lengths, or you need to comply with a regulation or government mandate, AES192 is a good start. You can change this later without worry. The most important setting we need to make here is the Server URL. In this blog article I go over why this URL is so important: basically, every single piece of content you protect with Oracle IRM is going to have this URL embedded in it, so if it's wrong or unresolvable, nobody can open the secured documents. Note that in our environment we have yet to do any SSL configuration of the service. If you intend to build a server without SSL, use http as the protocol instead of https. But I would recommend using SSL, and setting this up is described in the next article. I would also probably up the device count from 1 to 3. This means that any user can retrieve rights to access content on 3 computers at any one time. The default of 1 doesn't really make sense in development, evaluation, or even production environments, and my experience is that 3 is a better number. The next step is to create the keystore for the IRM server.
    When a classification (called a context) is created, Oracle IRM generates a unique set of symmetric keys which are used to secure the content itself. These keys are then encrypted with a set of "wrapper" asymmetric cryptography keys which are stored externally to the server, either in a Java Key Store or an HSM. These keys need to be generated; the following shows my commands and the resulting output:

    ```console
    [oracle@irmsrv ~]$ cd /oracle/middleware/wlserver_10.3/server/bin/
    [oracle@irmsrv bin]$ ./setWLSEnv.sh
    CLASSPATH=/oracle/middleware/patch_wls1033/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/oracle/middleware/patch_ocp353/profiles/default/sys_manifest_classpath/weblogic_patch.jar:/usr/java/jdk1.6.0_18/lib/tools.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic_sp.jar:/oracle/middleware/wlserver_10.3/server/lib/weblogic.jar:/oracle/middleware/modules/features/weblogic.server.modules_10.3.3.0.jar:/oracle/middleware/wlserver_10.3/server/lib/webservices.jar:/oracle/middleware/modules/org.apache.ant_1.7.1/lib/ant-all.jar:/oracle/middleware/modules/net.sf.antcontrib_1.1.0.0_1-0b2/lib/ant-contrib.jar:
    PATH=/oracle/middleware/wlserver_10.3/server/bin:/oracle/middleware/modules/org.apache.ant_1.7.1/bin:/usr/java/jdk1.6.0_18/jre/bin:/usr/java/jdk1.6.0_18/bin:/usr/kerberos/bin:/usr/local/bin:/bin:/usr/bin:/home/oracle/bin
    Your environment has been set.
    [oracle@irmsrv bin]$ cd /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig/
    [oracle@irmsrv fmwconfig]$ keytool -genkeypair -alias oracle.irm.wrap -keyalg RSA -keysize 2048 -keystore irm.jks
    Enter keystore password:
    Re-enter new password:
    What is your first and last name?
      [Unknown]:  Simon Thorpe
    What is the name of your organizational unit?
      [Unknown]:  Oracle
    What is the name of your organization?
      [Unknown]:  Oracle
    What is the name of your City or Locality?
      [Unknown]:  San Francisco
    What is the name of your State or Province?
      [Unknown]:  CA
    What is the two-letter country code for this unit?
      [Unknown]:  US
    Is CN=Simon Thorpe, OU=Oracle, O=Oracle, L=San Francisco, ST=CA, C=US correct?
      [no]:  yes
    Enter key password for <oracle.irm.wrap> (RETURN if same as keystore password):
    ```

    At this point we have an irm.jks in the directory /oracle/middleware/user_projects/domains/irm_domain/config/fmwconfig. The reason we store it here is that this folder is backed up as part of a domain backup. As with any cryptographic technology, DO NOT LOSE THESE KEYS OR THIS KEY STORE. Once you've sealed content against a context, the content keys will be wrapped with these keys; lose them, and you can't get access to any secured content. Pretty important.

    Now that we've got the keys created, we need to go back to the IRM Enterprise Manager and set the location of the key store. Back on the General Settings page in Enterprise Manager, scroll down to Keystore Settings. Leave the type as JKS but change the location to:

    /oracle/Middleware/user_projects/domains/irm_domain/config/fmwconfig/irm.jks

    and hit Apply. The final step with regards to the key store is to tell the server the password for the Java Key Store so that it can be opened and the keys accessed. Once more, fire up a console window and run these commands. You will see "dummy" passed into the commands; this is because the command asks for a username, but in this instance we don't use one, hence the value dummy is passed and it isn't used.
    ```console
    [oracle@irmsrv fmwconfig]$ cd /oracle/middleware/Oracle_IRM/common/bin/
    [oracle@irmsrv bin]$ ./wlst.sh
    ... lots of settings fly by...
    Welcome to WebLogic Server Administration Scripting Shell
    Type help() for help on available commands

    wls:/offline> connect('weblogic','password','t3://irmsrv.us.oracle.com:7001')
    Connecting to t3://irmsrv.us.oracle.com:7001 with userid weblogic ...
    Successfully connected to Admin Server 'AdminServer' that belongs to domain 'irm_domain'.
    Warning: An insecure protocol was used to connect to the server.
    To ensure on-the-wire security, the SSL port or Admin port should be used instead.

    wls:/irm_domain/serverConfig> createCred("IRM","keystore:irm.jks","dummy","password")
    Location changed to domainRuntime tree. This is a read-only tree with DomainMBean as the root.
    For more help, use help(domainRuntime)

    wls:/irm_domain/serverConfig> createCred("IRM","key:irm.jks:oracle.irm.wrap","dummy","password")
    Already in Domain Runtime Tree

    wls:/irm_domain/serverConfig>
    ```

    At last we are ready to fire up the IRM server itself. The domain creation created a managed server called IRM_server1, and we need to start it; use the following commands in a new console window:

    ```console
    cd /oracle/middleware/user_projects/domains/irm_domain/bin/
    ./startManagedWebLogic.sh IRM_server1
    ```

    This will start up the server in the console. Unlike the Admin server, you need to provide the username and password for the service to start: enter your weblogic username and password when prompted. You can change this behavior by putting the password into a boot.properties file; read more about this in the WebLogic Server documentation. Once running, wait until you see the line:

    ```
    <Notice><WebLogicServer><BEA-000360><Server started in RUNNING mode>
    ```

    At this point we can now log in to the Oracle IRM Management Website at the URL:

    http://irm.company.internal:1600/irm_rights/

    The server is configured only for HTTP at the moment; no SSL is involved. We just want to ensure we can get a working system up and running. You should now see a login like the image on the right, and you can log in using your weblogic username and password. The next article in this guide covers adding SSL and then testing your server by actually adding a few users, sealing some content, and opening this content as a user.
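    As mentioned above, the username/password prompt at managed-server startup can be avoided with a boot.properties file. A hedged sketch of the standard WebLogic mechanism; the path follows this guide's domain layout, and WebLogic encrypts the values on first boot:

    ```properties
    # Hedged sketch: place this file at
    # /oracle/middleware/user_projects/domains/irm_domain/servers/IRM_server1/security/boot.properties
    # (create the security directory if it does not exist).
    username=weblogic
    password=yourpassword
    ```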


  • Maintain cookie session in Android

    - by datguywhowanders
    Okay, I have an android application that has a form in it, two EditText, a spinner, and a login button. The user selects the service from the spinner, types in their user name and password, and clicks login. The data is sent via POST, a response is returned, it's handled, a new webview is launched, the html string generated form the response is loaded, and I have the home page of whatever service the user selected. That's all well and good. Now, when the user clicks on a link, the login info can't be found, and the page asks the user to login again. My login session is being dropped somewhere, and I'm not certain how to pass the info from the class that controls the main part of my app to the class that just launches the webview activity. The on click handler from the form login button: private class FormOnClickListener implements View.OnClickListener { public void onClick(View v) { String actionURL, user, pwd, user_field, pwd_field; actionURL = "thePageURL"; user_field = "username"; //this changes based on selections in a spinner pwd_field = "password"; //this changes based on selections in a spinner user = "theUserLogin"; pwd = "theUserPassword"; List<NameValuePair> myList = new ArrayList<NameValuePair>(); myList.add(new BasicNameValuePair(user_field, user)); myList.add(new BasicNameValuePair(pwd_field, pwd)); HttpParams params = new BasicHttpParams(); DefaultHttpClient client = new DefaultHttpClient(params); HttpPost post = new HttpPost(actionURL); HttpResponse response = null; BasicResponseHandler myHandler = new BasicResponseHandler(); String endResult = null; try { post.setEntity(new UrlEncodedFormEntity(myList)); } catch (UnsupportedEncodingException e) { e.printStackTrace(); } try { response = client.execute(post); } catch (ClientProtocolException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } try { endResult = myHandler.handleResponse(response); } catch (HttpResponseException e) { e.printStackTrace(); } catch (IOException e) { e.printStackTrace(); } List cookies = client.getCookieStore().getCookies(); if (!cookies.isEmpty()) { for (int i = 0; i < cookies.size(); i++) { cookie = cookies.get(i); } } Intent myWebViewIntent = new Intent(MsidePortal.this, MyWebView.class); myWebViewIntent.putExtra("htmlString", endResult); myWebViewIntent.putExtra("actionURL", actionURL); startActivity(myWebViewIntent); } } And here is the webview class that handles the response display: public class MyWebView extends android.app.Activity{ private class MyWebViewClient extends WebViewClient { @Override public boolean shouldOverrideUrlLoading(WebView view, String url) { view.loadUrl(url); return true; } } @Override public void onCreate(Bundle savedInstanceState) { super.onCreate(savedInstanceState); setContentView(R.layout.web); MyWebViewClient myClient = new MyWebViewClient(); WebView webview = (WebView)findViewById(R.id.mainwebview); webview.getSettings().setBuiltInZoomControls(true); webview.getSettings().setJavaScriptEnabled(true); webview.setWebViewClient(myClient); Bundle extras = getIntent().getExtras(); if(extras != null) { // Get endResult String htmlString = extras.getString("htmlString"); String actionURL = extras.getString("actionURL"); Cookie sessionCookie = MsidePortal.cookie; CookieSyncManager.createInstance(this); CookieManager cookieManager = CookieManager.getInstance(); if (sessionCookie != null) { cookieManager.removeSessionCookie(); String cookieString = sessionCookie.getName() + "=" + sessionCookie.getValue() + "; domain=" + sessionCookie.getDomain(); 
cookieManager.setCookie(actionURL, cookieString); CookieSyncManager.getInstance().sync(); } webview.loadDataWithBaseURL(actionURL, htmlString, "text/html", "utf-8", actionURL); } } } I've had mixed success implementing that cookie solution. It seems to work for one service I log into that I know keeps the cookies on the server (old, archaic, but it works and they don't want to change it.) The service I'm attempting now requires the user to keep cookies on their local machine, and it does not work with this setup. Any suggestions?
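Update: I'm starting to suspect the loop in my login handler above, which overwrites cookie on every pass and so only ever hands the last cookie in the list to the WebView. If the login response sets more than one cookie (say, a session id plus an auth token), the rest get dropped. I'm considering copying them all across instead; here's the untested sketch I have in mind (same client, actionURL and MsidePortal names as in my code above; imports are android.webkit.CookieManager, android.webkit.CookieSyncManager and org.apache.http.cookie.Cookie):

// Untested sketch - copy every cookie from the HttpClient store into the
// WebView's CookieManager instead of keeping just the last one.
CookieSyncManager.createInstance(MsidePortal.this);
CookieManager cookieManager = CookieManager.getInstance();
cookieManager.setAcceptCookie(true);
cookieManager.removeSessionCookie();
for (Cookie c : client.getCookieStore().getCookies()) {
    String cookieString = c.getName() + "=" + c.getValue()
            + "; domain=" + c.getDomain();
    cookieManager.setCookie(actionURL, cookieString);
}
// Push the cookies down to the persistent store before loading the page
CookieSyncManager.getInstance().sync();

Would that be enough to keep the session alive across link clicks?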


  • SQL Server 2005: Improving performance for thousands of Insert requests. logout-login time = 120ms.

    - by Rad
    Can somebody shed some light on how SQL Server 2005 deals with many requests issued by a client using ADO.NET 2.0? Below is the shortened output of SQL Trace. I can see that connection pooling is working (I believe there is only one connection being pooled). What is not clear to me is why we have so many sp_reset_connection calls, i.e. a series of Audit Login, SQL:BatchStarting, RPC:Starting and Audit Logout for each iteration of the for loop below. I can see that there is constant switching between the tempdb and master databases, which leads me to conclude that we lose the context when the next connection is created by fetching it from the pool based on the ConnectionString argument. I can see that every 15ms I get 100-200 logins/logouts per second (reported at the same time by Profiler), and then after another 15ms I again get a series of 100-200 logins/logouts per second. I need clarification on how this might affect much more complex insert queries in a production environment.
    I use Enterprise Library 2006, the code is compiled with VS 2005, and it is a console application that parses a flat file with tens of thousands of rows, grouping parent-child rows. It runs on an application server and calls two stored procedures on a remote SQL Server 2005: the first inserts a parent record and retrieves the Identity value, which is then used to call the second stored procedure one, two or more times (sometimes several thousand), inserting child records. The child table has close to 10 million records with 5-10 indexes, some of them covering non-clustered. There is a pretty complex Insert trigger that copies each inserted detail record to an archive table. All in all I only get 7 inserts per second, which means it can take 2-4 hours for 50 thousand records. When I run Profiler on the test server (which is almost equivalent to the production server) I can see that there is about 120ms between the Audit Logout and Audit Login trace entries, which barely gives me the chance to insert about 8 records.
    So my question is whether there is some way to improve the inserting of records, since the company loads 100 thousand records, does daily planning, and has an SLA to fulfill client requests coming in as flat-file orders; some big files of 10 thousand rows have to be processed (imported) quickly. 4 hours to import 60 thousand records should be reduced to 30 minutes. I was thinking of using the BatchSize of the DataAdapter to send multiple stored procedure calls, SQL bulk inserts to batch multiple inserts from a DataReader or DataTable, or SSIS fast load. But I don't know how to properly analyze re-indexing and stats population, and maybe those have to take some time to finish. What is worse is that the company uses the biggest table for reporting and other online processing, so indexes cannot be dropped. I manage transactions manually by setting a field to a value and doing a transactional update changing that value to a new value that other applications use to get committed rows. Please advise how to approach this problem. For now I am trying to use staging tables with minimal logging in a separate database and no indexes, and I will try to do batched (massive) parent-child inserts. I believe the Production DB has the simple recovery model, but it could be full recovery. If the DB user that is being used by my .NET console application has the bulkadmin role, does that mean its bulk inserts are minimally logged? I understand that when a table has a clustered and many non-clustered indexes, inserts are still logged for each row. Connection pooling is working, but with many logins/logouts. Why?
for (int i = 1; i <= 10000; i++){ using (SqlConnection conn = new SqlConnection("server=(local);database=master;integrated security=sspi;")) {conn.Open(); using (SqlCommand cmd = conn.CreateCommand()){ cmd.CommandText = "use tempdb"; cmd.ExecuteNonQuery();}}} SQL Server Profiler trace: Audit Login master 2010-01-13 23:18:45.337 1 - Nonpooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.337 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.337 Audit Logout tempdb 2010-01-13 23:18:45.337 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled Audit Login -- network protocol master 2010-01-13 23:18:45.383 2 - Pooled SQL:BatchStarting use tempdb master 2010-01-13 23:18:45.383 RPC:Starting exec sp_reset_conn tempdb 2010-01-13 23:18:45.383 Audit Logout tempdb 2010-01-13 23:18:45.383 2 - Pooled
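Update: On the pooling question, my understanding now is that each "2 - Pooled" Audit Login/Audit Logout pair is just the pooled connection being handed out and returned, with sp_reset_connection being the lightweight reset the pool issues on reuse; the physical connection is not being rebuilt each time. For the throughput problem, I am leaning toward bulk-loading the child rows into an unindexed staging table with SqlBulkCopy and then moving them into the real table in one set-based statement. A sketch of what I have in mind (the table and variable names are placeholders, not our real schema):

// Sketch only - stage child rows from a filled DataTable via SqlBulkCopy.
// Requires: using System.Data; using System.Data.SqlClient;
using (SqlConnection conn = new SqlConnection(connectionString))
{
    conn.Open();
    using (SqlBulkCopy bulk = new SqlBulkCopy(conn))
    {
        bulk.DestinationTableName = "dbo.ChildStaging"; // heap, no indexes
        bulk.BatchSize = 5000;         // commit in chunks
        bulk.BulkCopyTimeout = 0;      // no timeout for large loads
        bulk.WriteToServer(childRows); // childRows is a populated DataTable
    }
}
// Then on the server, one set-based move:
// INSERT INTO dbo.Child (...) SELECT ... FROM dbo.ChildStaging;

Is that a reasonable direction given the trigger and the indexes on the real table?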


  • CodePlex Daily Summary for Thursday, March 01, 2012

    CodePlex Daily Summary for Thursday, March 01, 2012Popular ReleasesMetodología General Ajustada - MGA: 01.09.08: Cambios John: Cambios en el MDI: Habilitación del menú e ícono de Imprimir. Deshabilitación de menú Ayuda y opciones de Importar y Exportar del menú Proyectos temporalmente. Integración con código de Crystal Report. Validaciones con Try-Catch al generar los reportes, personalización de los formularios en estilos y botones y validación de selección de tipo de reporte. Creación de instalador con TODOS los cambios y la creación de las carpetas asociadas a los RPT.WatchersNET CKEditor™ Provider for DotNetNuke®: CKEditor Provider 1.14.01: Whats NewAdded New Plugin "Ventrian News Articles Link Selector" to select an Article Link from the News Article Module (This Plugin is not visible by default in your Toolbar, you need to manually add the 'newsarticleslinks' to your toolbarset) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEditorNewsArticlesLinks.png File-Browser: Added Paging to the Files List. You can define the Page Size in the Options (Default Value: 20) http://www.watchersnet.de/Portals/0/screenshots/dnn/CKEdito...MyRouter (Virtual WiFi Router): MyRouter 1.0 (Beta): A friendlier User Interface. A logger file to catch exceptions so you may send it to use to improve and fix any bugs that may occur. A feedback form because we always love hearing what you guy's think of MyRouter. Check for update menu item for you to stay up to date will the latest changes. Facebook fan page so you may spread the word and share MyRouter with friends and family And Many other exciting features were sure your going to love!WPF Sound Visualization Library: WPF SVL 0.3 (Source, Binaries, Examples, Help): Version 0.3 of WPFSVL. This includes three new controls: an equalizer, a digital clock, and a time editor.Thai Flood Watch: Thai Flood Watch - Source: non commercial use only ** This project supported by Department of Computer Science KhonKaen University Thailand.ZXing.Net: ZXing.Net 0.4.0.0: sync with rev. 2196 of the java version important fix for RGBLuminanceSource generating barcode bitmaps Windows Phone demo client (only tested with emulator, because I don't have a Windows Phone) Barcode generation support for Windows Forms demo client Webcam support for Windows Forms demo clientOrchard Project: Orchard 1.4: Please read our release notes for Orchard 1.4: http://docs.orchardproject.net/Documentation/Orchard-1-4-Release-Notes.NET Assembly Information: Assembly Information 2.1.0.1: - Fixed the issue in which AnyCPU binaries were shown as 32bit - Added support to show the errors in-case if some dlls failed to load.FluentData -Micro ORM with a fluent API that makes it simple to query a database: FluentData version 1.2: New features: - QueryValues method - Added support for automapping to enumerations (both int and string are supported). Fixed 2 reported issues.NetSqlAzMan - .NET SQL Authorization Manager: 3.6.0.15: 3.6.0.15 28-Feb-2012 • Fix: The communication object, System.ServiceModel.Channels.ServiceChannel, cannot be used for communication because it is in the Faulted state. Work Item 10435: http://netsqlazman.codeplex.com/workitem/10435 • Fix: Made StorageCache thread safe. Thanks to tangrl. • Fix: Members property of SqlAzManApplicationGroup is not functioning. Thanks to tangrl. Work Item 10267: http://netsqlazman.codeplex.com/workitem/10267 • Fix: Indexer are making database calls. 
Thanks to t...SCCM Client Actions Tool: Client Actions Tool v1.1: SCCM Client Actions Tool v1.1 is the latest version. It comes with following changes since last version: Added stop button to stop the ongoing process. Added action "Query update status". Added option "saveOnlineComputers" in config.ini to enable saving list of online computers from last session. Default value for "LatestClientVersion" set to SP2 R3 (4.00.6487.2157). Wuauserv service manual startup mode is considered healthy on Windows 7. Errors are now suppressed in checkReleases...Kinect PowerPoint Control: Kinect PowerPoint Control v1.1: Updated for Kinect SDK 1.0.SharpCompress - a fully native C# library for RAR, 7Zip, Zip, Tar, GZip, BZip2: SharpCompress 0.8: API Updates: SOLID Extract Method for Archives (7Zip and RAR). ExtractAllEntries method on Archive classes will extract archives as a streaming file. This can offer better 7Zip extraction performance if any of the entries are solid. The IsSolid method on 7Zip archives will return true if any are solid. Removed IExtractionListener was removed in favor of events. Unit tests show example. Bug fixes: PPMd passes tests plus other fixes (Thanks Pavel) Zip used to always write a Post Descri...Social Network Importer for NodeXL: SocialNetImporter(v.1.3): This new version includes: - Download new networks for Facebook fan pages. - New options for downloading more posts - Bug fixes To use the new graph data provider, do the following: Unzip the Zip file into the "PlugIns" folder that can be found in the NodeXL installation folder (i.e "C:\Program Files\Social Media Research Foundation\NodeXL Excel Template\PlugIns") Open NodeXL template and you can access the new importer from the "Import" menuASP.NET REST Services Framework: Release 1.1 - Standard version: Beginning from v1.1 the REST-services Framework is compatible with ASP.NET Routing model as well with CRUD (Create, Read, Update, and Delete) principle. These two are often important when building REST API functionality within your application. It also includes ability to apply Filters to a class to target all WebRest methods, as well as some performance enhancements. New version includes Metadata Explorer providing ability exploring the existing services that becomes essential as the number ...SQL Live Monitor: SQL Live Monitor 1.31: A quick fix to make it this version work with SQL 2012. Version 2 already has 2012 working, but am still developing the UI in version 2, so this is just an interim fix to allow user to monitor SQL 2012.Content Slider Module for DotNetNuke: 01.02.00: This release has the following updates and new features: Feature: One-Click Enabling of Pager Setting Feature: Cache Sliders for Performance Feature: Configurable Cache Setting Enhancement: Transitions can be Selected Bug: Secure Folder Images not Viewable Bug: Sliders Disappear on Postback Bug: Remote Images Cause Error Bug: Deleted Images Cause Error System Requirements DotNetNuke v06.00.00 or newer .Net Framework v3.5 SP1 or newer SQL Server 2005 or newerImage Resizer for Windows: Image Resizer 3 Preview 3: Here is yet another iteration toward what will eventually become Image Resizer 3. This release is stable. However, I'm calling it a preview since there are still many features I'd still like to add before calling it complete. Updated on February 28 to fix an issue with installing on multi-user machines. As usual, here is my progress report. 
Done Preview 3 Fix: 3206 3076 3077 5688 Fix: 7420 Fix: 7527 Fix: 7576 7612 Preview 2 6308 6309 Fix: 7339 Fix: 7357 Preview 1 UI...Finestra Virtual Desktops: 2.5.4500: This is a bug fix release for version 2.5. It fixes several things and adds a couple of minor features. See the 2.5 release notes for more information on the major new features in that version. Important - If Finestra crashes on startup for you, you must install the Visual C++ 2010 runtime from http://www.microsoft.com/download/en/details.aspx?id=5555. Fixes a bug with window animations not refreshing the screen on XP and with DWM off Fixes a bug with with crashing on XP due to a bug in t...Media Companion: MC 3.432b Release: General Now remembers window location. Catching a few more exceptions when an image is blank TV A couple of UI tweaks Movies Fixed the actor name displaying HTML Fixed crash when using Save files as "movie.nfo", "movie.tbn", & "fanart.jpg" New CSV template for HTML output function Added <createdate> tag for HTML output A couple of UI tweaks Known Issues Multiepisodes are not handled correctly in MC. The created nfo is valid, but they are not displayed in MC correctly & saving the...New Projectsabac: abac cn websiteAION Launcher: simple aion launcher...just edit the background image of your choosing inside the code and other things such as the links for the buttons and the ip adress and port of the serverAXTFSTool: Dynamics AX tool that connects to your project's TFS and lists the objects your colleagues have changed. Written in C#, still under development and improvements. Useful for team leaders, deployment managers, etc.cookieTopo: Topo map viewerCrmFetchKit.js: Simple Library at allows the execution of fetchxml queries via JavaScript for Dynamics CRM 2011 (using the new WCF endpoints). Like the CrmRestKit this framework uses the promise/A capacities of jQuery. The code and the idea for this framework bases on the CrmServiceToolkit (http://crmtoolkit.codeplex.com/) developed by Daniel Cai. cy univerX engine: ????????DNSAPI.NET: A common API for managing DNS servers on Windows. This project is based on the work I started back in 2002 when I needed to create a web front-end for Windows' DNS server using the .Net framework. The plan is to expand on the project and include support for the BIND server on Windows too. ego.net: ego.netfdTFS: Team Foundation Server Source Control Plugin for FlashDevelopGeoWPS: GeoWPS is an implementation of the OGC WPS. It will be developed in C#. IThink: A new project.King Garden: Boy King's .net practical projects.King Garret: Boy King's .net learning projects.LottoCheck: Follow LottoNot-Terraria: This is a like terraria game but NOT terrariaPassword Protector: Password Protector SharePoint 2010 BlobCache Manager: Manage your web application's blobcache settings directly in the central administration.SharePoint 2010 SilverLight Multiple File Uploader: SharePoint 2010 SilverLight Multiple File Uploader for Documents Libraries with MetaData.Sharepoint Tool Collection: I want to Integrate Various Utilities of Sharepoint at one place. It is for easy working of user or developer. Ex-1. A utility which takes some params & csv file and upload 100s of items on the sharepoint list easily. Ex-2 A utility to upload documents in a library. etc.SQLCLR Cmd Exec Framework Example: For users of MS SQL Server, xp_cmdshell is a utility that we usually want to have disabled. However there are still cases where calling a command line is needed. 
This project provides an framework/example to make command line calls. It is not meant as an xp_cmdshell replacement but as a workaround.Symmetric Designs Python 3.2: Symmetric Designs for Python 3.2 helps graphical artists to design and develop their own designs freestyle. It uses the pygame module for Python 3.2. It can also be analysed in order to get a grasp of graphics programming in Python.Terminsoft open CLR libraries: Terminsoft open CLR libraries. The first is Terminsoft.Intervals, intended for modeling the sets of intervals with elements, the comparison operation is defined for. The second is Terminsoft.Syntax, intended for text parsing and transformation and built upon regular expressions.Thai Flood Watch: Thai Flood Watch provides useful information, up-to-date and visual access to the major canal in Bangkok, Thailand using data from department of drainage and sewerage. Easily monitor river and canal flow information in Bangkok area, right from your hand.TheNerd: Sample video game source code. Using Sunburn.Unity.WebAPI: A library that allows simple Integration of Microsoft's Unity IoC container with ASP.NET's WebAPI. This project includes a bespoke DependencyResolver that creates a child container per HTTP request and disposes of all registered IDisposable instances at the end of the request.Wholemy.RemoteTouch: The project is a remote touch-sensitive keyboard with a customizable interface which allows to supplement control of another computer, regardless of the wires. For example, if you have not so fast Tablet PC - a client and a fast desktop computer - the server using the network.WindowPlace: WindowPlace makes it possible to save Window positions and sizes to a profile. Switching between profiles will effortlessly move and resize your windows. Help improve productivity - especially for multi-monitor systems. Developed in C# using WPF and a few Windows API calls in the background. WP Error Manager (Devv.Core.WPErrorManager): Library to log, handle and report errors on Windows Phone 7 apps. Fully customizable and extremely easy to implement. Works with any WP7 app. Tested with the emulator, Nokia Lumia 800 and Samsung Focus Flash.WPMatic: Windows Phone7 App to manage Homematic (eQ-3) Devices. The App is like the Homematic Central Configuration Unit (CCU) in German.www.Nabaza.com Freeware and Ebooks: www.Nabaza.com Freeware and Ebooks by William R. NabazaZap: Zap is a light weight .NET communication framework. It is designed for programs running in local area network. Zap provides code generation tool that enables user to call remote methods, add/remote event listener to remote objects, while hides the lower details.


  • jQuery and Windows Azure

    - by Stephen Walther
    The goal of this blog entry is to describe how you can host a simple Ajax application created with jQuery in the Windows Azure cloud. In this blog entry, I make no assumptions. I assume that you have never used Windows Azure and I am going to walk through the steps required to host the application in the cloud in agonizing detail. Our application will consist of a single HTML page and a single service. The HTML page will contain jQuery code that invokes the service to retrieve and display set of records. There are five steps that you must complete to host the jQuery application: Sign up for Windows Azure Create a Hosted Service Install the Windows Azure Tools for Visual Studio Create a Windows Azure Cloud Service Deploy the Cloud Service Sign Up for Windows Azure Go to http://www.microsoft.com/windowsazure/ and click the Sign up Now button. Select one of the offers. I selected the Introductory Special offer because it is free and I just wanted to experiment with Windows Azure for the purposes of this blog entry.     To sign up, you will need a Windows Live ID and you will need to enter a credit card number. After you finish the sign up process, you will receive an email that explains how to activate your account. Accessing the Developer Portal After you create your account and your account is activated, you can access the Windows Azure developer portal by visiting the following URL: http://windows.azure.com/ When you first visit the developer portal, you will see the one project that you created when you set up your Windows Azure account (In a fit of creativity, I named my project StephenWalther).     Creating a New Windows Azure Hosted Service Before you can host an application in the cloud, you must first add a hosted service to your project. Click your project on the summary page and click the New Service link. You are presented with the option of creating either a new Storage Account or a new Hosted Services.     Because we have code that we want to run in the cloud – the WCF Service -- we want to select the Hosted Services option. After you select this option, you must provide a name and description for your service. This information is used on the developer portal so you can distinguish your services.     When you create a new hosted service, you must enter a unique name for your service (I selected jQueryApp) and you must select a region for this service (I selected Anywhere US). Click the Create button to create the new hosted service.   Install the Windows Azure Tools for Visual Studio We’ll use Visual Studio to create our jQuery project. Before you can use Visual Studio with Windows Azure, you must first install the Windows Azure Tools for Visual Studio. Go to http://www.microsoft.com/windowsazure/ and click the Get Tools and SDK button. The Windows Azure Tools for Visual Studio works with both Visual Studio 2008 and Visual Studio 2010.   Installation of the Windows Azure Tools for Visual Studio is painless. You just need to check some agreement checkboxes and click the Next button a few times and installation will begin:   Creating a Windows Azure Application After you install the Windows Azure Tools for Visual Studio, you can choose to create a Windows Azure Cloud Service by selecting the menu option File, New Project and selecting the Windows Azure Cloud Service project template. I named my new Cloud Service with the name jQueryApp.     Next, you need to select the type of Cloud Service project that you want to create from the New Cloud Service Project dialog.   
I selected the C# ASP.NET Web Role option. Alternatively, I could have picked the ASP.NET MVC 2 Web Role option if I wanted to use jQuery with ASP.NET MVC or even the CGI Web Role option if I wanted to use jQuery with PHP. After you complete these steps, you end up with two projects in your Visual Studio solution. The project named WebRole1 represents your ASP.NET application and we will use this project to create our jQuery application. Creating the jQuery Application in the Cloud We are now ready to create the jQuery application. We’ll create a super simple application that displays a list of records retrieved from a WCF service (hosted in the cloud). Create a new page in the WebRole1 project named Default.htm and add the following code: <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> var products = [ {name:"Milk", price:4.55}, {name:"Yogurt", price:2.99}, {name:"Steak", price:23.44} ]; $("#productTemplate").render(products).appendTo("#productContainer"); </script> </body> </html> The jQuery code in this page simply displays a list of products by using a template. I am using a jQuery template to format each product. You can learn more about using jQuery templates by reading the following blog entry by Scott Guthrie: http://weblogs.asp.net/scottgu/archive/2010/05/07/jquery-templates-and-data-linking-and-microsoft-contributing-to-jquery.aspx You can test whether the Default.htm page is working correctly by running your application (hit the F5 key). The first time that you run your application, a database is set up on your local machine to simulate cloud storage. You will see the following dialog: If the Default.htm page works as expected, you should see the list of three products: Adding an Ajax-Enabled WCF Service In the previous section, we created a simple jQuery application that displays an array by using a template. The application is a little too simple because the data is static. In this section, we’ll modify the page so that the data is retrieved from a WCF service instead of an array. First, we need to add a new Ajax-enabled WCF Service to the WebRole1 project. Select the menu option Project, Add New Item and select the Ajax-enabled WCF Service project item. Name the new service ProductService.svc. Modify the service so that it returns a static collection of products. 
The final code for the ProductService.svc should look like this: using System.Collections.Generic; using System.ServiceModel; using System.ServiceModel.Activation; namespace WebRole1 { public class Product { public string name { get; set; } public decimal price { get; set; } } [ServiceContract(Namespace = "")] [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)] public class ProductService { [OperationContract] public IList<Product> SelectProducts() { var products = new List<Product>(); products.Add(new Product {name="Milk", price=4.55m} ); products.Add(new Product { name = "Yogurt", price = 2.99m }); products.Add(new Product { name = "Steak", price = 23.44m }); return products; } } }   In real life, you would want to retrieve the list of products from storage instead of a static array. We are being lazy here. Next you need to modify the Default.htm page to use the ProductService.svc. The jQuery script in the following updated Default.htm page makes an Ajax call to the WCF service. The data retrieved from the ProductService.svc is displayed in the client template. <!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"> <html xmlns="http://www.w3.org/1999/xhtml"> <head> <title>Products</title> <style type="text/css"> #productContainer div { border:solid 1px black; padding:5px; margin:5px; } </style> </head> <body> <h1>Product Catalog</h1> <div id="productContainer"></div> <script id="productTemplate" type="text/html"> <div> Name: {{= name }} <br /> Price: {{= price }} </div> </script> <script src="Scripts/jquery-1.4.2.js" type="text/javascript"></script> <script src="Scripts/jquery.tmpl.js" type="text/javascript"></script> <script type="text/javascript"> $.post("ProductService.svc/SelectProducts", function (results) { var products = results["d"]; $("#productTemplate").render(products).appendTo("#productContainer"); }); </script> </body> </html>   Deploying the jQuery Application to the Cloud Now that we have created our jQuery application, we are ready to deploy our application to the cloud so that the whole world can use it. Right-click your jQueryApp project in the Solution Explorer window and select the Publish menu option. When you select publish, your application and your application configuration information is packaged up into two files named jQueryApp.cspkg and ServiceConfiguration.cscfg. Visual Studio opens the directory that contains the two files. In order to deploy these files to the Windows Azure cloud, you must upload these files yourself. Return to the Windows Azure Developers Portal at the following address: http://windows.azure.com/ Select your project and select the jQueryApp service. You will see a mysterious cube. Click the Deploy button to upload your application.   Next, you need to browse to the location on your hard drive where the jQueryApp project was published and select both the packaged application and the packaged application configuration file. Supply the deployment with a name and click the Deploy button.     While your application is in the process of being deployed, you can view a progress bar.     Running the jQuery Application in the Cloud Finally, you can run your jQuery application in the cloud by clicking the Run button.   It might take several minutes for your application to initialize (go grab a coffee). 
After WebRole1 finishes initializing, you can navigate to the following URL to view your live jQuery application in the cloud: http://jqueryapp.cloudapp.net/default.htm The page is hosted on the Windows Azure cloud and the WCF service executes every time that you request the page to retrieve the list of products. Summary Because we started from scratch, we needed to complete several steps to create and deploy our jQuery application to the Windows Azure cloud. We needed to create a Windows Azure account, create a hosted service, install the Windows Azure Tools for Visual Studio, create the jQuery application, and deploy it to the cloud. Now that we have finished this process once, modifying our existing cloud application or creating a new cloud application is easy. jQuery and Windows Azure work nicely together. We can take advantage of jQuery to build applications that run in the browser and we can take advantage of Windows Azure to host the backend services required by our jQuery application. The big benefit of Windows Azure is that it enables us to scale. If, all of the sudden, our jQuery application explodes in popularity, Windows Azure enables us to easily scale up to meet the demand. We can handle anything that the Internet might throw at us.
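One quick postscript: the $.post() call above relies on the WCF endpoint being forgiving about the request's content type. If the call ever comes back empty or errors in your environment, try the equivalent $.ajax call that states the JSON content type explicitly. This is just a sketch against the same ProductService.svc and the same "d" wrapper used above:

$.ajax({
    type: "POST",
    url: "ProductService.svc/SelectProducts",
    contentType: "application/json; charset=utf-8", // be explicit for the WCF endpoint
    dataType: "json",
    success: function (results) {
        var products = results["d"]; // ASP.NET AJAX wraps the payload in "d"
        $("#productTemplate").render(products).appendTo("#productContainer");
    }
});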


  • pre-commit hook in svn: could not be translated from the native locale to UTF-8

    - by Alexandre Moraes
    Hi everybody, I have a problem with my pre-commit hook. This hook test if a file is locked when the user commits. When a bad condition happens, it should output that the another user is locking this file or if nobody is locking, it should show "you are not locking this file message (file´s name)". The error happens when the file´s name has some latin character like "ç" and tortoise show me this in the output. Commit failed (details follow): Commit blocked by pre-commit hook (exit code 1) with output: [Erro output could not be translated from the native locale to UTF-8.] Do you know how can I solve this? Thanks, Alexandre My shell script is here: #!/bin/sh REPOS="$1" TXN="$2" export LANG="en_US.UTF-8" /app/svn/hooks/ensure-has-need-lock.pl "$REPOS" "$TXN" if [ $? -ne 0 ]; then exit 1; fi exit 0 And my perl is here: !/usr/bin/env perl #Turn on warnings the best way depending on the Perl version. BEGIN { if ( $] >= 5.006_000) { require warnings; import warnings; } else { $^W = 1; } } use strict; use Carp; &usage unless @ARGV == 2; my $repos = shift; my $txn = shift; my $svnlook = "/usr/local/bin/svnlook"; my $user; my $ok = 1; foreach my $program ($svnlook) { if (-e $program) { unless (-x $program) { warn "$0: required program $program' is not executable, ", "edit $0.\n"; $ok = 0; } } else { warn "$0: required program $program' does not exist, edit $0.\n"; $ok = 0; } } exit 1 unless $ok; unless (-e $repos){ &usage("$0: repository directory $repos' does not exist."); } unless (-d $repos){ &usage("$0: repository directory $repos' is not a directory."); } foreach my $user_tmp (&read_from_process($svnlook, 'author', $repos, '-t', $txn)) { $user = $user_tmp; } my @errors; foreach my $transaction (&read_from_process($svnlook, 'changed', $repos, '-t', $txn)){ if ($transaction =~ /^U. (.*[^\/])$/){ my $file = $1; my $err = 0; foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ $err = 1; if($locks=~ /Owner: (.*)/){ if($1 != $user){ push @errors, "$file : You are not locking this file!"; } } } if($err==0){ push @errors, "$file : You are not locking this file!"; } } elsif($transaction =~ /^D. (.*[^\/])$/){ my $file = $1; my $tchan = &read_from_process($svnlook, 'lock', $repos, $file); foreach my $locks (&read_from_process($svnlook, 'lock', $repos, $file)){ push @errors, "$1 : cannot delete locked Files"; } } elsif($transaction =~ /^A. (.*[^\/])$/){ my $needs_lock; my $path = $1; foreach my $prop (&read_from_process($svnlook, 'proplist', $repos, '-t', $txn, '--verbose', $path)){ if ($prop =~ /^\s*svn:needs-lock : (\S+)/){ $needs_lock = $1; } } if (not $needs_lock){ push @errors, "$path : svn:needs-lock is not set. Pleas ask TCC for support."; } } } if (@errors) { warn "$0:\n\n", join("\n", @errors), "\n\n"; exit 1; } else { exit 0; } sub usage { warn "@_\n" if @_; die "usage: $0 REPOS TXN-NAME\n"; } sub safe_read_from_pipe { unless (@_) { croak "$0: safe_read_from_pipe passed no arguments.\n"; } print "Running @_\n"; my $pid = open(SAFE_READ, '-|'); unless (defined $pid) { die "$0: cannot fork: $!\n"; } unless ($pid) { open(STDERR, ">&STDOUT") or die "$0: cannot dup STDOUT: $!\n"; exec(@_) or die "$0: cannot exec @_': $!\n"; } my @output; while (<SAFE_READ>) { chomp; push(@output, $_); } close(SAFE_READ); my $result = $?; my $exit = $result >> 8; my $signal = $result & 127; my $cd = $result & 128 ? 
"with core dump" : ""; if ($signal or $cd) { warn "$0: pipe from @_' failed $cd: exit=$exit signal=$signal\n"; } if (wantarray) { return ($result, @output); } else { return $result; } } sub read_from_process { unless (@_) { croak "$0: read_from_process passed no arguments.\n"; } my ($status, @output) = &safe_read_from_pipe(@_); if ($status) { if (@output) { die "$0: @_' failed with this output:\n", join("\n", @output), "\n"; } else { die "$0: @_' failed with no output.\n"; } } else { return @output; } }


  • Rails 3 / RVM - Acts_as_list compiled locally - Why Can't Ruby See This Gem?

    - by rabbit on rails
    I cannot figure out why rails/ruby cannot see this gem, despite each telling me that the gem is visible. I compiled this gem locally from a github branch since the main version seems to be broken in Rails 3. Or perhaps I am missing something else entirely. Ovid:lightserve dlipa$ gem list *** LOCAL GEMS *** .. acts_as_list (0.2.1) .. And Ovid:lightserve dlipa$ cat Gemfile ... gem "acts_as_list", "0.2.1" ... And Ovid:lightserve dlipa$ bundle install ... Using acts_as_list (0.2.1) Your bundle is updated! Use `bundle show [gemname]` to see where a bundled gem is installed But Ovid:lightserve dlipa$ r c RubyGems Environment: - RUBYGEMS VERSION: 1.6.1 - RUBY VERSION: 1.9.2 (2011-02-18 patchlevel 180) [x86_64-darwin10.6.0] - INSTALLATION DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - RUBY EXECUTABLE: /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/ruby - EXECUTABLE DIRECTORY: /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bin - RUBYGEMS PLATFORMS: - ruby - x86_64-darwin-10 - GEM PATHS: - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180 - /Users/dlipa/.rvm/gems/ruby-1.9.2-p180@global - GEM CONFIGURATION: - :update_sources => true - :verbose => true - :benchmark => false - :backtrace => false - :bulk_threshold => 1000 - :sources => ["http://rubygems.org/", "http://gems.github.com"] - REMOTE SOURCES: - http://rubygems.org/ - http://gems.github.com Loading development environment (Rails 3.0.5) ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `block in require' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `block in load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:596:in `new_constants_in' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:225:in `load_dependency' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/activesupport-3.0.5/lib/active_support/dependencies.rb:239:in `require' from (irb):1 from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:44:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands/console.rb:8:in `start' from /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/gems/railties-3.0.5/lib/rails/commands.rb:23:in `<top (required)>' from script/rails:6:in `require' from script/rails:6:in `<main>' ruby-1.9.2-p180 :002 > And Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/lib/ruby/site_ruby/1.9.1/rubygems/custom_require.rb:36:in `require' from (irb):1 from /Users/dlipa/.rvm/rubies/ruby-1.9.2-p180/bin/irb:16:in `<main>' ruby-1.9.2-p180 :002 > Can anyone explain why this might be happening? I'd really appreciate it! ** UPDATE -- Response to Andrew Marshall's suggestion** I changed Gemfile to read the gem directly from git, but it did not resolve the problem. Does this mean that there is a problem with this gem? 
The error message is not very helpful ;-) Removed: Ovid:lightserve dlipa$ bundle show acts_as_list Could not find gem 'acts_as_list' in the current bundle. Then added back via: gem "acts_as_list", :git => "git://github.com/vpereira/acts_as_list.git" Ovid:lightserve dlipa$ bundle install Updating git://github.com/vpereira/acts_as_list.git ... Same problem even though bundle show matches the commit on that page: Ovid:lightserve dlipa$ bundle show acts_as_list /Users/dlipa/.rvm/gems/ruby-1.9.2-p180/bundler/gems/acts_as_list-4cb76a8b198c Ovid:lightserve dlipa$ irb ruby-1.9.2-p180 :001 > require 'acts_as_list' LoadError: no such file to load -- acts_as_list from /Users/dlipa/.rvm/rubies/ruby-1.9.2-.. I just looked in the gem and it appears there is no file called 'acts_as_list' in the gem. So it appears to be idiosyncratic, albeit poorly reported by Rails/Ruby. The API appears to have changed to: ruby-1.9.2-p180 :003 > require 'active_record/acts/list' => nil ruby-1.9.2-p180 :004 > ActiveRecord::Acts::List => ActiveRecord::Acts::List
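Update 2: since the gem ships lib/active_record/acts/list.rb but no lib/acts_as_list.rb, I'm going to try pointing Bundler at the file that actually exists, so Bundler.require stops raising a LoadError. Something like this in the Gemfile (untested):

# Tell Bundler which file to require, since the gem has no acts_as_list.rb
gem "acts_as_list",
    :git => "git://github.com/vpereira/acts_as_list.git",
    :require => "active_record/acts/list"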


  • Xcode is not calling asp.net webservice

    - by vaibhav
    I have oracle database and using webservice i want to insert some records in to it So i created webservice in asp.net as follows public bool PickPill(string Me_id, string Mem_device_id, string Test_datetime, string Creation_id, string PillBayNo) { string Hed_seq_id = Hed_seq_Id(); bool ResultHED = InsHealthEData(Hed_seq_id, Mem_device_id, Me_id, Test_datetime, Creation_id); bool ResultHET = InsHealthETest(Hed_seq_id, PillBayNo, Test_datetime, Creation_id); if (ResultHED == ResultHET == true) return true; else return false; } this function did all data insertion trick for me i tested this service on the local mechine with ip address http:72.44.151.178/PickPillService.asmx then, I see an example on how to attach asp.net web service to iphone apps http://www.devx.com/wireless/Article/43209/0/page/4 then i created simillar code in xcode which has 2 files ConsumePillServiceViewController.m ConsumePillServiceViewController.h file Now, Using Designer of xcode i created 5 textboxes(Me_id,Mem_device_id,Test_datetime,Creation_id,PillBayNo) with all parameters hardcode as our service demands then modify my ConsumePillServiceViewController.h file as follows @interface ConsumePillServiceViewController : UIViewController { //---outlets--- IBOutlet UITextField *Me_id; IBOutlet UITextField *Mem_device_id; IBOutlet UITextField *Test_datetime; IBOutlet UITextField *Creation_id; IBOutlet UITextField *PillBayNo; //---web service access--- NSMutableData *webData; NSMutableString *soapResults; NSURLConnection *conn; } @property (nonatomic, retain) UITextField *Me_id; @property (nonatomic, retain) UITextField *Mem_device_id; @property (nonatomic, retain) UITextField *Test_datetime; @property (nonatomic, retain) UITextField *Creation_id; @property (nonatomic, retain) UITextField *PillBayNo; - (IBAction)buttonClicked:(id)sender; @end and ConsumePillServiceViewController.m as follows import "ConsumePillServiceViewController.h" @implementation ConsumePillServiceViewController @synthesize Me_id; @synthesize Mem_device_id; @synthesize Test_datetime; @synthesize Creation_id; @synthesize PillBayNo; (IBAction)buttonClicked:(id)sender { NSString *soapMsg = @"" "" "" ""; NSString *smMe_id= [soapMsg stringByAppendingString: [NSString stringWithFormat: @"%@",Me_id.text]]; NSString *smMem_device_id= [smMe_id stringByAppendingString: [NSString stringWithFormat: @"%@",Mem_device_id.text]]; NSString *smTest_datetime= [smMem_device_id stringByAppendingString: [NSString stringWithFormat: @"%@",Test_datetime.text]]; NSString *smCreation_id= [smTest_datetime stringByAppendingString: [NSString stringWithFormat: @"%@",Creation_id.text]]; NSString *smPillBayNo= [smCreation_id stringByAppendingString: [NSString stringWithFormat: @"%@",PillBayNo.text]]; NSString *smRestMsg= [smPillBayNo stringByAppendingString: @"" "" ""]; soapMsg=smRestMsg; //---print it to the Debugger Console for verification--- NSLog(soapMsg); NSURL *url = [NSURL URLWithString: //create a URL load request object using instances : @"http://72.44.151.178/PickPillService.asmx"];//of the NSMutableURLRequest and NSURL objects NSMutableURLRequest *req = [NSMutableURLRequest requestWithURL:url]; //opulate the request object with the various headers, such as Content-Type, SOAPAction, and Content-Length. 
//You also set the HTTP method and HTTP body NSString *msgLength = [NSString stringWithFormat:@"%d", [soapMsg length]]; [req addValue:@"text/xml; charset=utf-8" forHTTPHeaderField:@"Content-Type"]; [req addValue:@"http://tempuri.org/PickPill" forHTTPHeaderField:@"SOAPAction"]; [req addValue:msgLength forHTTPHeaderField:@"Content-Length"]; //---set the HTTP method and body--- [req setHTTPMethod:@"POST"]; [req setHTTPBody: [soapMsg dataUsingEncoding:NSUTF8StringEncoding]]; conn = [[NSURLConnection alloc] initWithRequest:req delegate:self]; //establish the connection with the web service, if (conn) { //you use the NSURLConnection class together with the request object just created webData = [[NSMutableData data] retain];//webData object use to receive incoming data from the web service } }//End of button clicked event -(void) connection:(NSURLConnection *) connection //Recive response didReceiveResponse:(NSURLResponse *) response { [webData setLength: 0]; } -(void) connection:(NSURLConnection *) connection //Repeative call method and append data to webData didReceiveData:(NSData *) data { [webData appendData:data]; } -(void) connection:(NSURLConnection *) connection//If error occure error should be displayed didFailWithError:(NSError *) error { [webData release]; [connection release]; } -(void) connectionDidFinishLoading:(NSURLConnection *) connection { NSLog(@"DONE. Received Bytes: %d", [webData length]); NSString *theXML = [[NSString alloc] initWithBytes: [webData mutableBytes] length:[webData length] encoding:NSUTF8StringEncoding]; //---shows the XML--- NSLog(theXML); [connection release]; [webData release]; } (void)didReceiveMemoryWarning { // Releases the view if it doesn't have a superview. [super didReceiveMemoryWarning]; // Release any cached data, images, etc that aren't in use. } (void)viewDidUnload { // Release any retained subviews of the main view. // e.g. self.myOutlet = nil; } (void)dealloc { [Me_id release]; [Creation_id release]; [Mem_device_id release]; [Test_datetime release]; [PillBayNo release]; [soapResults release]; [super dealloc]; } @end I did all things as shown in the website and when i built application it successfully built but in the debuggin window i see (gdb) continue 2010-03-17 09:09:54.595 ConsumePillService[6546:20b] A00000004303101103/13/2010 07:34:38Hboxdata2 (gdb) continue (gdb) continue (gdb) continue 2010-03-17 09:10:05.411 ConsumePillService[6546:20b] DONE. Received Bytes: 476 2010-03-17 09:10:05.412 ConsumePillService[6546:20b] soap:ServerServer was unable to process request. ---> One or more errors occurred during processing of command. ORA-00936: missing expression It should return me true if all things are ok What is this ORA-00936 error all about as it is not releted with webservice Please help me solving this problem Thanks in advance, Vaibhav Deshpande
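EDIT: looking at my post, the soapMsg string literals render as empty ("" "" "" ""); the envelope markup got eaten by the formatting here. What I intended to build is a standard ASMX-style envelope along these lines, reconstructed from the SOAPAction header (http://tempuri.org/PickPill) and the method parameters, so treat it as a sketch rather than my exact code:

NSString *soapMsg = [NSString stringWithFormat:
    @"<?xml version=\"1.0\" encoding=\"utf-8\"?>"
     "<soap:Envelope xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\" "
     "xmlns:xsd=\"http://www.w3.org/2001/XMLSchema\" "
     "xmlns:soap=\"http://schemas.xmlsoap.org/soap/envelope/\">"
     "<soap:Body><PickPill xmlns=\"http://tempuri.org/\">"
     "<Me_id>%@</Me_id><Mem_device_id>%@</Mem_device_id>"
     "<Test_datetime>%@</Test_datetime><Creation_id>%@</Creation_id>"
     "<PillBayNo>%@</PillBayNo>"
     "</PickPill></soap:Body></soap:Envelope>",
    Me_id.text, Mem_device_id.text, Test_datetime.text,
    Creation_id.text, PillBayNo.text];

If any of those element values reach the service as nil or empty, the stored procedure would end up concatenating a broken statement, which could explain an ORA-00936 (missing expression) coming back from Oracle rather than from the web service itself.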


  • How Expedia Made My New Bride Cry

    - by Lance Robinson
    Tweet this? Email Expedia and ask them to give me and my new wife our honeymoon? When Expedia followed up their failure with our honeymoon trip with a complete and total lack of acknowledgement of any responsibility for the problem and endless loops of explaining the issue over and over again - I swore that they would make it right. When they brought my new bride to tears, I got an immediate and endless supply of motivation. I hope you will help me make them make it right by posting our story on Twitter, Facebook, your blog, on Expedia itself, and when talking to your friends in person about their own travel plans.   If you are considering using them now for an important trip - reconsider. Short summary: We arrived early for a flight - but Expedia had made a mistake with the data they supplied to JetBlue and Emirates, which resulted in us not being able to check in (one leg of our trip was missing)!  At the time of this post, three people (myself, my wife, and an exceptionally patient JetBlue employee named Mary) each spent hours on the phone with Expedia.  I myself spent right at 3 hours (according to iPhone records), Lauren spent an hour and a half or so, and poor Mary was probably on the phone for a good 3.5 hours.  This is after 5 hours total at the airport.  If you add up our phone time, that is nearly 8 hours of phone time over a 5 hour period with little or no help, stall tactics (?), run-around, denial, shifting of blame, and holding. Details below (times are approximate): First, my wife and I were married yesterday - June 18th, the 3 year anniversary of our first date. She is awesome. She is the nicest person I have ever known, a ton of fun, absolutely beautiful in every way. Ok enough mushy - here are the dirty details. 2:30 AM - Early Check-in Attempt - we attempted to check-in for our flight online. Some sort of technology error on website, instructed to checkin at desk. 4:30 AM - Arrive at airport. Try to check-in at kiosk, get the same error. We got to the JetBlue desk at RDU International Airport, where Mary helped us. Mary discovered that the Expedia provided itinerary does not match the Expedia provided tickets. We are informed that when that happens American, JetBlue, and others that use the same software cannot check you in for the flight because. Why? Because the itinerary was missing a leg of our flight! Basically we were not shown in the system as definitely being able to make it home. Mary called Expedia and was put on hold by their automated system. 4:55 AM - Mary, myself, and my brand new bride all waited for about 25 minutes when finally I decided I would make a call myself on my iPhone while Mary was on the airport phone. In their automated system, I chose "make a new reservation", thinking they might answer a little more quickly than "customer service". Not surprisingly I was connected to an Expedia person within 1 minute. They informed me that they would have to forward me to a customer service specialist. I explained to them that we were already on hold for that and had been for nearly half an hour, that we were going on our honeymoon and that our flight would be leaving soon - could they please help us. "Yes, I will help you". I hand the phone to JetBlue Mary who explains the situation 3 or 4 times. Obviously I couldn't hear both ends of the conversation at this point, but the Expedia person explained what the problem was by stating exactly what Mary had just spent 15 minutes explaining. 
Mary calmly confirms that this is the problem, and asks Expedia to re-issue the itinerary. Expedia tells Mary that they'll have to transfer her to customer service. Mary asks for someone specific so that we get an answer this time, and goes on hold. Mary gets connected, explains the situation, and then Mary's connection gets terminated.

5:10 AM - Mary calls back to the Expedia automated system again, and we wait for about 5 minutes on hold this time before I pick up my iPhone and call Expedia again myself. Again I go to sales, and a person picks up the phone in less than a minute. I explain the situation and let them know that we are now very close to missing our flight for our honeymoon - could they please help us. "Yes, I will help you". Again I give the phone to Mary, who provides them with a callback number in case we get disconnected again and explains the situation again. More back and forth, with Expedia doing nothing but repeating the same questions, Mary answering the questions with the same information she provided in the original explanation, and Expedia simply restating the problem. Mary again asks them to re-issue the itinerary, and explains that doing so will fix the problem. Expedia again repeats the problem instead of fixing it, and Mary's connection gets terminated.

5:20 AM - Mary again calls back to Expedia. My beautiful bride also calls on her own phone. At this point she is struggling to hold back her tears, stumbling through an explanation of all that has happened and that we are about to miss our flight. Please help us. "Yes, I will help". My beautiful bride's connection gets terminated. Ok, maybe these disconnections aren't an accident. We've now been disconnected 3 times on two different phones.

5:45 AM - I walk away and pleadingly beg a person to help me. They "escalate" the issue to "Rosy" (sp?) at Expedia. I go through the whole song and dance again with Rosy, who gives me the same treatment Mary was given. Rosy blames JetBlue for not having the correct data. Meanwhile Mary is on the phone with Emirates Air (the airline for the second leg of our trip), who agrees with JetBlue that Expedia's data isn't up to date. We are informed by two airport employees that issues like this with Expedia are not uncommon, and that the fix is simple. On the phone with Rosy, I ask her to re-issue the itinerary because we are about to miss our flight. She again explains the problem to me. At this point, I am standing at the window, pleading with Rosy to help us get to our honeymoon, watching our airplane. Then our airplane leaves without us.

6:03 AM - At this point we have missed our flight. Re-issuing the itinerary is no longer a solution. I ask Rosy to start from the beginning and work us up a new trip. She says that she cannot do that. She says that she needs to talk to JetBlue and Emirates and find out why we cannot check in for our flight. I remind Rosy that our flight has already left - I just watched it taxi away - so it no longer matters why (not to mention that we have known why, and the solution, since 4:30 AM). Rosy, can you please book a new trip? Yes, but it will cost $400. Excuse me? Now you can, but it will cost ME to fix your mistake? Rosy says that she can escalate the situation to her supervisor, but that will take 1.5 hours.

6:15 AM - I told Rosy that if they had re-issued the itinerary as JetBlue asked (at 4:30 AM), my new wife and I might be on the airplane now instead of dealing with this on the phone and missing the beginning (and how much more?) of our honeymoon. Rosy said that it was not necessary to re-issue the itinerary. Out of curiosity, I asked Rosy if there was some financial burden on them to re-issue the itinerary. "No", said Rosy. I asked her if it was a large time burden on Expedia to re-issue the itinerary. "No", said Rosy. I directly asked Rosy: Why wouldn't Expedia have re-issued the itinerary when JetBlue asked? No answer. I asked Rosy: If you had re-issued the itinerary at 4:30, isn't it possible that I would be on that flight right now? She actually surprised me by answering "Yes" to that question. So I pointed out that it followed that Expedia was responsible for the fact that we missed our flight, and she immediately went into more about how the problem was with JetBlue - but now it was ALSO an Emirates Air problem as well. I tell Rosy to go ahead and escalate the issue again, and please call me back in that 1.5 hours (which is now about 1 hour and 10 minutes away).

6:30 AM - I start tweeting my frustration from my iPhone. It's now pretty much impossible for us to make it to The Maldives by 3pm, which is the time at which we would need to arrive in order to be allowed service to the actual island where we are staying. Expedia has now given me the run-around for 2 hours, caused me to miss my flight, and worst of all caused my amazing new wife Lauren to miss our honeymoon. You think I was mad? No. Furious. It's ok to make mistakes - but to refuse to fix them and to ruin our honeymoon? No, not ok, Expedia. I swore right then that Expedia would make this right.

7:45 AM - JetBlue Mary is still talking her tail off to other people in JetBlue and Emirates Air. Mary works it out so that if Expedia simply books a new trip, JetBlue and Emirates will both waive all the fees. Now we just have to convince Expedia to fix their mistake and get us on our way! Around this time Expedia Rosy calls me back! I inform her of the excellent work of JetBlue Mary - that JetBlue and Emirates both will waive the fees so Expedia can fix their mistake and get us going on our way. She says that she sees documentation of this in her system and that she needs to put me on hold "for 1 to 10 minutes" to talk to Emirates Air (why, I'm not exactly sure). I say ok.

8:45 AM - After an hour on hold, Rosy comes on the line and asks me to hold more. I ask her to call me back.

9:35 AM - I put down the iPhone Twitter app and pick up the laptop. You think I made some noise with my iPhone? Heh.

11:25 AM - Expedia follows me and sends a canned "We're sorry, DM us the details". If you look at their Twitter feed, 16 out of the most recent 20 tweets are exactly the same canned response. The other 4? Ads. Um - #MultiFAIL? To Expedia: You now have had (as explained above) 8 hours of 3 different people explaining our situation. You know the email address of our Expedia account, you know my web blog, you know my Twitter address, you know my phone number. You also know how upset you have made both me and my new bride by treating us in such a non-caring, scripted, uncooperative, argumentative, and possibly even deceitful manner. In the wise words of the great Kenan Thompson of SNL: "FIX IT!". And no, I'm NOT going away until you make this right. Period.

11:45 AM - Expedia corporate office called. The woman I spoke to was very nice and apologetic. She listened to me tell the story again; she says she understands the problem and is going to work to resolve it. I don't have any details on what exactly that resolution might be; she said she will call me back in 20 minutes. She found out about the problem via Twitter. Thank you Twitter, and all of you who helped. Hopefully social media will win my wife and me our honeymoon, and hopefully Expedia will encourage their customer service teams to treat their customers properly.

12:22 PM - Spoke to Fran again from the Expedia corporate office. She has a flight for us tonight. She is booking it now. We will arrive at our honeymoon destination of beautiful Veligandu Island Resort only 1 day late. She cannot confirm today, but she expects that Expedia will pay for the lost honeymoon night. Thank you everyone for your help. I will reflect more on this whole situation and confirm its resolution after our flight is 100% confirmed. For now, I'm going to take a breather and go kiss my wonderful wife!

1:50 PM - Have not yet received the promised phone call. We did receive an email with a new itinerary for a flight, but the booking is not for specific seats, so there is no guarantee that my wife and I will be able to sit together. With the original booking I carefully selected our seats for every segment of our trip. I decided to call the phone number that Fran from the Expedia corporate office gave me. Its automated voice system identified itself as "Tier 3 Support". I am currently still on hold with them; I have not gotten through to a human yet.

1:55 PM - Fran from Expedia called me back. She confirmed us as booked. She called the airlines to confirm. Unfortunately, Expedia was unwilling or unable to allow us any type of seat selection. It is possible that I won't get to sit next to the woman I married less than a day ago on our 40 total hours of flight time (there and back). In addition, our seats could be the worst seats on the planes, with no reclining seat back or right next to the restroom. Despite this fact (which in my opinion is huge), the horrible inconvenience, the hours at the airport, and the negative Internet publicity that Expedia is receiving, Expedia declined to offer us any kind of upgrade or to mark us as SFU (suitable for upgrade). Since they didn't offer - I asked, and was rejected. I am grateful to finally be heading in the right direction, but not only did Expedia horribly botch this job from the very beginning, they followed that botch job with near-zero customer service, followed by a verbally apologetic but otherwise half-hearted resolution. If this works out favorably for us, great. If not - I'm not done making noise, Expedia. You owe us, and I expect you to make it right. You haven't quite done that yet.

Thanks - Thank you to Twitter. Thanks to all those who sympathized with us and helped us get the attention of Expedia, since three people (one of them an airline employee) using Expedia's normal channels of communication for many hours didn't help. Thanks especially to my PowerShell and SharePoint friends, my local friends, and those connectors who encouraged me and spread my story.

5:15 PM - Love Wins - After all this, Lauren and I are exhausted. We both took a short nap, and when we woke up we talked about the last 24 hours. It was a big, amazing, story-filled 24 hours. I said that Expedia won, but Lauren said no. She pointed out how lucky we are. We are in love and married. We have wonderful family and friends. We are both hard-working, successful people who love what we do. We get to go to an amazing exotic destination for our honeymoon like Veligandu in The Maldives... That's a lot of good. Expedia didn't win. This was (is) a big loss for Expedia. It is a public blemish for all to see. But Lauren and I did win, big time. Expedia may not have made things right - but things are right for us.

Post in progress... I will relay any further comments (or lack thereof) from Expedia soon, as well as an update on confirmation of their repayment of our lost resort room rates. I'll also post a picture of us on our honeymoon as soon as I can!

    Read the article

  • WPF Blurry Images - Bitmap Class

    - by Luke
    I am using the following sample at http://blogs.msdn.com/dwayneneed/archive/2007/10/05/blurry-bitmaps.aspx within VB.NET. The code is shown below. I am having a problem: when my application loads, the CPU is pegged at 50-70%. I have determined that the problem is with the Bitmap class. The OnLayoutUpdated() method is calling InvalidateVisual() continuously, because some points that should compare as equal do not - for example, Point(0.0, -0.5). Can anyone see any bugs within this code, or know a better implementation for pixel-snapping a Bitmap image so it is not blurry? P.S. The sample code was in C#; however, I believe that it was converted correctly.

    Imports System
    Imports System.Collections.Generic
    Imports System.Windows
    Imports System.Windows.Media
    Imports System.Windows.Media.Imaging

    Class Bitmap
        Inherits FrameworkElement ' Use FrameworkElement instead of UIElement so Data Binding works as expected

        Private _sourceDownloaded As EventHandler
        Private _sourceFailed As EventHandler(Of ExceptionEventArgs)
        Private _pixelOffset As Windows.Point

        Public Sub New()
            _sourceDownloaded = New EventHandler(AddressOf OnSourceDownloaded)
            _sourceFailed = New EventHandler(Of ExceptionEventArgs)(AddressOf OnSourceFailed)
            AddHandler LayoutUpdated, AddressOf OnLayoutUpdated
        End Sub

        Public Shared ReadOnly SourceProperty As DependencyProperty = _
            DependencyProperty.Register("Source", GetType(BitmapSource), GetType(Bitmap), _
                New FrameworkPropertyMetadata(Nothing, _
                    FrameworkPropertyMetadataOptions.AffectsRender Or FrameworkPropertyMetadataOptions.AffectsMeasure, _
                    New PropertyChangedCallback(AddressOf Bitmap.OnSourceChanged)))

        Public Property Source() As BitmapSource
            Get
                Return DirectCast(GetValue(SourceProperty), BitmapSource)
            End Get
            Set(ByVal value As BitmapSource)
                SetValue(SourceProperty, value)
            End Set
        End Property

        Public Shared Function FindParentWindow(ByVal child As DependencyObject) As Window
            Dim parent As DependencyObject = VisualTreeHelper.GetParent(child)
            ' Check if this is the end of the tree
            If parent Is Nothing Then
                Return Nothing
            End If
            Dim parentWindow As Window = TryCast(parent, Window)
            If parentWindow IsNot Nothing Then
                Return parentWindow
            Else
                ' Use recursion until it reaches a Window
                Return FindParentWindow(parent)
            End If
        End Function

        Public Event BitmapFailed As EventHandler(Of ExceptionEventArgs)

        ' Return our measure size to be the size needed to display the bitmap pixels.
        ' Use MeasureOverride instead of MeasureCore so Data Binding works as expected.
        Protected Overloads Overrides Function MeasureOverride(ByVal availableSize As Size) As Size
            Dim measureSize As New Size()
            Dim bitmapSource As BitmapSource = Source
            If bitmapSource IsNot Nothing Then
                Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
                If Me.VisualParent IsNot Nothing Then
                    Dim window As Window = Window.GetWindow(Me.VisualParent)
                    If window IsNot Nothing Then
                        ps = PresentationSource.FromVisual(Window.GetWindow(Me.VisualParent))
                    ElseIf FindParentWindow(Me) IsNot Nothing Then
                        ps = PresentationSource.FromVisual(FindParentWindow(Me))
                    End If
                End If
                If ps IsNot Nothing Then
                    Dim fromDevice As Matrix = ps.CompositionTarget.TransformFromDevice
                    Dim pixelSize As New Vector(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
                    Dim measureSizeV As Vector = fromDevice.Transform(pixelSize)
                    measureSize = New Size(measureSizeV.X, measureSizeV.Y)
                Else
                    measureSize = New Size(bitmapSource.PixelWidth, bitmapSource.PixelHeight)
                End If
            End If
            Return measureSize
        End Function

        Protected Overloads Overrides Sub OnRender(ByVal dc As DrawingContext)
            Dim bitmapSource As BitmapSource = Me.Source
            If bitmapSource IsNot Nothing Then
                _pixelOffset = GetPixelOffset()
                ' Render the bitmap offset by the needed amount to align to pixels.
                dc.DrawImage(bitmapSource, New Rect(_pixelOffset, DesiredSize))
            End If
        End Sub

        Private Shared Sub OnSourceChanged(ByVal d As DependencyObject, ByVal e As DependencyPropertyChangedEventArgs)
            Dim bitmap As Bitmap = DirectCast(d, Bitmap)
            Dim oldValue As BitmapSource = DirectCast(e.OldValue, BitmapSource)
            Dim newValue As BitmapSource = DirectCast(e.NewValue, BitmapSource)
            If ((oldValue IsNot Nothing) AndAlso (bitmap._sourceDownloaded IsNot Nothing)) AndAlso _
               (Not oldValue.IsFrozen AndAlso (TypeOf oldValue Is BitmapSource)) Then
                RemoveHandler DirectCast(oldValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
                RemoveHandler DirectCast(oldValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
                ' ((BitmapSource)newValue).DecodeFailed -= bitmap._sourceFailed; // 3.5
            End If
            If ((newValue IsNot Nothing) AndAlso (TypeOf newValue Is BitmapSource)) AndAlso Not newValue.IsFrozen Then
                AddHandler DirectCast(newValue, BitmapSource).DownloadCompleted, bitmap._sourceDownloaded
                AddHandler DirectCast(newValue, BitmapSource).DownloadFailed, bitmap._sourceFailed
                ' ((BitmapSource)newValue).DecodeFailed += bitmap._sourceFailed; // 3.5
            End If
        End Sub

        Private Sub OnSourceDownloaded(ByVal sender As Object, ByVal e As EventArgs)
            InvalidateMeasure()
            InvalidateVisual()
        End Sub

        Private Sub OnSourceFailed(ByVal sender As Object, ByVal e As ExceptionEventArgs)
            Source = Nothing ' setting a local value seems sketchy...
            RaiseEvent BitmapFailed(Me, e)
        End Sub

        Private Sub OnLayoutUpdated(ByVal sender As Object, ByVal e As EventArgs)
            ' This event just means that layout happened somewhere. However, this is
            ' what we need since layout anywhere could affect our pixel positioning.
            Dim pixelOffset As Windows.Point = GetPixelOffset()
            If Not AreClose(pixelOffset, _pixelOffset) Then
                InvalidateVisual()
            End If
        End Sub

        ' Gets the matrix that will convert a Windows.Point from "above" the
        ' coordinate space of a visual into the coordinate space "below" the visual.
        Private Function GetVisualTransform(ByVal v As Visual) As Matrix
            If v IsNot Nothing Then
                Dim m As Matrix = Matrix.Identity
                Dim transform As Transform = VisualTreeHelper.GetTransform(v)
                If transform IsNot Nothing Then
                    Dim cm As Matrix = transform.Value
                    m = Matrix.Multiply(m, cm)
                End If
                Dim offset As Vector = VisualTreeHelper.GetOffset(v)
                m.Translate(offset.X, offset.Y)
                Return m
            End If
            Return Matrix.Identity
        End Function

        Private Function TryApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean, ByVal throwOnError As Boolean, ByRef success As Boolean) As Windows.Point
            success = True
            If v IsNot Nothing Then
                Dim visualTransform As Matrix = GetVisualTransform(v)
                If inverse Then
                    If Not throwOnError AndAlso Not visualTransform.HasInverse Then
                        success = False
                        Return New Windows.Point(0, 0)
                    End If
                    visualTransform.Invert()
                End If
                Point = visualTransform.Transform(Point)
            End If
            Return Point
        End Function

        Private Function ApplyVisualTransform(ByVal Point As Windows.Point, ByVal v As Visual, ByVal inverse As Boolean) As Windows.Point
            Dim success As Boolean = True
            Return TryApplyVisualTransform(Point, v, inverse, True, success)
        End Function

        Private Function GetPixelOffset() As Windows.Point
            Dim pixelOffset As New Windows.Point()
            Dim ps As PresentationSource = PresentationSource.FromVisual(Me)
            If ps IsNot Nothing Then
                Dim rootVisual As Visual = ps.RootVisual
                ' Transform (0,0) from this element up to pixels.
                pixelOffset = Me.TransformToAncestor(rootVisual).Transform(pixelOffset)
                pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, False)
                pixelOffset = ps.CompositionTarget.TransformToDevice.Transform(pixelOffset)
                ' Round the origin to the nearest whole pixel.
                pixelOffset.X = Math.Round(pixelOffset.X)
                pixelOffset.Y = Math.Round(pixelOffset.Y)
                ' Transform the whole-pixel back to this element.
                pixelOffset = ps.CompositionTarget.TransformFromDevice.Transform(pixelOffset)
                pixelOffset = ApplyVisualTransform(pixelOffset, rootVisual, True)
                pixelOffset = rootVisual.TransformToDescendant(Me).Transform(pixelOffset)
            End If
            Return pixelOffset
        End Function

        Private Function AreClose(ByVal Point1 As Windows.Point, ByVal Point2 As Windows.Point) As Boolean
            Return AreClose(Point1.X, Point2.X) AndAlso AreClose(Point1.Y, Point2.Y)
        End Function

        Private Function AreClose(ByVal value1 As Double, ByVal value2 As Double) As Boolean
            If value1 = value2 Then
                Return True
            End If
            Dim delta As Double = value1 - value2
            Return ((delta < 0.00000153) AndAlso (delta > -0.00000153))
        End Function
    End Class

    Read the article

  • Ruby on Rails using MySQL database

    - by Joseph Misiti
    Hey guys, new to Rails, trying to figure out something simple. It seems I cannot migrate a very simple MySQL database using the "rake db:migrate" command. Here is the issue: I know Rails defaults to SQLite right now, but I need to use MySQL for a series of reasons. I use the following commands:

    rails -d mysql MyMoviesSQL
    cd MyMoviesSQL
    script/generate scaffold Movies title:string rating:integer
    rake db:migrate

    I never get past here, because I see the following error:

    (in /Users/user/websites/MyMovieSQL)
    rake aborted!
    NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'
    (See full trace by running task with --trace)

    Using --trace:

    XXXXX-macbook-pro:MyMovieSQL user$ rake db:migrate --trace
    (in /Users/user/websites/MyMovieSQL)
    ** Invoke db:migrate (first_time)
    ** Invoke environment (first_time)
    ** Execute environment
    ** Execute db:migrate
    rake aborted!
    NoMethodError: undefined method `ord' for 0:Fixnum: SET NAMES 'utf8'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract_adapter.rb:219:in `log'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:323:in `execute'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:599:in `configure_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:594:in `connect'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:203:in `initialize'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `new'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/mysql_adapter.rb:75:in `mysql_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `send'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:223:in `new_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:245:in `checkout_new_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:188:in `checkout'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `loop'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:184:in `checkout'
    /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:183:in `checkout'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:98:in `connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_pool.rb:326:in `retrieve_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:123:in `retrieve_connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/connection_adapters/abstract/connection_specification.rb:115:in `connection'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:435:in `initialize'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `new'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:400:in `up'
    /Library/Ruby/Gems/1.8/gems/activerecord-2.3.5/lib/active_record/migration.rb:383:in `migrate'
    /Library/Ruby/Gems/1.8/gems/rails-2.3.5/lib/tasks/databases.rake:116
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `call'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:636:in `execute'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `each'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:631:in `execute'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:597:in `invoke_with_call_chain'
    /System/Library/Frameworks/Ruby.framework/Versions/1.8/usr/lib/ruby/1.8/monitor.rb:242:in `synchronize'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:590:in `invoke_with_call_chain'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:583:in `invoke'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2051:in `invoke_task'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `each'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2029:in `top_level'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2023:in `top_level'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2001:in `run'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:2068:in `standard_exception_handling'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/lib/rake.rb:1998:in `run'
    /Library/Ruby/Gems/1.8/gems/rake-0.8.7/bin/rake:31
    /usr/bin/rake:19:in `load'
    /usr/bin/rake:19

    I have no clue what is going on. If I need to add a patch because the method does not exist, please tell me which file to add it to. Also, how in the future do I figure out which file I need to patch? (It looks like it's a method on the Fixnum class.) Here is a patch for a problem that looks similar, but it's for a different version of Ruby: http://www.mail-archive.com/[email protected]/msg00250.html

    Versions: Rails 2.3.5, Ruby 1.8.6. "gem list" yields:

    * LOCAL GEMS * actionmailer (2.3.5, 1.3.6) actionpack (2.3.5, 1.13.6) actionwebservice (1.2.6) activerecord (2.3.5, 1.15.6) activeresource (2.3.5) activesupport (2.3.5, 1.4.4) acts_as_ferret (0.4.1) capistrano (2.0.0) cgi_multipart_eof_fix (2.5.0) daemons (1.0.9) dbi (0.4.3) deprecated (2.0.1) dnssd (0.6.0) fastthread (1.0.1) fcgi (0.8.7) ferret (0.11.4) gem_plugin (0.2.3) highline (1.2.9) hpricot (0.6) libxml-ruby (0.9.5, 0.3.8.4) mongrel (1.1.4) needle (1.3.0) net-sftp (1.1.0) net-ssh (1.1.2) rack (1.0.1) rails (2.3.5) rake (0.8.7, 0.7.3) RedCloth (3.0.4) ruby-openid (1.1.4) ruby-yadis (0.3.4) rubygems-update (1.3.6) rubynode (0.1.3) sqlite3-ruby (1.2.1) termios (0.9.4)

    Thanks in advance

    Read the article

  • Import Data from Excel sheet to DB Table through OAF page

    - by PRajkumar
    1. Create a New Workspace and Project
    File > New > General > Workspace Configured for Oracle Applications
    File Name – PrajkumarImportxlsDemo
    Automatically a new OA Project will also be created:
    Project Name – ImportxlsDemo
    Default Package – prajkumar.oracle.apps.fnd.importxlsdemo

    2. Add JAR file jxl-2.6.3.jar to the Apache Library
    Download jxl-2.6.3.jar from the following link –
    http://www.findjar.com/jar/net.sourceforge.jexcelapi/jars/jxl-2.6.jar.html

    Steps to add the jxl.jar file on your local machine:
    Right-click on ImportxlsDemo > Project Properties > Libraries > Add jar/Directory, browse to the directory where jxl-2.6.3.jar was downloaded, and select the JAR file.

    Steps to add the jxl.jar file on the EBS middle tier:
    On your EBS middle tier, copy jxl.jar to $FND_TOP/java/3rdparty/standalone.
    Add $FND_TOP/java/3rdparty/standalone/jxl.jar to the custom classpath in the jserv.properties file, which is at $IAS_ORACLE_HOME/Apache/Jserv/etc:
    wrapper.classpath=/U01/oracle/dev/devappl/fnd/11.5.0/java/3rdparty/stdalone/jxl.jar
    Bounce the Apache Server.

    3. Create a New Application Module (AM)
    Right-click on ImportxlsDemo > New > ADF Business Components > Application Module
    Name – ImportxlsAM
    Package – prajkumar.oracle.apps.fnd.importxlsdemo.server
    Check Application Module Class: ImportxlsAMImpl – Generate Java File(s)

    4. Create the test table into which we will insert data from Excel:

    CREATE TABLE xx_import_excel_data_demo
    (
       -- Data Columns
       column1            VARCHAR2(100),
       column2            VARCHAR2(100),
       column3            VARCHAR2(100),
       column4            VARCHAR2(100),
       column5            VARCHAR2(100),
       -- Who Columns
       last_update_date   DATE    NOT NULL,
       last_updated_by    NUMBER  NOT NULL,
       creation_date      DATE    NOT NULL,
       created_by         NUMBER  NOT NULL,
       last_update_login  NUMBER
    );

    5. Create a New Entity Object (EO)
    Right-click on ImportxlsDemo > New > ADF Business Components > Entity Object
    Name – ImportxlsEO
    Package – prajkumar.oracle.apps.fnd.importxlsdemo.schema.server
    Database Objects – XX_IMPORT_EXCEL_DATA_DEMO
    Note – By default, ROWID will be the primary key if we do not make any column the primary key.
    Check the Accessors, Create Method, Validation Method and Remove Method.

    6. Create a New View Object (VO)
    Right-click on ImportxlsDemo > New > ADF Business Components > View Object
    Name – ImportxlsVO
    Package – prajkumar.oracle.apps.fnd.importxlsdemo.server
    In Step 2, on the Entity page, select ImportxlsEO and shuttle it to the selected list.
    In Step 3, in the Attributes window, select all columns and shuttle them to the selected list.
    On the Java page, uncheck Generate Java File for View Object Class: ImportxlsVOImpl.
    Select Generate Java File for View Row Class: ImportxlsVORowImpl -> Generate Java File -> Accessors.

    7. Add Your View Object to the Root UI Application Module
    Right-click on ImportxlsAM > Edit ImportxlsAM > Data Model > select ImportxlsVO and shuttle it to the Data Model list.

    8. Create a New Page
    Right-click on ImportxlsDemo > New > Web Tier > OA Components > Page
    Name – ImportxlsPG
    Package – prajkumar.oracle.apps.fnd.importxlsdemo.webui

    9. Select ImportxlsPG and go to the Structure pane, where a default region has been created.

    10. Select region1 and set the following properties:
    ID – PageLayoutRN
    AM Definition – prajkumar.oracle.apps.fnd.importxlsdemo.server.ImportxlsAM
    Window Title – Import Data From Excel through OAF Page Demo

    11. Create a messageComponentLayout Region under the Page Layout Region
    Right-click PageLayoutRN > New > Region, and set:
    ID – MainRN
    Item Style – messageComponentLayout

    12. Create a New Item messageFileUpload Bean under MainRN
    Right-click on MainRN > New > messageFileUpload, and set the following properties for the new item:
    ID – MessageFileUpload
    Item Style – messageFileUpload

    13. Create a New Item Submit Button Bean under MainRN
    Right-click on MainRN > New > messageLayout, and set:
    ID – ButtonLayout
    Right-click on ButtonLayout > New > Item, and set:
    ID – Go
    Item Style – submitButton
    Attribute Set – /oracle/apps/fnd/attributesets/Buttons/Go

    14. Create a Controller for page ImportxlsPG
    Right-click on PageLayoutRN > Set New Controller
    Package Name: prajkumar.oracle.apps.fnd.importxlsdemo.webui
    Class Name: ImportxlsCO

    Write the following code in ImportxlsCO's processFormRequest method:

    import java.io.Serializable;
    import oracle.apps.fnd.framework.OAApplicationModule;
    import oracle.apps.fnd.framework.OAException;
    import oracle.apps.fnd.framework.webui.OAControllerImpl;
    import oracle.apps.fnd.framework.webui.OAPageContext;
    import oracle.apps.fnd.framework.webui.beans.OAWebBean;
    import oracle.cabo.ui.data.DataObject;
    import oracle.jbo.domain.BlobDomain;

    public void processFormRequest(OAPageContext pageContext, OAWebBean webBean)
    {
        super.processFormRequest(pageContext, webBean);
        if (pageContext.getParameter("Go") != null)
        {
            DataObject fileUploadData = (DataObject)pageContext.getNamedDataObject("MessageFileUpload");
            String fileName = null;
            try
            {
                fileName = (String)fileUploadData.selectValue(null, "UPLOAD_FILE_NAME");
            }
            catch (NullPointerException ex)
            {
                throw new OAException("Please Select a File to Upload", OAException.ERROR);
            }
            BlobDomain uploadedByteStream = (BlobDomain)fileUploadData.selectValue(null, fileName);
            try
            {
                OAApplicationModule oaapplicationmodule = pageContext.getRootApplicationModule();
                Serializable aserializable2[] = { uploadedByteStream };
                Class aclass2[] = { BlobDomain.class };
                oaapplicationmodule.invokeMethod("ReadExcel", aserializable2, aclass2);
            }
            catch (Exception ex)
            {
                throw new OAException(ex.toString(), OAException.ERROR);
            }
        }
    }

    Write the following code in ImportxlsAMImpl.java:

    import java.io.IOException;
    import java.io.InputStream;
    import jxl.Cell;
    import jxl.CellType;
    import jxl.Sheet;
    import jxl.Workbook;
    import jxl.read.biff.BiffException;
    import oracle.apps.fnd.framework.OAViewObject;
    import oracle.apps.fnd.framework.server.OAApplicationModuleImpl;
    import oracle.jbo.Row;
    import oracle.jbo.domain.BlobDomain;

    public void createRecord(String[] excel_data)
    {
        OAViewObject vo = (OAViewObject)getImportxlsVO1();
        if (!vo.isPreparedForExecution())
        {
            vo.executeQuery();
        }
        Row row = vo.createRow();
        try
        {
            for (int i = 0; i < excel_data.length; i++)
            {
                row.setAttribute("Column" + (i + 1), excel_data[i]);
            }
        }
        catch (Exception e)
        {
            System.out.println(e.getMessage());
        }
        vo.insertRow(row);
        getTransaction().commit();
    }

    public void ReadExcel(BlobDomain fileData) throws IOException
    {
        String[] excel_data = new String[5];
        InputStream inputWorkbook = fileData.getInputStream();
        Workbook w;
        try
        {
            w = Workbook.getWorkbook(inputWorkbook);
            // Get the first sheet
            Sheet sheet = w.getSheet(0);
            for (int i = 0; i < sheet.getRows(); i++)
            {
                for (int j = 0; j < sheet.getColumns(); j++)
                {
                    Cell cell = sheet.getCell(j, i);
                    CellType type = cell.getType();
                    if (cell.getType() == CellType.LABEL)
                    {
                        System.out.println("I got a label " + cell.getContents());
                        excel_data[j] = cell.getContents();
                    }
                    if (cell.getType() == CellType.NUMBER)
                    {
                        System.out.println("I got a number " + cell.getContents());
                        excel_data[j] = cell.getContents();
                    }
                }
                createRecord(excel_data);
            }
        }
        catch (BiffException e)
        {
            e.printStackTrace();
        }
    }

    15. Congratulations, you have successfully finished. Run your page and test your work: take an Excel file such as PRAJ_TEST.xls with sample data in the first five columns, upload it from the page, and verify that the rows appear in the XX_IMPORT_EXCEL_DATA_DEMO table.

    Read the article

  • Announcing the release of the Windows Azure SDK 2.1 for .NET

    - by ScottGu
    Today we released the v2.1 update of the Windows Azure SDK for .NET. This is a major refresh of the Windows Azure SDK and it includes some great new features and enhancements. These new capabilities include:

    - Visual Studio 2013 Preview Support: The Windows Azure SDK now supports using the new VS 2013 Preview
    - Visual Studio 2013 VM Image: Windows Azure now has a built-in VM image that you can use to host and develop with VS 2013 in the cloud
    - Visual Studio Server Explorer Enhancements: Redesigned with improved filtering and auto-loading of subscription resources
    - Virtual Machines: Start and stop VMs w/ suspend billing directly from within Visual Studio
    - Cloud Services: New Emulator Express option with reduced footprint and Run as Normal User support
    - Service Bus: New high availability options, Notification Hub support, improved VS tooling
    - PowerShell Automation: Lots of new PowerShell commands for automating Web Sites, Cloud Services, VMs and more

    All of these SDK enhancements are now available to start using immediately, and you can download the SDK from the Windows Azure .NET Developer Center. Visual Studio's Team Foundation Service (http://tfs.visualstudio.com/) has also been updated to support today's SDK 2.1 release, and the SDK 2.1 features can now be used with it (including with automated builds + tests). Below are more details on the new features and capabilities released today.

    Visual Studio 2013 Preview Support

    Today's Windows Azure SDK 2.1 release adds support for the recent Visual Studio 2013 Preview. The 2.1 SDK also works with Visual Studio 2010 and Visual Studio 2012, and works side by side with the previous Windows Azure SDK 1.8 and 2.0 releases. To install the Windows Azure SDK 2.1 on your local computer, choose the "install the SDK" link from the Windows Azure .NET Developer Center. Then, choose which version of Visual Studio you want to use it with; clicking the third link will install the SDK with the latest VS 2013 Preview. If you don't already have the Visual Studio 2013 Preview installed on your machine, this will also install Visual Studio Express 2013 Preview for Web.

    Visual Studio 2013 VM Image Hosted in the Cloud

    One of the requests we've heard from several customers has been to have the ability to host Visual Studio within the cloud (avoiding the need to install anything locally on your computer). With today's SDK update we've added a new VM image to the Windows Azure VM Gallery that has Visual Studio Ultimate 2013 Preview, SharePoint 2013, SQL Server 2012 Express and the Windows Azure 2.1 SDK already installed on it. This provides a really easy way to create a development environment in the cloud with the latest tools. With the shutdown and suspend billing feature we shipped on Windows Azure last month, you can spin up the image only when you want to do active development, and then shut down the virtual machine and not have to worry about usage charges while the virtual machine is not in use. You can create your own VS image in the cloud by using the New -> Compute -> Virtual Machine -> From Gallery menu within the Windows Azure Management Portal, and then by selecting the "Visual Studio Ultimate 2013 Preview" template.
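    If you prefer to script the VM creation rather than use the portal, a minimal Windows Azure PowerShell sketch might look like the following. The image, service, VM, and credential values here are placeholders - look up the actual gallery image name with Get-AzureVMImage first:

        # Find the VS 2013 Preview image in the gallery (label filter is a guess)
        Get-AzureVMImage |
            Where-Object { $_.Label -like "*Visual Studio Ultimate 2013*" } |
            Select-Object ImageName, Label

        # Create a VM from that gallery image (service/VM/credential values are hypothetical)
        New-AzureQuickVM -Windows `
            -ServiceName "mydevsvc" `
            -Name "vs2013dev" `
            -ImageName "<image-name-from-gallery>" `
            -AdminUsername "devadmin" `
            -Password "<strong-password>" `
            -Location "East US"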
    Visual Studio Server Explorer: Improved Filtering/Management of Subscription Resources

    With the Windows Azure SDK 2.1 release you'll notice significant improvements in the Visual Studio Server Explorer. The explorer has been redesigned so that all Windows Azure services are now contained under a single Windows Azure node. From the top-level node you can now manage your Windows Azure credentials, import a subscription file, or filter Server Explorer to only show services from particular subscriptions or regions. (Note: The Web Sites and Mobile Services nodes will appear outside the Windows Azure node until the final release of VS 2013. If you have installed the ASP.NET and Web Tools Preview Refresh, though, the Web Sites node will appear inside the Windows Azure node even with the VS 2013 Preview.) Once your subscription information is added, Windows Azure services from all your subscriptions are automatically enumerated in the Server Explorer. You no longer need to manually add services to Server Explorer individually. This provides a convenient way of viewing all of your cloud services, storage accounts, service bus namespaces, virtual machines, and web sites from one location.
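    If you also want those subscriptions available from Windows Azure PowerShell, the same credential setup can be scripted; a minimal sketch (the file path and subscription name below are placeholders):

        # Download a .publishsettings file for your account (opens a browser)
        Get-AzurePublishSettingsFile

        # Import it, then pick the subscription to work with
        Import-AzurePublishSettingsFile "C:\azure\mycreds.publishsettings"
        Select-AzureSubscription -SubscriptionName "Pay-As-You-Go"

        # List the subscriptions PowerShell now knows about
        Get-AzureSubscription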
    Subscription and Region Filtering Support

    Using the Windows Azure node in Server Explorer, you can also now filter your Windows Azure services in the Server Explorer by the subscription or region they are in. If you have multiple subscriptions but need to focus your attention on just a few of them for some period of time, this is a handy way to hide the services from other subscriptions until they become relevant. You can do the same sort of filtering by region. To enable this, just select "Filter Services" from the context menu on the Windows Azure node, then choose the subscriptions and/or regions you want to filter by - for example, only services from a pay-as-you-go subscription within the East US region. Visual Studio will then automatically filter the items that show up in the Server Explorer appropriately. With storage accounts and service bus namespaces, you sometimes need to work with services outside your subscription. To accommodate that scenario, those services allow you to attach an external account (from the context menu). You'll notice that external accounts have a slightly different icon in Server Explorer to indicate they are from outside your subscription.

    Other Improvements

    We've also improved the Server Explorer by adding additional properties and actions to the services exposed. You now have access to most of the properties on a cloud service, deployment slot, role or role instance, as well as the properties on storage accounts, virtual machines and web sites. Just select the object of interest in Server Explorer and view the properties in the property pane. We also now have full support for creating/deleting/updating storage tables, blobs and queues directly within Server Explorer. Simply right-click on the appropriate storage account node and you can create them directly within Visual Studio.

    Virtual Machines: Start/Stop within Visual Studio

    Virtual Machines now have context menu actions that allow you to start, shut down, restart and delete a Virtual Machine directly within the Visual Studio Server Explorer. The shutdown action enables you to shut down the virtual machine and suspend billing when the VM is not in use, and easily restart it when you need it. This is especially useful in Dev/Test scenarios where you can start a VM - such as a SQL Server - during your development session and then shut it down / suspend billing when you are not developing (and no longer be billed for it). You can also now directly remote desktop into VMs using the "Connect using Remote Desktop" context menu command in VS Server Explorer.

    Cloud Services: Emulator Express with Run as Normal User Support

    You can now launch Visual Studio and run your cloud services locally as a Normal User (without having to elevate to an administrator account) using a new Emulator Express option included as a preview feature with this SDK release. Emulator Express is a version of the Windows Azure Compute Emulator that runs in a restricted mode - one instance per role - and it doesn't require administrative permissions and uses 40% less resources than the full Windows Azure Emulator. Emulator Express supports both web and worker roles. To run your application locally using the Emulator Express option, simply change the following settings in the Windows Azure project:

    - On the shortcut menu for the Windows Azure project, choose Properties, and then choose the Web tab.
    - Check the setting for IIS (Internet Information Services). Make sure that the option is set to IIS Express, not the full version of IIS. Emulator Express is not compatible with full IIS.
    - On the Web tab, choose the option for Emulator Express.

    Service Bus: Notification Hubs

    With the Windows Azure SDK 2.1 release we are adding support for Windows Azure Notification Hubs as part of our official Windows Azure SDK, inside of Microsoft.ServiceBus.dll (previously the Notification Hub functionality was in a preview assembly). You are now able to create, update and delete Notification Hubs programmatically, manage your device registrations, and send push notifications to all your mobile clients across all platforms (Windows Store, Windows Phone 8, iOS, and Android). Learn more about Notification Hubs on MSDN here, or watch the Notification Hubs //BUILD/ presentation here.

    Service Bus: Paired Namespaces

    One of the new features included with today's Windows Azure SDK 2.1 release is support for Service Bus "Paired Namespaces". Paired Namespaces enable you to better handle situations where a Service Bus service namespace becomes unavailable (for example, due to connectivity issues or an outage) and you are unable to send or receive messages to the namespace hosting the queue, topic, or subscription. Previously, to handle this scenario you had to manually set up separate namespaces that could act as a backup, then implement manual failover and retry logic which was sometimes tricky to get right. Service Bus now supports Paired Namespaces, which enables you to connect two namespaces together. When you activate the secondary namespace, messages are stored in the secondary queue for delivery to the primary queue at a later time. If the primary container (namespace) becomes unavailable for some reason, automatic failover enables the messages in the secondary queue. For detailed information about paired namespaces and high availability, see the new topic Asynchronous Messaging Patterns and High Availability.

    Service Bus: Tooling Improvements

    In this release, the Windows Azure Tools for Visual Studio contain several enhancements and changes to the management of Service Bus messaging entities using Visual Studio's Server Explorer. The most noticeable change is that the Service Bus node is now integrated into the Windows Azure node, and supports integrated subscription management. Additionally, there has been a change to the code generated by the Windows Azure Worker Role with Service Bus Queue project template. This code now uses an event-driven "message pump" programming model using the QueueClient.OnMessage method.

    PowerShell: Tons of New Automation Commands

    Since my last blog post on the previous Windows Azure SDK 2.0 release, we've updated Windows Azure PowerShell (which is a separate download) five times. You can find the full change log here. We've added new cmdlets in the following areas:

    - China instance and Windows Azure Pack support
    - Environment Configuration
    - VMs
    - Cloud Services
    - Web Sites
    - Storage
    - SQL Azure
    - Service Bus

    China Instance and Windows Azure Pack

    We now support the following cmdlets for the China instance and Windows Azure Pack, respectively:

    - China Instance: Web Sites, Service Bus, Storage, Cloud Service, VMs, Network
    - Windows Azure Pack: Web Sites, Service Bus

    We will have full cmdlet support for these two Windows Azure environments in PowerShell in the near future.

    Virtual Machines: Stop/Start Virtual Machines

    Similar to the Start/Stop VM capability in VS Server Explorer, you can now stop your VM and suspend billing, as shown in the sketch below.
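    A minimal sketch (the service and VM names are placeholders):

        # Stop the VM and deallocate it so that billing is suspended
        # (add -Force to skip the confirmation prompt when stopping the last VM in a deployment)
        Stop-AzureVM -ServiceName "mydevsvc" -Name "vs2013dev"

        # Start it again when you need it
        Start-AzureVM -ServiceName "mydevsvc" -Name "vs2013dev"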
    If you want to keep the original behavior of keeping your stopped VM provisioned, you can pass in the -StayProvisioned switch parameter.

    Virtual Machines: VM Endpoint ACLs

    We've added and updated a bunch of cmdlets for you to configure fine-grained network ACLs on your VM endpoints. You can use the following cmdlets to create ACL configs and apply them to a VM endpoint:

    - New-AzureAclConfig
    - Get-AzureAclConfig
    - Set-AzureAclConfig
    - Remove-AzureAclConfig
    - Add-AzureEndpoint -ACL
    - Set-AzureEndpoint -ACL

    The example below shows how to add an ACL rule to an existing endpoint of a VM.
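    A sketch of the pattern, assuming the VM from the earlier sketch and an existing endpoint named "web" (the names and the remote subnet are placeholders):

        # Build an ACL that permits traffic only from one remote subnet
        $acl = New-AzureAclConfig
        Set-AzureAclConfig -AddRule -ACL $acl -Order 0 -Action Permit `
            -RemoteSubnet "203.0.113.0/24" -Description "Office network only"

        # Apply the ACL to the VM's existing "web" endpoint
        Get-AzureVM -ServiceName "mydevsvc" -Name "vs2013dev" |
            Set-AzureEndpoint -Name "web" -Protocol tcp -PublicPort 80 -LocalPort 80 -ACL $acl |
            Update-AzureVM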
    Other improvements for Virtual Machine management include:

    - Added -NoWinRMEndpoint parameter to New-AzureQuickVM and Add-AzureProvisioningConfig to disable Windows Remote Management
    - Added -DirectServerReturn parameter to Add-AzureEndpoint and Set-AzureEndpoint to enable/disable direct server return
    - Added Set-AzureLoadBalancedEndpoint cmdlet to modify load balanced endpoints

    Cloud Services: Remote Desktop and Diagnostics

    Remote Desktop and Diagnostics are popular debugging options for Cloud Services. We've introduced cmdlets to help you configure these two Cloud Service extensions from Windows Azure PowerShell.

    Windows Azure Cloud Services Remote Desktop extension:
    - New-AzureServiceRemoteDesktopExtensionConfig
    - Get-AzureServiceRemoteDesktopExtension
    - Set-AzureServiceRemoteDesktopExtension
    - Remove-AzureServiceRemoteDesktopExtension

    Windows Azure Cloud Services Diagnostics extension:
    - New-AzureServiceDiagnosticsExtensionConfig
    - Get-AzureServiceDiagnosticsExtension
    - Set-AzureServiceDiagnosticsExtension
    - Remove-AzureServiceDiagnosticsExtension

    Set-AzureServiceRemoteDesktopExtension, for example, enables Remote Desktop for a Cloud Service.

    Web Sites: Diagnostics

    With our last SDK update, we introduced the Get-AzureWebsiteLog -Tail cmdlet to get the log streaming of your Web Sites. Recently, we've also added cmdlets to configure Web Site application diagnostics:

    - Enable-AzureWebsiteApplicationDiagnostic
    - Disable-AzureWebsiteApplicationDiagnostic

    Enable-AzureWebsiteApplicationDiagnostic can write application diagnostics either to the file system or to a Windows Azure Storage Table.

    SQL Database

    Previously, you had to know the SQL Database server admin username and password if you wanted to manage the databases on that SQL Database server. Recently, we've made the experience much easier by not requiring the admin credential if the database server is in your subscription. So you can simply specify the -ServerName parameter to tell Windows Azure PowerShell which server you want to use for the following cmdlets:

    - Get-AzureSqlDatabase
    - New-AzureSqlDatabase
    - Remove-AzureSqlDatabase
    - Set-AzureSqlDatabase

    We've also added an -AllowAllAzureServices parameter to New-AzureSqlDatabaseServerFirewallRule so that you can easily add a firewall rule to whitelist all Windows Azure IP addresses. Besides the above experience improvements, we've also added cmdlets to get the database server quota and set the database service objective. Check out the following cmdlets for details:

    - Get-AzureSqlDatabaseServerQuota
    - Get-AzureSqlDatabaseServiceObjective
    - Set-AzureSqlDatabase -ServiceObjective

    Storage and Service Bus

    Other new cmdlets include:

    - Storage: CRUD cmdlets for Azure Tables and Queues
    - Service Bus: Cmdlets for managing authorization rules on your Service Bus Namespace, Queue, Topic, Relay and NotificationHub

    Summary

    Today's release includes a bunch of great features that enable you to build even better cloud solutions. All the above features/enhancements are shipped and available to use immediately as part of the 2.1 release of the Windows Azure SDK for .NET. If you don't already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today. Then visit the Windows Azure Developer Center to learn more about how to build apps with it.

    Hope this helps,

    Scott

    P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article
