Search Results

Search found 72218 results on 2889 pages for 'multiple definition error'.

Page 810/2889

  • Migrate Data and Schema from MySQL to SQL Server

    - by colithium
    Are there any free solutions for automatically migrating a database from MySQL to SQL Server that "just work"? I've been attempting this simple (at least I thought so) task all day now. I've tried SQL Server Management Studio's Import Data feature:
    1. Create an empty database
    2. Tasks - Import Data...
    3. .NET Framework Data Provider for Odbc
    4. Valid DSN (verified it connects)
    5. Copy data from one or more tables or views
    6. Check 1 VERY simple table
    7. Click Preview
    8. Get Error: The preview data could not be retrieved. ADDITIONAL INFORMATION: ERROR [42000] [MySQL][ODBC 5.1 Driver][mysqld-5.1.45-community]You have an error in your SQL syntax; check the manual that corresponds to your MySQL server version for the right syntax to use near '"table_name"' at line 1 (myodbc5.dll)
    A similar error occurs if I go through the rest of the wizard and perform the operation. The failed step is "Setting Source Connection"; the error refers to retrieving column information and then lists the error above. It can retrieve column information just fine when I modify column mappings, so I really don't know what the issue is. I've also tried getting various MySQL tools to output DDL statements that SQL Server understands, but haven't succeeded. I've tried with MySQL v5.1.11 to SQL Server 2005 and with MySQL v5.1.45 to SQL Server 2008 (with ODBC drivers 3.51.27.00 and 5.01.06.00 respectively).
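    The error points at the double-quoted identifier '"table_name"': the import wizard appears to quote identifiers ANSI-style, which MySQL only parses when the ANSI_QUOTES SQL mode is enabled. A hedged workaround sketch (the SQL mode change and the DSN "Initial Statement" option are suggestions, not something from the original post):

        -- Let MySQL accept "table_name"-style identifiers for all connections
        SET GLOBAL sql_mode = 'ANSI_QUOTES';
        -- or put the equivalent per-session statement in the MyODBC DSN's
        -- "Initial Statement" field so it runs when the wizard connects:
        SET SESSION sql_mode = 'ANSI_QUOTES';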

    Read the article

  • UIAlertView Will not show

    - by John
    I have a program that requests a JSON string. I have created a class that contains the connect method below. When the root view is coming up, it makes a request to this class and method to load up some data for the root view. When I test the error code path (by changing the URL host to 127.0.0.1), I expect the alert to show. Instead the root view just goes black, and the app aborts with no alert. No errors in debug mode on the console, either. Any thoughts as to this behavior? I've been looking around for hints for hours to no avail. Thanks in advance for your help. Note: the conditional for (error) is reached, as is the UIAlertView code.

        - (NSString *)connect:(NSString *)urlString {
            NSString *jsonString;
            UIApplication *app = [UIApplication sharedApplication];
            app.networkActivityIndicatorVisible = YES;
            NSError *error = nil;
            NSURLResponse *response = nil;
            NSURL *url = [[NSURL alloc] initWithString:urlString];
            NSURLRequest *req = [NSURLRequest requestWithURL:url
                                                  cachePolicy:NSURLRequestReloadIgnoringCacheData
                                              timeoutInterval:10];
            NSData *_response = [NSURLConnection sendSynchronousRequest:req
                                                       returningResponse:&response
                                                                   error:&error];
            if (error) {
                /* inform the user that the connection failed */
                //AlertWithMessage(@"Connection Failed!", message);
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!"
                                                                message:@"Unable to connect! Try later, thanks."
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            } else {
                jsonString = [[[NSString alloc] initWithData:_response
                                                    encoding:NSUTF8StringEncoding] autorelease];
            }
            app.networkActivityIndicatorVisible = NO;
            [url release];
            return jsonString;
        }
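    Two hedged guesses, neither confirmed by the original post: jsonString is returned uninitialized on the error path (which can crash the caller before the alert ever renders), and if connect: runs off the main thread the UIAlertView must be shown on the main queue. A minimal sketch of both changes:

        NSString *jsonString = nil;   // avoid returning garbage when the request fails
        ...
        if (error) {
            // UIKit work belongs on the main thread
            dispatch_async(dispatch_get_main_queue(), ^{
                UIAlertView *alert = [[UIAlertView alloc] initWithTitle:@"Oopsie!"
                                                                message:@"Unable to connect! Try later, thanks."
                                                               delegate:nil
                                                      cancelButtonTitle:@"OK"
                                                      otherButtonTitles:nil];
                [alert show];
                [alert release];
            });
        }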

    Read the article

  • Ant: simplest way to copy over a relative list of files

    - by Derek Illchuk
    Using Ant, I want to copy a list of files from one project to another, where each project has the same directory structure. Is there a way to get the following to work?

        <project name="WordSlug" default="pull" basedir=".">
            <description>
                WordSlug: pull needed files
            </description>
            <property name="prontiso_home" location="../../prontiso/trunk"/>
            <!-- I know this doesn't work, what's the missing piece? -->
            <target name="pull" description="Pull needed files">
                <copy todir="." overwrite="true">
                    <resources>
                        <file file="${prontiso_home}/application/views/scripts/error/error.phtml"/>
                        <file file="${prontiso_home}/application/controllers/CacheController.php"/>
                        <!-- etc. -->
                    </resources>
                </copy>
            </target>
        </project>

    Success is deriving the paths automatically:
    ${prontiso_home}/application/views/scripts/error/error.phtml copied to ./application/views/scripts/error/error.phtml
    ${prontiso_home}/application/controllers/CacheController.php copied to ./application/controllers/CacheController.php
    Thanks!
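    One hedged way to get those relative paths derived automatically is to copy a fileset rooted at the other project: <copy> preserves each file's path relative to the fileset's dir attribute. A sketch (not from the original post):

        <target name="pull" description="Pull needed files">
            <copy todir="." overwrite="true">
                <fileset dir="${prontiso_home}">
                    <include name="application/views/scripts/error/error.phtml"/>
                    <include name="application/controllers/CacheController.php"/>
                    <!-- etc. -->
                </fileset>
            </copy>
        </target>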

    Read the article

  • Encode_JSON Errors in Lasso 8.6.2 After Period of Time

    - by ATP_JD
    We are in the process of converting apps from Lasso 8 to Lasso 9 and, as an intermediate step, have upgraded from 8.5.5 to 8.6.2 (which runs alongside 9 on our new box, in different virtual hosts). I am finding that with 8.6.2 we are getting a slew of errors on pages that call encode_json. The weird thing with these errors is that they don't start happening until some period of time after the site starts. Then, some hours later, all encode_json calls begin to fail with error messages like this:

        An error occurred while processing your request.
        Error Information
        Error Message: No tag, type or constant was defined under the name "?????????????????"
        with arguments: array: (pair: (-find)=([\x{0020}-\x{21}\x{23}-\x{5b}\x{5d}-\x{10fff}])), (r)
        at: onCompare with params: 'r'
        at: JSON with params: 'reload', -Options=array: (-Internal)
        at: JSON with params: @map: (reload)=(false), (tcstring)=(LZU), (timestring)=(10:42 AM   1442Z)
        at: [...].lasso with params: 'pageloadtime'='1383038310' on line: 31 at position: 1
        Error Code: -9948

    (Yes, those Chinese(?) characters are in the error message.) I have removed the 8.5.5 encode_json tag from LassoStartup, so we are using the correct built-in method. The encode_json method fails for any and all parameters I throw at it, from simple strings to arrays of maps. Upon restarting the site, encode_json resumes working for an hour or two, seemingly depending on load. On 8.5.5 we don't have this problem. Does anyone have experience with this issue? Any advice on trying the 8.5.5 tag-swap encode_json to see if I can override the built-in method? Maybe it will work better? Thanks in advance for your time and assistance. -Justin

    Read the article

  • ASP.NET 2.0 in Virtual Trying to Use SQL State Server

    - by user251660
    We have IIS 6 running on a Windows Server 2003 machine. The root web site is running a v1.1 site. Under this site we have a virtual directory running a v2.0 site (with a separate application pool). The web.config for the root site uses SQL as its state server and has a 1.1 SQL state server database installed. The 2.0 virtual directory does not need state, and its web.config has no reference to a state server. When we attempt to call the virtual directory we receive this error message: "Unable to use SQL Server because ASP.NET version 2.0 Session State is not installed on the SQL server. Please install ASP.NET Session State SQL Server version 2.0 or above." This issue is currently only occurring on one web server; the rest are able to run the 2.0 virtual application. I also notice that if we call the 2.0 virtual directory with the IP address it does not generate the error, but if we call it with the host header name it does (this behavior is only on the one web server with the error; all the others can be called with either the IP or the host header without error). As an additional note, the root and virtual are running with SSL. My theory is that the virtual 2.0 application is inheriting the 1.1 web.config state server entry from the root, and when it looks at the state server it sees it as a 1.1 version and reports the error that it needs a 2.0 state server. I cannot, however, understand why the other servers are not behaving in this manner. All of the servers are on the same OS service pack as well as the same version of the .NET Framework. Any ideas? Thanks

    Read the article

  • GUnload is null or undefined using Directions Service

    - by user1677756
    I'm getting an error using Google Maps API V3 that I don't understand. My initial map displays just fine, but when I try to get directions, I get the following two errors: Error: The value of the property 'GUnload' is null or undefined, not a Function object Error: Unable to get value of the property 'setDirections': object is null or undefined I'm not using GUnload anywhere, so I don't understand why I'm getting that error. As far as the second error is concerned, it's as if something is wrong with the Directions service. Here is my code: var directionsDisplay; var directionsService = new google.maps.DirectionsService(); var map; function initialize(address) { directionsDisplay = new google.maps.DirectionsRenderer(); var geocoder = new google.maps.Geocoder(); var latlng = new google.maps.LatLng(42.733963, -84.565501); var mapOptions = { center: latlng, zoom: 15, mapTypeId: google.maps.MapTypeId.ROADMAP }; map = new google.maps.Map(document.getElementById("map_canvas"), mapOptions); geocoder.geocode({ 'address': address }, function (results, status) { if (status == google.maps.GeocoderStatus.OK) { map.setCenter(results[0].geometry.location); var marker = new google.maps.Marker({ map: map, position: results[0].geometry.location }); } else { alert("Geocode was not successful for the following reason: " + status); } }); directionsDisplay.setMap(map); } function getDirections(start, end) { var request = { origin:start, destination:end, travelMode: google.maps.TravelMode.DRIVING }; directionsService.route(request, function(result, status) { if (status == google.maps.DirectionsStatus.OK) { directionsDisplay.setDirections(result); } else { alert("Directions cannot be displayed for the following reason: " + status); } }); } I'm not very savvy with javascript, so I could have made some sort of error there. I appreciate any help I can get.

    Read the article

  • C++ NetUserAdd() not working?

    - by Brett Powell
    I posted earlier about how to do this, got some great replies, and have managed to get the code written based on the MSDN example. However, it does not seem to be working properly. It's printing out the ERROR_ACCESS_DENIED message, but I'm not sure why, as I am running it as a full admin. I was initially trying to create a USER_PRIV_ADMIN, but MSDN said it can only use USER_PRIV_USER; sadly, neither works. I'm hoping someone can spot a mistake or has an idea. Thanks!

        void AddRDPUser() {
            USER_INFO_1 ui;
            DWORD dwLevel = 1;
            DWORD dwError = 0;
            NET_API_STATUS nStatus;

            ui.usri1_name = L"DummyUserAccount";
            ui.usri1_password = L"a2cDz3rQpG8";
            // ignored by NetUserAdd
            //ui.usri1_password_age = -1;
            ui.usri1_priv = USER_PRIV_USER; //USER_PRIV_ADMIN;
            ui.usri1_home_dir = NULL;
            ui.usri1_comment = NULL;
            ui.usri1_flags = UF_SCRIPT;
            ui.usri1_script_path = NULL;

            nStatus = NetUserAdd(NULL, dwLevel, (LPBYTE)&ui, &dwError);
            switch (nStatus) {
                case NERR_Success:          Msg("SUCCESS!\n"); break;
                case NERR_InvalidComputer:  fprintf(stderr, "A system error has occurred: NERR_InvalidComputer\n"); break;
                case NERR_NotPrimary:       fprintf(stderr, "A system error has occurred: NERR_NotPrimary\n"); break;
                case NERR_GroupExists:      fprintf(stderr, "A system error has occurred: NERR_GroupExists\n"); break;
                case NERR_UserExists:       fprintf(stderr, "A system error has occurred: NERR_UserExists\n"); break;
                case NERR_PasswordTooShort: fprintf(stderr, "A system error has occurred: NERR_PasswordTooShort\n"); break;
                case ERROR_ACCESS_DENIED:   fprintf(stderr, "A system error has occurred: ERROR_ACCESS_DENIED\n"); break;
            }
        }

    Read the article

  • How to save errors with the handleError in CakePHP 2.0?

    - by jags1988cr
    I have a big problem. I'm trying to save the errors that happen in my web application. For example (only an example), if I make a division by zero, or an error happens in a query, or something like that, I want to catch that and save it in the database. But the handleError of CakePHP is static, so I can't use $this->MyModel->saveError(...). In my /app/Lib/AppError.php I have this:

        class AppError extends ErrorHandler {

            public $uses = array('Errors.Error');

            public static function handleError($code, $description, $file = null, $line = null, $context = null) {
                // This prints the error:
                echo "Code: ".$code." Description: ".$description." File: ".$file." Line: ".$line."<br>";
                // I want to do this (save it to the database):
                $this->Error->saveError($code, $description, $file, $line);
            }
        }

    Without the $this->Error->saveError($code, $description, $file, $line); it works, but I don't only want to show the error. I think I need an example or something like that. Please help me. Regards and thank you. Thanks... P.D.: Sorry for the English, I'm an English student...
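    Since handleError() is static, one hedged option is to load the model through CakePHP's ClassRegistry instead of $this (a sketch, assuming an Error model with a saveError() method as described in the original post):

        public static function handleError($code, $description, $file = null, $line = null, $context = null) {
            App::uses('ClassRegistry', 'Utility');
            // Get (or create) the Error model instance without needing $this
            $errorModel = ClassRegistry::init('Errors.Error');
            $errorModel->saveError($code, $description, $file, $line);
        }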

    Read the article

  • Load ascx via jQuery

    - by Raika
    Is there a way to load an ascx file with jQuery? UPDATE: thanks to @Emmett and @Yads, I am using a handler and this ajax call:

        jQuery.ajax({
            type: "POST", //GET
            url: "Foo.ashx",
            data: '{}',
            contentType: "application/json; charset=utf-8",
            dataType: "json",
            success: function (response) {
                jQuery('#controlload').append(response.d); // or response
            },
            error: function () {
                jQuery('#controlload').append('error');
            }
        });

    but I'm just getting an error. Is my ajax wrong? Another update: I am using

        error: function (xhr, ajaxOptions, thrownError) {
            jQuery('#controlload').append(thrownError);
        }

    and this is what I get: Invalid JSON: Test ("Test" is a label inside my ascx), and then my ascx file after the Error!!! Another update: my ascx file is something like this:

        <asp:DropDownList ID="ddl" runat="server" AutoPostBack="true">
            <asp:ListItem>1</asp:ListItem>
            <asp:ListItem>2</asp:ListItem>
        </asp:DropDownList>
        <asp:Label ID="Label1" runat="server">Test</asp:Label>

    but when I call the ajax I get this error from ASP.NET: Control 'ctl00_ddl' of type 'DropDownList' must be placed inside a form tag with runat=server.
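    The two errors suggest the handler is returning the control's rendered HTML while the ajax call expects JSON, and that server controls need to be rendered inside a page with a form. A hedged sketch of a Foo.ashx that renders the ascx to plain HTML (the control path and class name are illustrative), to be consumed with dataType: "html" and no JSON content type:

        <%@ WebHandler Language="C#" Class="Foo" %>

        using System.Web;

        public class Foo : IHttpHandler {
            public void ProcessRequest(HttpContext context) {
                var page = new System.Web.UI.Page();
                var form = new System.Web.UI.HtmlControls.HtmlForm();
                // Load the user control into a server-side form so controls like DropDownList are allowed
                var control = page.LoadControl("~/Controls/MyControl.ascx");
                form.Controls.Add(control);
                page.Controls.Add(form);

                var writer = new System.IO.StringWriter();
                context.Server.Execute(page, writer, false);

                context.Response.ContentType = "text/html";
                context.Response.Write(writer.ToString());
            }
            public bool IsReusable { get { return false; } }
        }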

    Read the article

  • How to wait for ajax validation to complete before submitting a form?

    - by Jung
    Having a problem where the form submits before the validateUsername function has a chance to complete the username check on the server side. How do I submit the form only after the validateUsername function completes? Hope this is clear...

        form.submit(function(){
            if (validateUsername() & validateEmail() & validatePassword()) {
                return true;
            } else {
                return false;
            }
        });

        function validateUsername(){
            usernameInfo.addClass("sign_up_drill");
            usernameInfo.text("checking...");
            var b = username.val();
            var filter = /^[a-zA-Z0-9_]+$/;
            $.post("../username_check.php", {su_username: username.val()}, function(data) {
                if (data == 'yes') {
                    username.addClass("error");
                    usernameInfo.text("sorry, that one's taken");
                    usernameInfo.addClass("error");
                    return false;
                } else if (!filter.test(b)) {
                    username.addClass("error");
                    usernameInfo.text("no funny characters please");
                    usernameInfo.addClass("error");
                    return false;
                } else {
                    username.removeClass("error");
                    usernameInfo.text("ok");
                    usernameInfo.removeClass("error");
                    return true;
                }
            });
        }
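    The $.post callback runs after validateUsername has already returned, so its return values never reach the submit handler. One hedged approach (a sketch, assuming validateEmail and validatePassword are synchronous) is to block the submit, run the asynchronous check, and trigger the native submit only once the server has answered:

        form.submit(function (e) {
            e.preventDefault();
            if (!(validateEmail() && validatePassword())) return;

            var b = username.val();
            var filter = /^[a-zA-Z0-9_]+$/;
            if (!filter.test(b)) {
                username.addClass("error");
                usernameInfo.text("no funny characters please").addClass("error");
                return;
            }

            $.post("../username_check.php", { su_username: b }, function (data) {
                if (data == 'yes') {
                    username.addClass("error");
                    usernameInfo.text("sorry, that one's taken").addClass("error");
                } else {
                    // native submit() skips the jQuery submit handler, so this won't loop
                    form[0].submit();
                }
            });
        });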

    Read the article

  • How to repair Java in Ubuntu after trying to switch to Java 6 using update-java-alternatives

    - by Kau-Boy
    I tried to switch from Java 5 to Java 6 using the "update-java-alternatives" command as explained on this page: https://help.ubuntu.com/community/Java But afterwards I get the following error when I try to execute java:

        root@webserver:~# java
        Error occurred during initialization of VM
        Could not reserve enough space for object heap
        Could not create the Java virtual machine.

    I also tried to reinstall the Java binaries using "apt-get" but I didn't succeed in reinstalling them. I would like to post the "apt-get" errors, but unfortunately I don't know how to print the error messages in English instead of German. My system is an Ubuntu 8.04 root server. Here is the (Google-translated) English text from trying to install Java 6 again:

        root@server:~# apt-get install sun-java6-jdk
        Reading package lists ... Ready
        Dependency tree Reading state information ... Ready
        sun-java6-jdk is already the newest version.
        sun-java6-jdk set to manually installed.
        0 upgraded, 0 newly installed, 0 to remove and 86 not upgraded.
        1 not fully installed or removed.
        After this operation, 0B of additional disk space will be used.
        Set up a sun-java6-bin (6-03-0ubuntu2) ...
        Could not create the Java virtual machine.
        dpkg: error processing sun-java6-bin (- configure):
        Subprocess post-installation script returned error exit status 1
        Errors were encountered while processing:
        sun-java6-bin
        E: Sub-process /usr/bin/dpkg returned an error code (1)

    I hope this helps you help me.
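    A hedged check (not from the original post): on memory-constrained virtual root servers, "Could not reserve enough space for object heap" often just means the JVM's default heap cannot be allocated, so it is worth seeing whether java runs with a small explicit heap before blaming the alternatives switch:

        # Ask for a modest heap; if this prints a version, the JVM install itself is fine
        java -Xmx64m -version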

    Read the article

  • Why can't I read my web service with jQuery?

    - by rima
    What is wrong with my function? I want to read from my web service, but I just receive an error. The browser message is: "undefined- status:error". When I press the button I just see the error function of my jQuery call being invoked, but I don't know why. Please help me.

        function SetupCompanyInfo(companyID) {
            //alert('aaa');
            companyID = '1';
            $.ajax({
                type: "POST", //GET
                url: '../../../Services/CompanyServices.asmx/GetCompanyInfo',
                data: "{}",
                contentType: "application/json; charset=utf-8",
                dataType: "json",
                success: OnSuccess,
                error: OnError
            });
        }
        function OnSuccess(data, status) {
            SetMainBody(data);
        }
        function OnError(request, status, error) {
            SetMainBody(error + '- ' + request + ' status:' + status);
        }

    My web service:

        using System;
        using System.Web.Services;
        using System.Web.Script.Services;

        /// <summary>
        /// Summary description for CompanyServices
        /// </summary>
        [WebService(Namespace = "http://tempuri.org/")]
        [WebServiceBinding(ConformsTo = WsiProfiles.BasicProfile1_1)]
        [ScriptService]
        //[System.ComponentModel.ToolboxItem(false)]
        public class CompanyServices : System.Web.Services.WebService
        {
            [WebMethod]
            public string GetCompanyInfo()
            {
                string response = "aaa";
                Console.WriteLine("here");
                return response.ToString();
            }

            [WebMethod]
            public string GetCompanyInfo(string id)
            {
                string response = "aaa";
                Console.WriteLine("here2" + id);
                return response.ToString();
            }
        }

    My aspx file, part of the head and my button code:

        <script src="../../Scripts/InnerFunctions.js" type="text/javascript"></script>
        <script src="../../Scripts/jquery-1.3.2.min.js" type="text/javascript"></script>
        <script src="../../Scripts/TabMenu.js" type="text/javascript"></script>
        <script src="Scripts/InternalFunctions.js" type="text/javascript"></script>

        <div dir="rtl" style="border: 1px solid #CCCCCC">
            <asp:Image ID="Image1" runat="server"
                ImageUrl="../../generalImg/Icons/64X64/settings_Icon_64.gif"
                style="width: 27px; height: 26px"
                onclick="SetupCompanyInfo(1)" /></div>
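    One hedged observation (not confirmed by the original post): ASMX script services do not cope with two web methods that share the same name, so the overloaded GetCompanyInfo alone can make every call fail. A sketch of simply giving the second method its own name:

        [WebMethod]
        public string GetCompanyInfo()
        {
            return "aaa";
        }

        // Renamed so the script service does not expose two methods called GetCompanyInfo
        [WebMethod]
        public string GetCompanyInfoById(string id)
        {
            return "aaa" + id;
        }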

    Read the article

  • Perl Capture and Modify STDERR before it prints to a file

    - by MicrobicTiger
    I have a Perl script which performs multiple external commands and prints the output from STDERR and STDOUT to a logfile, along with a series of my own print statements to act as documentation on the process. My problem is that STDERR repeats near-identical lines, as in the example below. I'd like to capture this before it prints and replace it with the final result for each of the commands I run.

        blocks evaluated : 0
        blocks evaluated : 10000
        blocks evaluated : 20000
        blocks evaluated : 30000
        ...
        blocks evaluated : 3420000
        blocks evaluated : 3428776

    Here's how I'm redirecting STDOUT and STDERR:

        my $logfile = "Logfile.log";    # log file name

        #--- Open log file for append if specified ---
        if ( $logfile ) {
            open ( OLDOUT, ">&", STDOUT ) or die "ERROR: Can't backup STDOUT location.\n";
            close STDOUT;
            open ( STDOUT, ">", $logfile ) or die "ERROR: Logfile [$logfile] cannot be opened.\n";
        }
        if ( $logfile ) {
            open ( OLDERR, ">&", STDERR ) or die "ERROR: Can't backup STDERR location.\n";
            close STDERR;
            open ( STDERR, '>&STDOUT' ) or die "ERROR: failed to pass STDERR to STDOUT.\n";
        }

    and closing them:

        close STDERR;
        open ( STDERR, ">&", OLDERR ) or die "ERROR: Can't fix that first thing you broke!\n";
        close STDOUT;
        open ( STDOUT, ">&", OLDOUT ) or die "ERROR: Can't fix that other thing you broke!\n";

    How do I access STDERR when each print occurs, so I can do the replacement? Or prevent it from printing if it isn't the last of the batch. Many thanks in advance.
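    Because the progress lines come from the external commands themselves, a plain redirect cannot rewrite them; the filtering has to sit between the child process and the log. A hedged sketch (the $external_command variable is a placeholder) that keeps only the last "blocks evaluated" line per command:

        # Run one external command, merging its STDERR into the stream we read
        open( my $cmd, '-|', "$external_command 2>&1" )
            or die "ERROR: cannot run command: $!\n";

        my $last_progress;
        while ( my $line = <$cmd> ) {
            if ( $line =~ /^blocks evaluated/ ) {
                $last_progress = $line;     # remember only the most recent progress line
                next;
            }
            print $line;                    # everything else goes straight to the log
        }
        print $last_progress if defined $last_progress;   # final count only
        close $cmd;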

    Read the article

  • Foiled by path-dependent types

    - by Ladlestein
    I'm having trouble using, in one trait, a Parser returned from a method in another trait. The compiler complains of a type mismatch, and it appears to me that the problem is due to the path-dependent class. I'm not sure how to get what I want.

        trait Outerparser extends RegexParsers {
            def inner: Innerparser

            def quoted[T](something: Parser[T]) = "\"" ~> something <~ "\""
            def quotedNumber = quoted(inner.number)   // Compile error
            def quotedLocalNumber = quoted(number)    // Compiles just fine
            def number: Parser[Int] = ("""[1-9][0-9]*"""r) ^^ {str => str.toInt}
        }

        trait Innerparser extends RegexParsers {
            def number: Parser[Int] = ("""[1-9][0-9]*"""r) ^^ {str => str.toInt}
        }

    And the error:

        [error] /Path/to/MyParser.scala:6: type mismatch
        [error]  found   : minerals.Innerparser#Parser[Int]
        [error]  required: Outerparser.this.Parser[?]
        [error]     def quotedNumber = quoted(inner.number)

    I sort of get the idea: each "something" method defines a Parser type whose path is specific to the enclosing class (Outerparser or Innerparser). The "quoted" method of Outerparser expects an instance of type Outerparser.this.Parser but is getting Innerparser#Parser. I'd like to be able to use quoted with a parser obtained from this class or some other class. How can I do that?
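    Parser is a member of the RegexParsers instance, so parsers created on two different instances have incompatible path-dependent types. One hedged way around it (a sketch, not from the original post) is to make the outer trait mix in the inner one, so both sets of combinators live on the same instance and share a single Parser type:

        import scala.util.parsing.combinator.RegexParsers

        trait Innerparser extends RegexParsers {
          def number: Parser[Int] = ("""[1-9][0-9]*""".r) ^^ { str => str.toInt }
        }

        // Mixing Innerparser in means "number" and "quoted" use the same Parser type
        trait Outerparser extends Innerparser {
          def quoted[T](something: Parser[T]) = "\"" ~> something <~ "\""
          def quotedNumber = quoted(number)   // compiles: same path-dependent Parser
        }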

    Read the article

  • PHP & form validation

    - by user887515
    I have a form that I'm submitting via ajax, and I want to return a message listing the fields that were empty. I've got that all done and dusted, but it just seems really long-winded on the PHP side of things. How can I do the below in a less convoluted way?

        <?php
        if(empty($_POST["emailaddress"])){
            $error = 'true';
            $validation_msg = 'Country missing.';
            if(empty($error_msg)){
                $error_msg .= $validation_msg;
            }
            else{
                $error_msg .= '\n' . $validation_msg;
            }
        }
        if(empty($_POST["password"])){
            $error = 'true';
            $validation_msg = 'Country missing.';
            if(empty($error_msg)){
                $error_msg .= $validation_msg;
            }
            else{
                $error_msg .= '\n' . $validation_msg;
            }
        }
        if(empty($_POST["firstname"])){
            $error = 'true';
            $validation_msg = 'First name missing.';
            if(empty($error_msg)){
                $error_msg .= $validation_msg;
            }
            else{
                $error_msg .= '\n' . $validation_msg;
            }
        }
        if(empty($_POST["lastname"])){
            $error = 'true';
            $validation_msg = 'Last name missing.';
            if(empty($error_msg)){
                $error_msg .= $validation_msg;
            }
            else{
                $error_msg .= '\n' . $validation_msg;
            }
        }
        if($error){
            header('HTTP/1.1 500 Internal Server Error');
            header('Content-Type: application/json');
            die($error_msg);
        }
        ?>
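    A hedged, more compact version of the same checks (a sketch; the messages for emailaddress and password are guesses, since the original reuses 'Country missing.' for both):

        <?php
        $required = array(
            'emailaddress' => 'Email address missing.',
            'password'     => 'Password missing.',
            'firstname'    => 'First name missing.',
            'lastname'     => 'Last name missing.',
        );

        $errors = array();
        foreach ($required as $field => $message) {
            if (empty($_POST[$field])) {
                $errors[] = $message;
            }
        }

        if ($errors) {
            header('HTTP/1.1 500 Internal Server Error');
            header('Content-Type: application/json');
            die(implode("\n", $errors));
        }
        ?>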

    Read the article

  • Function returning pointer to struct

    - by GammaGuy
    I am having some issues with code that returns a pointer to a struct declared inside a class. Here is my code so far:

    SortedList.h

        #ifndef SORTEDLIST_H
        #define SORTEDLIST_H

        class SortedList{
        public:
            SortedList();
            ...
        private:
            struct Listnode {
                Student *student;
                Listnode *next;
            };
            static Listnode *copyList (Listnode *L);
        };
        #endif

    SortedList.cpp

        #include "SortedList.h"
        ...
        // Here is where the problem lies
        Listnode SortedList::*copyList(Listnode *L) {
            return 0; // for NULL
        }

    Apparently, the copyList method won't compile. I am using Microsoft Visual Studio, and the compiler tells me that "Listnode" is unidentified. When I try to compile, here is what I get:

        1>------ Build started: Project: Program3, Configuration: Debug Win32 ------
        1>  SortedList.cpp
        1>c:\users\owner\documents\college\fall 2012\cs 368 - learn c++\program3\program3\sortedlist.cpp(159): error C2657: 'SortedList::*' found at the start of a statement (did you forget to specify a type?)
        1>c:\users\owner\documents\college\fall 2012\cs 368 - learn c++\program3\program3\sortedlist.cpp(159): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        1>c:\users\owner\documents\college\fall 2012\cs 368 - learn c++\program3\program3\sortedlist.cpp(159): error C2065: 'L' : undeclared identifier
        1>c:\users\owner\documents\college\fall 2012\cs 368 - learn c++\program3\program3\sortedlist.cpp(159): error C4430: missing type specifier - int assumed. Note: C++ does not support default-int
        1>c:\users\owner\documents\college\fall 2012\cs 368 - learn c++\program3\program3\sortedlist.cpp(159): fatal error C1903: unable to recover from previous error(s); stopping compilation
        ========== Build: 0 succeeded, 1 failed, 0 up-to-date, 0 skipped ==========

    Help would be greatly appreciated... ASAP
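    A hedged fix (a sketch, not from the original post): the nested struct lives inside SortedList, so outside the class the return type needs the SortedList:: qualifier, and the qualifier belongs on the function name rather than forming a pointer-to-member as the original definition does:

        #include "SortedList.h"

        // Qualify both the return type and the method with the class scope
        SortedList::Listnode *SortedList::copyList(Listnode *L) {
            return 0; // for NULL
        }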

    Read the article

  • What access token should I use?

    - by user548458
    I made a call for posting scores. It worked normally sometimes, but now I can't post any score. If I use the user access token:

        string path = meID + "/scores";
        var parameters = new Dictionary<string, object> () {
            { "score", score.ToString () },
            { "access_token", accessToken }
        };
        facebook.post (path, parameters, ( error, obj ) =>

    I get this error:

        {"error":{"message":"(#15) This method must be called with an app access_token.",
        "type":"OAuthException", "code":15}}

    If I use the app access token:

        string path = meID + "/scores";
        var parameters = new Dictionary<string, object> () {
            { "score", score.ToString () },
            { "access_token", appAccessToken }
        };
        facebook.post (path, parameters, ( error, obj ) =>

    I get this other error:

        {"error":{"message":"A user access token is required to request this resource.",
        "type":"OAuthException", "code":102}}

    Help me please, what am I doing wrong? PS: It worked well recently, but now it doesn't. I can't explain it...

    Read the article

  • Force language change from custom account creation script through url

    - by jax
    I have made a custom 'Account Creation' script so that users can log in from my phone application. What I want is to be able to change the responses from the server depending on their locale. So when I request a page I would add lang=en or lang=zh etc. This works:

        http://mysite.com/phone/my_custom_account_creation.php?lang=en

        <resource classification="error" code="Error (Code: 500)">
            <message>Please enter your name:</message>
        </resource>

    This does not work:

        http://mysite.com/phone/my_custom_account_creation.php?lang=zh

        <resource classification="error" code="Error (Code: 500)">
            <message>Please enter your name:</message>
        </resource>

    If I go into Joomla and set the default language to Chinese, it works:

        <resource classification="error" code="Error (Code: 500)">
            <message>????????</message>
        </resource>

    but then http://mysite.com/phone/my_custom_account_creation.php?lang=en does not work; instead it continues to show the Chinese version. What might I be able to do here?

    Read the article

  • How do I ignore an "invalid" SSL certificate in Objective-C?

    - by ipwnstuff
    Currently I have:

        NSArray* array = [NSArray arrayWithObjects:@"auth.login",@"username",@"password", nil];
        NSData* packed_array = [array messagePack];

        NSURL* url = [NSURL URLWithString:@"https://192.168.1.149:3790/api/1.0"];
        NSMutableURLRequest* request = [NSMutableURLRequest requestWithURL:url];
        [request setHTTPMethod:@"POST"];
        [request setValue:@"RPC Server" forHTTPHeaderField:@"Host"];
        [request setValue:@"binary/message-pack" forHTTPHeaderField:@"Content-Type"];
        [request setValue:[NSString stringWithFormat:@"%d",[packed_array length]] forHTTPHeaderField:@"Content-Length"];
        [request setHTTPBody:packed_array];

        NSURLResponse *response;
        NSError *error;
        responseData = [NSMutableData dataWithData:[NSURLConnection sendSynchronousRequest:request returningResponse:&response error:&error]];
        NSLog(@"response data: %@",[responseData messagePackParse]);
        NSLog(@"error: %@",error);

        - (BOOL)connection:(NSURLConnection *)connection canAuthenticateAgainstProtectionSpace:(NSURLProtectionSpace *)protectionSpace
        {
            NSLog(@"called canAuthenticateAgainstProtectionSpace");
            return [protectionSpace.authenticationMethod isEqualToString:NSURLAuthenticationMethodServerTrust];
        }

        - (void)connection:(NSURLConnection *)connection didReceiveAuthenticationChallenge:(NSURLAuthenticationChallenge *)challenge
        {
            NSLog(@"called didReceiveAuthenticationChallenge");
            [challenge.sender useCredential:[NSURLCredential credentialForTrust:challenge.protectionSpace.serverTrust] forAuthenticationChallenge:challenge];
        }

    Which returns "Error Domain=NSURLErrorDomain Code=-1202 "The certificate for this server is invalid"…" How should I be implementing the answer from this question?

    Read the article

  • Inheritance not working

    - by Pendo826
    Hey, I'm just practicing inheritance and I encountered a problem. I'm getting an error in my Car class (sub-class) saying that the variables in Vehicle (parent) are not visible. I didn't do anything to change this and I don't even know how to make them invisible. Can anyone help me with this?

        public class Vehicle {
            private String make, model, colour;
            private int registrationNumber;

            public Vehicle() {
                this.make = "";
                this.model = "";
                this.colour = "";
                this.registrationNumber = 0;
            }

            public Vehicle(String make, String model, String colour, int registrationNumber) {
                this.make = make;
                this.model = model;
                this.colour = colour;
                this.registrationNumber = registrationNumber;
            }

            public String getMake() { return make; }
            public void setMake(String make) { this.make = make; }
            public String getModel() { return model; }
            public void setModel(String model) { this.model = model; }
            public String getColour() { return colour; }
            public void setColour(String colour) { this.colour = colour; }
            public int getRegistrationNumber() { return registrationNumber; }
            public void setRegistrationNumber(int registrationNumber) { this.registrationNumber = registrationNumber; }

            public String toString() {
                return "Vehicle [make=" + make + ", model=" + model + ", colour=" + colour
                        + ", registrationNumber=" + registrationNumber + "]";
            }
        }

        public Car() {
            super();
            this.doors = 0;
            this.shape = "";
        }

        public Car(int doors, String shape, String make, String model, String colour, int registrationNumber) {
            super();
            this.doors = doors;
            this.shape = shape;
            this.make = make;                               // Error
            this.model = model;                             // Error
            this.colour = colour;                           // Error
            this.registrationNumber = registrationNumber;   // Error
        }

    The error message:

        The field Vehicle.make is not visible                 Car.java  /VehicleApp/src  line 19  Java Problem
        The field Vehicle.model is not visible                Car.java  /VehicleApp/src  line 20  Java Problem
        The field Vehicle.colour is not visible               Car.java  /VehicleApp/src  line 21  Java Problem
        The field Vehicle.registrationNumber is not visible   Car.java  /VehicleApp/src  line 22  Java Problem
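    The fields are private to Vehicle, so Car cannot assign them directly. A hedged fix (a sketch, not from the original post) is to pass the values up to the parent constructor, which is what the four-argument Vehicle constructor is there for:

        public Car(int doors, String shape, String make, String model, String colour, int registrationNumber) {
            // Let Vehicle initialise its own private fields
            super(make, model, colour, registrationNumber);
            this.doors = doors;
            this.shape = shape;
        }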

    Read the article

  • JQM Nested Popups

    - by mcottingham
    I am having a hard time with the new Alpha release of JQM not showing nested popups. For example, I am displaying a popup form that the user is supposed to fill out, if the server side validation fails, I want to display an error popup. I am finding that the error popup is not being shown. I suspect that it is being shown below the original popup. function bindSongWriterInvite() { // Bind the click event of the invite button $("#invite-songwriter").click(function () { $("#songwriter-invite-popup").popup("open"); }); // Bind the invitation click event of the invite modal $("#songwriter-invite-invite").click(function () { if ($('#songwriter-invite').valid()) { $('#songwriter-invite-popup').popup('close'); $.ajax({ type: 'POST', async: true, url: '/songwriter/jsoncreate', data: $('#songwriter-invite').serialize() + "&songName=" + $("#Song_Title").val(), success: function (response) { if (response.state != "success") { alert("Should be showing error dialog"); mobileBindErrorDialog(response.message); } else { mobileBindErrorDialog("Success!"); } }, failure: function () { mobileBindErrorDialog("Failure!"); }, dataType: 'json' }); } }); // Bind the cancel click event of the invite modal $("#songwriter-invite-cancel").click(function () { $("#songwriter-invite-popup").popup("close"); }); } function mobileBindErrorDialog(errorMessage) { // Close all open popups. This is a work around as the JQM Alpha does not // open new popups in front of all other popups. var error = $("<div></div>").append("<p>" + errorMessage + "</p>") .popup(); $(error).popup("open"); } So, you can see that I attempt to show the error dialog regardless of whether the ajax post succeeds or fails. It just never shows up. Anyone have any thoughts? Mike

    Read the article

  • Using jQuery to POST Form Data to an ASP.NET ASMX AJAX Web Service

    - by Rick Strahl
    The other day I got a question about how to call an ASP.NET ASMX Web Service or PageMethods with the POST data from a Web Form (or any HTML form for that matter). The idea is that you should be able to call an endpoint URL, send it regular urlencoded POST data and then use Request.Form[] to retrieve the posted data as needed. My first reaction was that you can’t do it, because ASP.NET ASMX AJAX services (as well as Page Methods and WCF REST AJAX Services) require that the content POSTed to the server is posted as JSON and sent with an application/json or application/x-javascript content type. IOW, you can’t directly call an ASP.NET AJAX service with regular urlencoded data. Note that there are other ways to accomplish this. You can use ASP.NET MVC and a custom route, an HTTP Handler or separate ASPX page, or even a WCF REST service that’s configured to use non-JSON inputs. However if you want to use an ASP.NET AJAX service (or Page Methods) with a little bit of setup work it’s actually quite easy to capture all the form variables on the client and ship them up to the server. The basic steps needed to make this happen are: Capture form variables into an array on the client with jQuery’s .serializeArray() function Use $.ajax() or my ServiceProxy class to make an AJAX call to the server to send this array On the server create a custom type that matches the .serializeArray() name/value structure Create extension methods on NameValue[] to easily extract form variables Create a [WebMethod] that accepts this name/value type as an array (NameValue[]) This seems like a lot of work but realize that steps 3 and 4 are a one time setup step that can be reused in your entire site or multiple applications. Let’s look at a short example that looks like this as a base form of fields to ship to the server: The HTML for this form looks something like this: <div id="divMessage" class="errordisplay" style="display: none"> </div> <div> <div class="label">Name:</div> <div><asp:TextBox runat="server" ID="txtName" /></div> </div> <div> <div class="label">Company:</div> <div><asp:TextBox runat="server" ID="txtCompany"/></div> </div> <div> <div class="label" ></div> <div> <asp:DropDownList runat="server" ID="lstAttending"> <asp:ListItem Text="Attending" Value="Attending"/> <asp:ListItem Text="Not Attending" Value="NotAttending" /> <asp:ListItem Text="Maybe Attending" Value="MaybeAttending" /> <asp:ListItem Text="Not Sure Yet" Value="NotSureYet" /> </asp:DropDownList> </div> </div> <div> <div class="label">Special Needs:<br /> <small>(check all that apply)</small></div> <div> <asp:ListBox runat="server" ID="lstSpecialNeeds" SelectionMode="Multiple"> <asp:ListItem Text="Vegitarian" Value="Vegitarian" /> <asp:ListItem Text="Vegan" Value="Vegan" /> <asp:ListItem Text="Kosher" Value="Kosher" /> <asp:ListItem Text="Special Access" Value="SpecialAccess" /> <asp:ListItem Text="No Binder" Value="NoBinder" /> </asp:ListBox> </div> </div> <div> <div class="label"></div> <div> <asp:CheckBox ID="chkAdditionalGuests" Text="Additional Guests" runat="server" /> </div> </div> <hr /> <input type="button" id="btnSubmit" value="Send Registration" /> The form includes a few different kinds of form fields including a multi-selection listbox to demonstrate retrieving multiple values. Setting up the Server Side [WebMethod] The [WebMethod] on the server we’re going to call is going to be very simple and just capture the content of these values and echo then back as a formatted HTML string. 
Obviously this is overly simplistic but it serves to demonstrate the simple point of capturing the POST data on the server in an AJAX callback. public class PageMethodsService : System.Web.Services.WebService { [WebMethod] public string SendRegistration(NameValue[] formVars) { StringBuilder sb = new StringBuilder(); sb.AppendFormat("Thank you {0}, <br/><br/>", HttpUtility.HtmlEncode(formVars.Form("txtName"))); sb.AppendLine("You've entered the following: <hr/>"); foreach (NameValue nv in formVars) { // strip out ASP.NET form vars like _ViewState/_EventValidation if (!nv.name.StartsWith("__")) { if (nv.name.StartsWith("txt") || nv.name.StartsWith("lst") || nv.name.StartsWith("chk")) sb.Append(nv.name.Substring(3)); else sb.Append(nv.name); sb.AppendLine(": " + HttpUtility.HtmlEncode(nv.value) + "<br/>"); } } sb.AppendLine("<hr/>"); string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs == null) sb.AppendLine("No Special Needs"); else { sb.AppendLine("Special Needs: <br/>"); foreach (string need in needs) { sb.AppendLine("&nbsp;&nbsp;" + need + "<br/>"); } } return sb.ToString(); } } The key feature of this method is that it receives a custom type called NameValue[] which is an array of NameValue objects that map the structure that the jQuery .serializeArray() function generates. There are two custom types involved in this: The actual NameValue type and a NameValueExtensions class that defines a couple of extension methods for the NameValue[] array type to allow for single (.Form()) and multiple (.FormMultiple()) value retrieval by name. The NameValue class is as simple as this and simply maps the structure of the array elements of .serializeArray(): public class NameValue { public string name { get; set; } public string value { get; set; } } The extension method class defines the .Form() and .FormMultiple() methods to allow easy retrieval of form variables from the returned array: /// <summary> /// Simple NameValue class that maps name and value /// properties that can be used with jQuery's /// $.serializeArray() function and JSON requests /// </summary> public static class NameValueExtensionMethods { /// <summary> /// Retrieves a single form variable from the list of /// form variables stored /// </summary> /// <param name="formVars"></param> /// <param name="name">formvar to retrieve</param> /// <returns>value or string.Empty if not found</returns> public static string Form(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).FirstOrDefault(); if (matches != null) return matches.value; return string.Empty; } /// <summary> /// Retrieves multiple selection form variables from the list of /// form variables stored. 
/// </summary> /// <param name="formVars"></param> /// <param name="name">The name of the form var to retrieve</param> /// <returns>values as string[] or null if no match is found</returns> public static string[] FormMultiple(this NameValue[] formVars, string name) { var matches = formVars.Where(nv => nv.name.ToLower() == name.ToLower()).Select(nv => nv.value).ToArray(); if (matches.Length == 0) return null; return matches; } } Using these extension methods it’s easy to retrieve individual values from the array: string name = formVars.Form("txtName"); or multiple values: string[] needs = formVars.FormMultiple("lstSpecialNeeds"); if (needs != null) { // do something with matches } Using these functions in the SendRegistration method it’s easy to retrieve a few form variables directly (txtName and the multiple selections of lstSpecialNeeds) or to iterate over the whole list of values. Of course this is an overly simple example – in typical app you’d probably want to validate the input data and save it to the database and then return some sort of confirmation or possibly an updated data list back to the client. Since this is a full AJAX service callback realize that you don’t have to return simple string values – you can return any of the supported result types (which are most serializable types) including complex hierarchical objects and arrays that make sense to your client code. POSTing Form Variables from the Client to the AJAX Service To call the AJAX service method on the client is straight forward and requires only use of little native jQuery plus JSON serialization functionality. To start add jQuery and the json2.js library to your page: <script src="Scripts/jquery.min.js" type="text/javascript"></script> <script src="Scripts/json2.js" type="text/javascript"></script> json2.js can be found here (be sure to remove the first line from the file): http://www.json.org/json2.js It’s required to handle JSON serialization for those browsers that don’t support it natively. With those script references in the document let’s hookup the button click handler and call the service: $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); $.ajax({ url: "PageMethodsService.asmx/SendRegistration", type: "POST", contentType: "application/json", data: JSON.stringify({ formVars: arForm }), dataType: "json", success: function (result) { var jEl = $("#divMessage"); jEl.html(result.d).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, error: function (xhr, status) { alert("An error occurred: " + status); } }); } The key feature in this code is the $("#form1").serializeArray();  call which serializes all the form fields of form1 into an array. Each form var is represented as an object with a name/value property. This array is then serialized into JSON with: JSON.stringify({ formVars: arForm }) The format for the parameter list in AJAX service calls is an object with one property for each parameter of the method. In this case its a single parameter called formVars and we’re assigning the array of form variables to it. The URL to call on the server is the name of the Service (or ASPX Page for Page Methods) plus the name of the method to call. On return the success callback receives the result from the AJAX callback which in this case is the formatted string which is simply assigned to an element in the form and displayed. 
Remember the result type is whatever the method returns – it doesn’t have to be a string. Note that ASP.NET AJAX and WCF REST return JSON data as a wrapped object so the result has a ‘d’ property that holds the actual response: jEl.html(result.d).fadeIn(1000); Slightly simpler: Using ServiceProxy.js If you want things slightly cleaner you can use the ServiceProxy.js class I’ve mentioned here before. The ServiceProxy class handles a few things for calling ASP.NET and WCF services more cleanly: Automatic JSON encoding Automatic fix up of ‘d’ wrapper property Automatic Date conversion on the client Simplified error handling Reusable and abstracted To add the service proxy add: <script src="Scripts/ServiceProxy.js" type="text/javascript"></script> and then change the code to this slightly simpler version: <script type="text/javascript"> proxy = new ServiceProxy("PageMethodsService.asmx/"); $(document).ready(function () { $("#btnSubmit").click(sendRegistration); }); function sendRegistration() { var arForm = $("#form1").serializeArray(); proxy.invoke("SendRegistration", { formVars: arForm }, function (result) { var jEl = $("#divMessage"); jEl.html(result).fadeIn(1000); setTimeout(function () { jEl.fadeOut(1000) }, 5000); }, function (error) { alert(error.message); } ); } The code is not very different but it makes the call as simple as specifying the method to call, the parameters to pass and the actions to take on success and error. No more remembering which content type and data types to use and manually serializing to JSON. This code also removes the “d” property processing in the response and provides more consistent error handling in that the call always returns an error object regardless of a server error or a communication error unlike the native $.ajax() call. Either approach works and both are pretty easy. The ServiceProxy really pays off if you use lots of service calls and especially if you need to deal with date values returned from the server  on the client. Summary Making Web Service calls and getting POST data to the server is not always the best option – ASP.NET and WCF AJAX services are meant to work with data in objects. However, in some situations it’s simply easier to POST all the captured form data to the server instead of mapping all properties from the input fields to some sort of message object first. For this approach the above POST mechanism is useful as it puts the parsing of the data on the server and leaves the client code lean and mean. It’s even easy to build a custom model binder on the server that can map the array values to properties on an object generically with some relatively simple Reflection code and without having to manually map form vars to properties and do string conversions. Keep in mind though that other approaches also abound. ASP.NET MVC makes it pretty easy to create custom routes to data and the built in model binder makes it very easy to deal with inbound form POST data in its original urlencoded format. The West Wind West Wind Web Toolkit also includes functionality for AJAX callbacks using plain POST values. All that’s needed is a Method parameter to query/form value to specify the method to be called on the server. After that the content type is completely optional and up to the consumer. It’d be nice if the ASP.NET AJAX Service and WCF AJAX Services weren’t so tightly bound to the content type so that you could more easily create open access service endpoints that can take advantage of urlencoded data that is everywhere in existing pages. 
It would make it much easier to create basic REST endpoints without complicated service configuration. Ah one can dream! In the meantime I hope this article has given you some ideas on how you can transfer POST data from the client to the server using JSON – it might be useful in other scenarios beyond ASP.NET AJAX services as well. Additional Resources ServiceProxy.js A small JavaScript library that wraps $.ajax() to call ASP.NET AJAX and WCF AJAX Services. Includes date parsing extensions to the JSON object, a global dataFilter for processing dates on all jQuery JSON requests, provides cleanup for the .NET wrapped message format and handles errors in a consistent fashion. Making jQuery Calls to WCF/ASMX with a ServiceProxy Client More information on calling ASMX and WCF AJAX services with jQuery and some more background on ServiceProxy.js. Note the implementation has slightly changed since the article was written. ww.jquery.js The West Wind West Wind Web Toolkit also includes ServiceProxy.js in the West Wind jQuery extension library. This version is slightly different and includes embedded json encoding/decoding based on json2.js.© Rick Strahl, West Wind Technologies, 2005-2010Posted in jQuery  ASP.NET  AJAX  

    Read the article

  • Windows Azure: Backup Services Release, Hyper-V Recovery Manager, VM Enhancements, Enhanced Enterprise Management Support

    - by ScottGu
    This morning we released a huge set of updates to Windows Azure.  These new capabilities include: Backup Services: General Availability of Windows Azure Backup Services Hyper-V Recovery Manager: Public preview of Windows Azure Hyper-V Recovery Manager Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Configuration Active Directory: Securely manage hundreds of SaaS applications Enterprise Management: Use Active Directory to Better Manage Windows Azure Windows Azure SDK 2.2: A massive update of our SDK + Visual Studio tooling support All of these improvements are now available to use immediately.  Below are more details about them. Backup Service: General Availability Release of Windows Azure Backup Today we are releasing Windows Azure Backup Service as a general availability service.  This release is now live in production, backed by an enterprise SLA, supported by Microsoft Support, and is ready to use for production scenarios. Windows Azure Backup is a cloud based backup solution for Windows Server which allows files and folders to be backed up and recovered from the cloud, and provides off-site protection against data loss. The service provides IT administrators and developers with the option to back up and protect critical data in an easily recoverable way from any location with no upfront hardware cost. Windows Azure Backup is built on the Windows Azure platform and uses Windows Azure blob storage for storing customer data. Windows Server uses the downloadable Windows Azure Backup Agent to transfer file and folder data securely and efficiently to the Windows Azure Backup Service. Along with providing cloud backup for Windows Server, Windows Azure Backup Service also provides capability to backup data from System Center Data Protection Manager and Windows Server Essentials, to the cloud. All data is encrypted onsite before it is sent to the cloud, and customers retain and manage the encryption key (meaning the data is stored entirely secured and can’t be decrypted by anyone but yourself). Getting Started To get started with the Windows Azure Backup Service, create a new Backup Vault within the Windows Azure Management Portal.  Click New->Data Services->Recovery Services->Backup Vault to do this: Once the backup vault is created you’ll be presented with a simple tutorial that will help guide you on how to register your Windows Servers with it: Once the servers you want to backup are registered, you can use the appropriate local management interface (such as the Microsoft Management Console snap-in, System Center Data Protection Manager Console, or Windows Server Essentials Dashboard) to configure the scheduled backups and to optionally initiate recoveries. You can follow these tutorials to learn more about how to do this: Tutorial: Schedule Backups Using the Windows Azure Backup Agent This tutorial helps you with setting up a backup schedule for your registered Windows Servers. Additionally, it also explains how to use Windows PowerShell cmdlets to set up a custom backup schedule. Tutorial: Recover Files and Folders Using the Windows Azure Backup Agent This tutorial helps you with recovering data from a backup. Additionally, it also explains how to use Windows PowerShell cmdlets to do the same tasks. Below are some of the key benefits the Windows Azure Backup Service provides: Simple configuration and management. 
Windows Azure Backup Service integrates with the familiar Windows Server Backup utility in Windows Server, the Data Protection Manager component in System Center and Windows Server Essentials, in order to provide a seamless backup and recovery experience to a local disk, or to the cloud. Block level incremental backups. The Windows Azure Backup Agent performs incremental backups by tracking file and block level changes and only transferring the changed blocks, hence reducing the storage and bandwidth utilization. Different point-in-time versions of the backups use storage efficiently by only storing the changes blocks between these versions. Data compression, encryption and throttling. The Windows Azure Backup Agent ensures that data is compressed and encrypted on the server before being sent to the Windows Azure Backup Service over the network. As a result, the Windows Azure Backup Service only stores encrypted data in the cloud storage. The encryption key is not available to the Windows Azure Backup Service, and as a result the data is never decrypted in the service. Also, users can setup throttling and configure how the Windows Azure Backup service utilizes the network bandwidth when backing up or restoring information. Data integrity is verified in the cloud. In addition to the secure backups, the backed up data is also automatically checked for integrity once the backup is done. As a result, any corruptions which may arise due to data transfer can be easily identified and are fixed automatically. Configurable retention policies for storing data in the cloud. The Windows Azure Backup Service accepts and implements retention policies to recycle backups that exceed the desired retention range, thereby meeting business policies and managing backup costs. Hyper-V Recovery Manager: Now Available in Public Preview I’m excited to also announce the public preview of a new Windows Azure Service – the Windows Azure Hyper-V Recovery Manager (HRM). Windows Azure Hyper-V Recovery Manager helps protect your business critical services by coordinating the replication and recovery of System Center Virtual Machine Manager 2012 SP1 and System Center Virtual Machine Manager 2012 R2 private clouds at a secondary location. With automated protection, asynchronous ongoing replication, and orderly recovery, the Hyper-V Recovery Manager service can help you implement Disaster Recovery and restore important services accurately, consistently, and with minimal downtime. Application data in an Hyper-V Recovery Manager scenarios always travels on your on-premise replication channel. Only metadata (such as names of logical clouds, virtual machines, networks etc.) that is needed for orchestration is sent to Azure. All traffic sent to/from Azure is encrypted. You can begin using Windows Azure Hyper-V Recovery today by clicking New->Data Services->Recovery Services->Hyper-V Recovery Manager within the Windows Azure Management Portal.  You can read more about Windows Azure Hyper-V Recovery Manager in Brad Anderson’s 9-part series, Transform the datacenter. To learn more about setting up Hyper-V Recovery Manager follow our detailed step-by-step guide. Virtual Machines: Delete Attached Disks, Availability Set Warnings, SQL AlwaysOn Today’s Windows Azure release includes a number of nice updates to Windows Azure Virtual Machines.  
These improvements include: Ability to Delete both VM Instances + Attached Disks in One Operation Prior to today’s release, when you deleted VMs within Windows Azure we would delete the VM instance – but not delete the drives attached to the VM.  You had to manually delete these yourself from the storage account.  With today’s update we’ve added a convenience option that now allows you to either retain or delete the attached disks when you delete the VM:   We’ve also added the ability to delete a cloud service, its deployments, and its role instances with a single action. This can either be a cloud service that has production and staging deployments with web and worker roles, or a cloud service that contains virtual machines.  To do this, simply select the Cloud Service within the Windows Azure Management Portal and click the “Delete” button: Warnings on Availability Sets with Only One Virtual Machine In Them One of the nice features that Windows Azure Virtual Machines supports is the concept of “Availability Sets”.  An “availability set” allows you to define a tier/role (e.g. webfrontends, databaseservers, etc) that you can map Virtual Machines into – and when you do this Windows Azure separates them across fault domains and ensures that at least one of them is always available during servicing operations.  This enables you to deploy applications in a high availability way. One issue we’ve seen some customers run into is where they define an availability set, but then forget to map more than one VM into it (which defeats the purpose of having an availability set).  With today’s release we now display a warning in the Windows Azure Management Portal if you have only one virtual machine deployed in an availability set to help highlight this: You can learn more about configuring the availability of your virtual machines here. Configuring SQL Server Always On SQL Server Always On is a great feature that you can use with Windows Azure to enable high availability and DR scenarios with SQL Server. Today’s Windows Azure release makes it even easier to configure SQL Server Always On by enabling “Direct Server Return” endpoints to be configured and managed within the Windows Azure Management Portal.  Previously, setting this up required using PowerShell to complete the endpoint configuration.  Starting today you can enable this simply by checking the “Direct Server Return” checkbox: You can learn more about how to use direct server return for SQL Server AlwaysOn availability groups here. Active Directory: Application Access Enhancements This summer we released our initial preview of our Application Access Enhancements for Windows Azure Active Directory.  This service enables you to securely implement single-sign-on (SSO) support against SaaS applications (including Office 365, SalesForce, Workday, Box, Google Apps, GitHub, etc) as well as LOB based applications (including ones built with the new Windows Azure AD support we shipped last week with ASP.NET and VS 2013). Since the initial preview we’ve enhanced our SAML federation capabilities, integrated our new password vaulting system, and shipped multi-factor authentication support. We've also turned on our outbound identity provisioning system and have it working with hundreds of additional SaaS Applications: Earlier this month we published an update on dates and pricing for when the service will be released in general availability form.  
In this blog post we announced our intention to release the service in general availability form by the end of the year.  We also announced that the below features would be available in a free tier with it:
SSO to every SaaS app we integrate with – Users can single sign-on to any app we are integrated with at no charge. This includes all the top SaaS apps and every app in our application gallery, whether they use federation or password vaulting.
Application access assignment and removal – IT admins can assign access privileges to web applications to the users in their active directory, assuring that every employee has access to the SaaS apps they need. And when a user leaves the company or changes jobs, the admin can just as easily remove their access privileges, assuring data security and minimizing IP loss.
User provisioning (and de-provisioning) – IT admins will be able to automatically provision users in 3rd party SaaS applications like Box, Salesforce.com, GoToMeeting, DropBox and others. We are working with key partners in the ecosystem to establish these connections, meaning you no longer have to continually update user records in multiple systems.
Security and auditing reports – Security is a key priority for us. With the free version of these enhancements you'll get access to our standard set of access reports, giving you visibility into which users are using which applications, when they were using them and where they are using them from. In addition, we'll alert you to unusual usage patterns, for instance when a user logs in from multiple locations at the same time.
Our Application Access Panel – Users are logging in from every type of device, including Windows, iOS, & Android. Not all of these devices handle authentication in the same manner, but the user doesn't care. They need to access their apps from the devices they love. Our Application Access Panel will support the ability for users to access and launch their apps from any device and anywhere.
You can learn more about our plans for application management with Windows Azure Active Directory here.  Try out the preview and start using it today.
Enterprise Management: Use Active Directory to Better Manage Windows Azure
Windows Azure Active Directory provides the ability to manage your organization in a directory which is hosted entirely in the cloud, or alternatively kept in sync with an on-premises Windows Server Active Directory solution (allowing you to seamlessly integrate with the directory you already have).  With today’s Windows Azure release we are integrating Windows Azure Active Directory even more within the core Windows Azure management experience, and enabling an even richer enterprise security offering.  Specifically:
1) All Windows Azure accounts now have a default Windows Azure Active Directory created for them.  You can create and map any users you want into this directory, and grant administrative rights to manage resources in Windows Azure to these users.
2) You can keep this directory entirely hosted in the cloud – or optionally sync it with your on-premises Windows Server Active Directory.  Both options are free.  The latter approach is ideal for companies that wish to use their corporate user identities to sign in and manage Windows Azure resources.  It also ensures that if an employee leaves an organization, his or her access control rights to the company’s Windows Azure resources are immediately revoked. 
3) The Windows Azure Service Management APIs have been updated to support using Windows Azure Active Directory credentials to sign in and perform management operations.  Prior to today’s release customers had to download and use management certificates (which were not scoped to individual users) to perform management operations.  We still support this management certificate approach (don’t worry – nothing will stop working).  But we think the new Windows Azure Active Directory authentication support enables an even easier and more secure way for customers to manage resources going forward. 
4) The Windows Azure SDK 2.2 release (which is also shipping today) includes built-in support for the new Service Management APIs that authenticate with Windows Azure Active Directory, and now allows you to create and manage Windows Azure applications and resources directly within Visual Studio using your Active Directory credentials.  This, combined with updated PowerShell scripts that also support Active Directory, enables an end-to-end enterprise authentication story with Windows Azure.
Below are some details on how all of this works:
Subscriptions within a Directory
As part of today’s update, we have associated all existing Windows Azure accounts with a Windows Azure Active Directory (and created one for you if you don’t already have one). When you log in to the Windows Azure Management Portal you’ll now see the directory name in the URI of the browser.  For example, in the screen-shot below you can see that I have a “scottgu” directory that my subscriptions are hosted within: Note that you can continue to use Microsoft Accounts (formerly known as Microsoft Live IDs) to sign into Windows Azure.  These map just fine to a Windows Azure Active Directory – so there is no need to create new usernames that are specific to a directory if you don’t want to.  In the scenario above I’m actually logged in using my @hotmail.com based Microsoft ID which is now mapped to a “scottgu” active directory that was created for me.  By default everything will continue to work just like it did before.
Manage your Directory
You can manage an Active Directory (including the one we now create for you by default) by clicking the “Active Directory” tab in the left-hand side of the portal.  This will list all of the directories in your account.  Clicking one the first time will display a getting started page that provides documentation and links to perform common tasks with it: You can use the built-in directory management support within the Windows Azure Management Portal to add/remove/manage users within the directory, enable multi-factor authentication, associate a custom domain (e.g. mycompanyname.com) with the directory, and/or rename the directory to whatever friendly name you want (just click the configure tab to do this).  You can also set up the directory to automatically sync with an on-premises Active Directory using the “Directory Integration” tab. Note that users within a directory by default do not have admin rights to log in to or manage Windows Azure based resources.  You still need to explicitly grant them co-admin permissions on a subscription for them to log in or manage resources in Windows Azure.  You can do this by clicking the Settings tab on the left-hand side of the portal and then by clicking the administrators tab within it. 
Sign-In Integration within Visual Studio
If you install the new Windows Azure SDK 2.2 release, you can now connect to Windows Azure from directly inside Visual Studio without having to download any management certificates.  You can now just right-click on the “Windows Azure” icon within the Server Explorer and choose the “Connect to Windows Azure” context menu option to do so: Doing this will prompt you to enter the email address of the username you wish to sign in with (make sure this account is a user in your directory with co-admin rights on a subscription): You can use either a Microsoft Account (e.g. Windows Live ID) or an Active Directory based Organizational account as the email.  The dialog will update with an appropriate login prompt depending on which type of email address you enter: Once you sign in, you’ll see the Windows Azure resources that you have permissions to manage show up automatically within the Visual Studio server explorer and be available to start using: No downloading of management certificates required.  All of the authentication was handled using your Windows Azure Active Directory!
Manage Subscriptions across Multiple Directories
If you already have multiple directories and multiple subscriptions within your Windows Azure account, we have done our best to create a good default mapping of your subscriptions->directories as part of today’s update.  If you don’t like the default subscription-to-directory mapping we have done you can click the Settings tab in the left-hand navigation of the Windows Azure Management Portal and browse to the Subscriptions tab within it: If you want to map a subscription under a different directory in your account, simply select the subscription from the list, and then click the “Edit Directory” button to choose which directory to map it to.  Mapping a subscription to a different directory takes only seconds and will not cause any of the resources within the subscription to recycle or stop working.  We’ve made the directory->subscription mapping process self-service so that you always have complete control and can map things however you want.
Filtering By Directory and Subscription
Within the Windows Azure Management Portal you can filter resources in the portal by subscription (allowing you to show/hide different subscriptions).  If you have subscriptions mapped to multiple directory tenants, we also now have a filter drop-down that allows you to filter the subscription list by directory tenant.  This filter is only available if you have multiple subscriptions mapped to multiple directories within your Windows Azure Account:
Windows Azure SDK 2.2
Today we are also releasing a major update of our Windows Azure SDK.  The Windows Azure SDK 2.2 release adds some great new features including:
Visual Studio 2013 Support
Integrated Windows Azure Sign-In support within Visual Studio
Remote Debugging Cloud Services with Visual Studio
Firewall Management support within Visual Studio for SQL Databases
Visual Studio 2013 RTM VM Images for MSDN Subscribers
Windows Azure Management Libraries for .NET
Updated Windows Azure PowerShell Cmdlets and ScriptCenter
I’ll post a follow-up blog shortly with more details about all of the above. 
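As a rough editorial illustration of what the new Active Directory-based Service Management authentication enables through the Windows Azure Management Libraries for .NET (this sketch is not from the original post; the TokenCloudCredentials and ComputeManagementClient type names, the HostedServices.ListAsync() call, and the token-acquisition step are assumptions based on the preview libraries and may differ in the released bits):

using System;
using System.Threading.Tasks;
using Microsoft.WindowsAzure;                    // TokenCloudCredentials (assumed namespace)
using Microsoft.WindowsAzure.Management.Compute; // ComputeManagementClient (assumed namespace)

class ListCloudServices
{
    static async Task Main()
    {
        // Hypothetical placeholder values - in practice the token comes from an
        // Active Directory sign-in (for example via ADAL); acquiring it is elided here.
        string subscriptionId = "<subscription-id>";
        string accessToken = "<access-token-from-AD-sign-in>";

        // Wrap the AD access token instead of a downloaded management certificate
        var credentials = new TokenCloudCredentials(subscriptionId, accessToken);

        using (var compute = new ComputeManagementClient(credentials))
        {
            // List the cloud services in the subscription, authenticated as the
            // signed-in directory user rather than with a certificate.
            var response = await compute.HostedServices.ListAsync();
            foreach (var service in response.HostedServices)
            {
                Console.WriteLine(service.ServiceName);
            }
        }
    }
}

The key point of the pattern is that the credentials object wraps a token issued to an individual Active Directory user, so management access can be granted and revoked per user rather than shared through a subscription-wide certificate.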
Additional Updates
In addition to the above enhancements, today’s release also includes a number of additional improvements:
AutoScale: Richer time and date based scheduling support (set different rules on different dates)
AutoScale: Ability to Scale to Zero Virtual Machines (very useful for Dev/Test scenarios)
AutoScale: Support for time-based scheduling of Mobile Service AutoScale rules
Operation Logs: Auditing support for Service Bus management operations
Today we also shipped a major update to the Windows Azure SDK – Windows Azure SDK 2.2.  It has so much goodness in it that I have a whole second blog post coming shortly on it! :-)
Summary
Today’s Windows Azure release enables a bunch of great new scenarios, and provides a much richer enterprise authentication offering. If you don’t already have a Windows Azure account, you can sign up for a free trial and start using all of the above features today.  Then visit the Windows Azure Developer Center to learn more about how to build apps with it.
Hope this helps,
Scott
P.S. In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu

    Read the article

  • C#/.NET Little Wonders: The Concurrent Collections (1 of 3)

    - by James Michael Hare
    Once again we consider some of the lesser known classes and keywords of C#.  In the next few weeks, we will discuss the concurrent collections and how they have changed the face of concurrent programming. This week’s post will begin with a general introduction and discuss the ConcurrentStack<T> and ConcurrentQueue<T>.  Then in the following post we’ll discuss the ConcurrentDictionary<TKey,TValue> and ConcurrentBag<T>.  Finally, we shall close on the third post with a discussion of the BlockingCollection<T>. For more of the "Little Wonders" posts, see the index here.
A brief history of collections
In the beginning was the .NET 1.0 Framework.  And out of this framework emerged the System.Collections namespace, and it was good.  It contained all the basic things a growing programming language needs, like the ArrayList and Hashtable collections.  The main problem, of course, with these original collections is that they held items of type object, which means you had to be disciplined enough to use them correctly or you could end up with runtime errors if you got an object of a type you weren't expecting.
Then came .NET 2.0 and generics and our world changed forever!  With generics the C# language finally got an equivalent of the very powerful C++ templates.  As such, the System.Collections.Generic namespace was born and we got type-safe versions of all our favorite collections.  The List<T> succeeded the ArrayList and the Dictionary<TKey,TValue> succeeded the Hashtable and so on.  The new versions of the library were not only safer because they checked types at compile-time; in many cases they were more performant as well.  So much so that it's Microsoft's recommendation that the original System.Collections collections only be used for backwards compatibility.
So we as developers came to know and love the generic collections and took them into our hearts and embraced them.  The problem is that thread safety in both the original collections and the generic collections can be problematic, for very different reasons. Now, if you are only doing single-threaded development you may not care – after all, no locking is required.  Even if you do have multiple threads, if a collection is “load-once, read-many” you don’t need to do anything to protect that container from multi-threaded access, as illustrated below:

public static class OrderTypeTranslator
{
    // because this dictionary is loaded once before it is ever accessed, we don't need to synchronize
    // multi-threaded read access
    private static readonly Dictionary<string, char> _translator = new Dictionary<string, char>
    {
        {"New", 'N'},
        {"Update", 'U'},
        {"Cancel", 'X'}
    };

    // the only public interface into the dictionary is for reading, so inherently thread-safe
    public static char? Translate(string orderType)
    {
        char charValue;
        if (_translator.TryGetValue(orderType, out charValue))
        {
            return charValue;
        }

        return null;
    }
}

Unfortunately, most of our computer science problems cannot get by with just single-threaded applications or with multi-threading in a load-once manner.  Looking at today's trends, it's clear to see that computers are not so much getting faster because of faster processor speeds -- we've nearly reached the limits we can push through with today's technologies -- but more because we're adding more cores to the boxes. 
With this new hardware paradigm, it is even more important to use multi-threaded applications to take full advantage of parallel processing to achieve higher application speeds. So let's look at how to use collections in a thread-safe manner.
Using historical collections in a concurrent fashion
The early .NET collections (System.Collections) had a Synchronized() static method that could be used to wrap the early collections to make them completely thread-safe.  This paradigm was dropped in the generic collections (System.Collections.Generic) because having a synchronized wrapper resulted in atomic locks for all operations, which could prove overkill in many multithreading situations.  Thus the paradigm shifted to having the user of the collection specify their own locking, usually with an external object:

public class OrderAggregator
{
    private static readonly Dictionary<string, List<Order>> _orders = new Dictionary<string, List<Order>>();
    private static readonly object _orderLock = new object();

    public void Add(string accountNumber, Order newOrder)
    {
        List<Order> ordersForAccount;

        // a complex operation like this should all be protected
        lock (_orderLock)
        {
            if (!_orders.TryGetValue(accountNumber, out ordersForAccount))
            {
                _orders.Add(accountNumber, ordersForAccount = new List<Order>());
            }

            ordersForAccount.Add(newOrder);
        }
    }
}

Notice how we’re performing several operations on the dictionary under one lock.  With the Synchronized() static methods of the early collections, you wouldn’t be able to specify this level of locking (a more macro-level).  So in the generic collections, it was decided that if a user needed synchronization, they could implement their own locking scheme instead so that they could provide synchronization as needed.
The need for better concurrent access to collections
Here’s the problem: it’s relatively easy to write a collection that locks itself down completely for access, but anything more complex than that can be difficult and error-prone to write, let alone make perform efficiently!  For example, what if you have a Dictionary that has frequent reads but infrequent updates?  Do you want to lock down the entire Dictionary for every access?  This would be overkill and would prevent concurrent reads.  In such cases you could use something like a ReaderWriterLockSlim, which allows for multiple readers in a lock, and then once a writer grabs the lock it blocks all further readers until the writer is done (in a nutshell; a short sketch of this pattern appears at the end of this section).  This is all very complex stuff to consider.
Fortunately, this is where the Concurrent Collections come in.  The Parallel Computing Platform team at Microsoft went to great pains to determine how to make a set of concurrent collections that would have the best performance characteristics for general case multi-threaded use. Now, as in all things involving threading, you should always make sure you evaluate all your container options based on the particular usage scenario and the degree of parallelism you wish to achieve. This article should not be taken to mean that these collections are always superior to the generic collections. Each fills a particular need for a particular situation. Understanding what each container is optimized for is key to the success of your application whether it be single-threaded or multi-threaded. 
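As a concrete illustration of the ReaderWriterLockSlim approach referenced above (this sketch is an editorial addition, not from the original post; the RateCache class and its members are hypothetical), a read-mostly dictionary can be protected like this:

using System.Collections.Generic;
using System.Threading;

public class RateCache
{
    private readonly Dictionary<string, decimal> _rates = new Dictionary<string, decimal>();
    private readonly ReaderWriterLockSlim _lock = new ReaderWriterLockSlim();

    // any number of threads may hold the read lock at the same time
    public bool TryGetRate(string symbol, out decimal rate)
    {
        _lock.EnterReadLock();
        try
        {
            return _rates.TryGetValue(symbol, out rate);
        }
        finally
        {
            _lock.ExitReadLock();
        }
    }

    // a writer takes the lock exclusively, blocking readers only while it updates
    public void SetRate(string symbol, decimal rate)
    {
        _lock.EnterWriteLock();
        try
        {
            _rates[symbol] = rate;
        }
        finally
        {
            _lock.ExitWriteLock();
        }
    }
}

This works, but it is exactly the kind of hand-rolled synchronization bookkeeping the concurrent collections are designed to take off your hands.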
General points to consider with the concurrent collections
The MSDN points out that the concurrent collections all support the ICollection interface. However, since the collections are already synchronized, the IsSynchronized property always returns false, and SyncRoot always returns null.  Thus you should not attempt to use these properties for synchronization purposes.
Note that the concurrent collections may also have different operations than the traditional data structures you may be used to.  Now you may ask why they did this, but it was done out of necessity to keep operations safe and atomic.  For example, in order to do a Pop() on a stack you have to know the stack is non-empty, but between the time you check the stack’s IsEmpty property and then do the Pop() another thread may have come in and made the stack empty!  This is why some of the traditional operations have been changed to make them safe for concurrent use.
In addition, some properties and methods in the concurrent collections achieve concurrency by creating a snapshot of the collection, which means that some operations that were traditionally O(1) may now be O(n) in the concurrent models.  I’ll try to point these out as we talk about each collection so you can be aware of any potential performance impacts.  Finally, all the concurrent containers are safe for enumeration even while being modified, but some of the containers support this in different ways (snapshot vs. dirty iteration).  Once again I’ll highlight how thread-safe enumeration works for each collection.
ConcurrentStack<T>: The thread-safe LIFO container
The ConcurrentStack<T> is the thread-safe counterpart to the System.Collections.Generic.Stack<T>, which as you may remember is your standard last-in-first-out container.  If you think of algorithms that favor stack usage (for example, depth-first searches of graphs and trees) then you can see how using a thread-safe stack would be of benefit.
The ConcurrentStack<T> achieves thread-safe access by using System.Threading.Interlocked operations.  This means that multi-threaded access to the stack requires no traditional locking and is very, very fast!
For the most part, the ConcurrentStack<T> behaves like its Stack<T> counterpart with a few differences:
Pop() was removed in favor of TryPop(), which returns true if an item existed and was popped and false if the stack was empty.
PushRange() and TryPopRange() were added, allowing you to push multiple items and pop multiple items atomically.
Count takes a snapshot of the stack and then counts the items. This means it is an O(n) operation; if you just want to check for an empty stack, call IsEmpty instead, which is O(1).
ToArray() and GetEnumerator() both also take snapshots. This means that iteration over a stack will give you a static view at the time of the call and will not reflect updates.
Pushing on a ConcurrentStack<T> works just like you’d expect except for the aforementioned PushRange() method that was added to allow you to push a range of items concurrently.

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

For looking at the top item of the stack (without removing it) the Peek() method has been removed in favor of a TryPeek().  
This is because in order to do a peek the stack must be non-empty, but between the time you check for empty and the time you execute the peek the stack contents may have changed.  Thus TryPeek() was created to be an atomic check for empty, and then a peek if not empty:

// to look at top item of stack without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (stack.TryPeek(out item))
{
    Console.WriteLine("Top item was " + item);
}
else
{
    Console.WriteLine("Stack was empty.");
}

Finally, to remove items from the stack, we have TryPop() for single items, and TryPopRange() for multiple items.  Just like TryPeek(), these operations replace Pop() since we need to ensure atomically that the stack is non-empty before we pop from it:

// to remove items, use TryPop or TryPopRange to get multiple items atomically (no interleaves)
if (stack.TryPop(out item))
{
    Console.WriteLine("Popped " + item);
}

// TryPopRange will only pop up to the number of spaces in the array, the actual number popped is returned.
var poppedItems = new string[2];
int numPopped = stack.TryPopRange(poppedItems);

foreach (var theItem in poppedItems.Take(numPopped))
{
    Console.WriteLine("Popped " + theItem);
}

Also note that, as stated before, GetEnumerator() and ToArray() get a snapshot of the data at the time of the call.  That means if you are enumerating the stack you will get a snapshot of the stack at the time of the call.  This is illustrated below:

var stack = new ConcurrentStack<string>();

// adding to stack is much the same as before
stack.Push("First");

var results = stack.GetEnumerator();

// but you can also push multiple items in one atomic operation (no interleaves)
stack.PushRange(new [] { "Second", "Third", "Fourth" });

while (results.MoveNext())
{
    Console.WriteLine("Stack only has: " + results.Current);
}

The only item that will be printed out in the above code is "First" because the snapshot was taken before the other items were added. This may sound like an issue, but it’s really for safety and is more correct.  You don’t want to enumerate a stack and have half a view of the stack before an update and half a view of the stack after an update, after all.  In addition, note that this is still thread-safe, whereas iterating through a non-concurrent collection while updating it in the old collections would cause an exception.
ConcurrentQueue<T>: The thread-safe FIFO container
The ConcurrentQueue<T> is the thread-safe counterpart of the System.Collections.Generic.Queue<T> class.  The concurrent queue uses an underlying list of small arrays and lock-free System.Threading.Interlocked operations on the head and tail arrays.  Once again, this allows us to do thread-safe operations without the need for heavy locks!
The ConcurrentQueue<T> (like the ConcurrentStack<T>) has some departures from the non-concurrent counterpart.  Most notably:
Dequeue() was removed in favor of TryDequeue(), which returns true if an item existed and was dequeued and false if the queue was empty.
Count does not take a snapshot; it subtracts the head and tail index to get the count.  This results overall in O(1) complexity, which is quite good.  It’s still recommended, however, that for empty checks you call IsEmpty instead of comparing Count to zero.
ToArray() and GetEnumerator() both take snapshots. 
This means that iteration over a queue will give you a static view at the time of the call and will not reflect updates.
The Enqueue() method on the ConcurrentQueue<T> works much the same as on the generic Queue<T>:

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");
queue.Enqueue("Second");
queue.Enqueue("Third");

For front item access, the TryPeek() method must be used to attempt to see the first item of the queue.  There is no Peek() method since, as you’ll remember, we can only peek on a non-empty queue, so we must have an atomic TryPeek() that checks for empty and then returns the first item if the queue is non-empty.

// to look at first item in queue without removing it, can use TryPeek.
// Note that there is no Peek(), this is because you need to check for empty first. TryPeek does.
string item;
if (queue.TryPeek(out item))
{
    Console.WriteLine("First item was " + item);
}
else
{
    Console.WriteLine("Queue was empty.");
}

Then, to remove items you use TryDequeue().  Once again this is for the same reason we have TryPeek() and not Peek():

// to remove items, use TryDequeue. If queue is empty returns false.
if (queue.TryDequeue(out item))
{
    Console.WriteLine("Dequeued first item " + item);
}

Just like the concurrent stack, the ConcurrentQueue<T> takes a snapshot when you call ToArray() or GetEnumerator(), which means that subsequent updates to the queue will not be seen when you iterate over the results.  Thus once again the code below will only show the first item, since the other items were added after the snapshot.

var queue = new ConcurrentQueue<string>();

// adding to queue is much the same as before
queue.Enqueue("First");

var iterator = queue.GetEnumerator();

queue.Enqueue("Second");
queue.Enqueue("Third");

// only shows First
while (iterator.MoveNext())
{
    Console.WriteLine("Dequeued item " + iterator.Current);
}

Using collections concurrently
You’ll notice in the examples above I stuck to using single-threaded examples so as to make them deterministic and the results obvious.  Of course, if we used these collections in a truly multi-threaded way the results would be less deterministic, but would still be thread-safe and with no locking on your part required!
For example, say you have an order processor that takes an IEnumerable<Order> and handles each order in a multi-threaded fashion, then groups the responses together in a concurrent collection for aggregation.  This can be done easily with the TPL’s Parallel.ForEach():

public static IEnumerable<OrderResult> ProcessOrders(IEnumerable<Order> orderList)
{
    var proxy = new OrderProxy();
    var results = new ConcurrentQueue<OrderResult>();

    // notice that we can process all these in parallel and put the results
    // into our concurrent collection without needing any external locking!
    Parallel.ForEach(orderList,
        order =>
        {
            var result = proxy.PlaceOrder(order);

            results.Enqueue(result);
        });

    return results;
}

Summary
Obviously, if you do not need multi-threaded safety, you don’t need to use these collections, but when you do need multi-threaded collections these are just the ticket! The plethora of features (I always think of the movie The Three Amigos when I say plethora) built into these containers and the amazing way they achieve thread-safe access in an efficient manner is wonderful to behold. 
Stay tuned next week where we’ll continue our discussion with the ConcurrentBag<T> and the ConcurrentDictionary<TKey,TValue>. For some excellent information on the performance of the concurrent collections and how they perform compared to a traditional brute-force locking strategy, see this wonderful whitepaper by the Microsoft Parallel Computing Platform team here.
Technorati Tags: C#,.NET,Concurrent Collections,Collections,Multi-Threading,Little Wonders,BlackRabbitCoder,James Michael Hare

    Read the article

  • A Visual Studio tool eliminating the need to rewrite for web and mobile

    - by Visual WebGui
    We have already covered the BYOD requirements that an application developer is faced with in an earlier blog entry (How to Bring Your Own Device (BYOD) to a .NET application). In that entry we emphasized the fact that application developers will need to prepare their applications for serving multiple types of devices on multiple platforms, ranging from the smallest mobile devices up to and beyond the largest desktop devices. The experts' prediction is that in the near future we will see that the...(read more)

    Read the article

< Previous Page | 806 807 808 809 810 811 812 813 814 815 816 817  | Next Page >