Search Results

Search found 13160 results on 527 pages for 'response redirect'.


  • PeerApp Scalability

    - by ChaosFreak
    William, in response to a question on P2P caching, you answered "PeerApp can do that but probably doesn't suit the scale you are looking at." PeerApp is the most scalable P2P cache in the world and can handle hundreds of gigabits per second of bandwidth. Their largest deployment, in Taiwan, handles 120Gbps with no problem. The next largest competitor, OverSi, can barely handle a tenth of that. Where do you get your information that PeerApp "doesn't suit scale"?

    Read the article

  • Has anyone hit this email-sending issue using PowerShell?

    - by pansal
    My script sends notification emails, and it was running well on my local machine, but when I moved it to a Server 2003 box the email cannot be sent out, with the error log below:

    Exception calling "Send" with "1" argument(s): "The SMTP server requires a secure connection or the client was not authenticated. The server response was: 5.7.1 Client was not authenticated"
    At line:1 char:19
    + $smtp_buglist.Send <<<< ($mail_buglist)
    + CategoryInfo : NotSpecified: (:) [], MethodInvocationException
    + FullyQualifiedErrorId : DotNetMethodException

    Please help me out of this, I am confused.
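
    A likely fix, sketched under the assumption that the script uses System.Net.Mail.SmtpClient: the 5.7.1 response means the new server is submitting mail without authenticating, so attach explicit credentials and enable TLS before calling Send. The host, port, and account below are placeholders, not values from the original script:

    # Hedged sketch: authenticate to the SMTP server and use TLS.
    # "smtp.example.com", port 587, and the account are placeholders.
    $smtp_buglist = New-Object System.Net.Mail.SmtpClient -ArgumentList "smtp.example.com", 587
    $smtp_buglist.EnableSsl = $true
    $smtp_buglist.Credentials = New-Object System.Net.NetworkCredential -ArgumentList "user", "password"
    $smtp_buglist.Send($mail_buglist)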

    Read the article

  • Virtual PC in a Remote Desktop session runs very slowly?

    - by Michael Bray
    I have a VPN to my work which is quite fast... I Remote Desktop to my work PC, which is running a Microsoft Virtual PC. Working with the virtual PC while I'm actually at work isn't too bad, but when I try to interact with it over Remote Desktop, it is VERY slow to respond. Even simple typing can be slow, and the lag in screen painting and response time is painfully obvious. Any suggestions to help speed it up?

    Read the article

  • JSON + 3DES encryption doesn't work

    - by oscurodrago
    I'm trying to pass some encrypted JSON data to my app, but it seems that when I decrypt it, my script appends a 00 byte to the hex data, making it impossible to deserialize. I've tried passing the data both unencrypted and encrypted, and the only difference I found is the 00 at the end. This is how I read the JSON when it isn't encrypted:

    NSLog(@"A) %@", response);
    NSMutableArray *json = [NSJSONSerialization JSONObjectWithData:response options:kNilOptions error:&error];
    NSLog(@"B) %@", error);

    and this is the NSLog output (hex dump abbreviated):

    2012-03-28 00:43:58.399 VGCollect[5583:f803] A) <5b7b2274 69746c65 223a2252 696c6173 63696174 61206c61 20707269 6d612045 ... 2f303030 30383430 385f3135 30783830 2e6a7067 227d5d>
    2012-03-28 00:43:58.410 VGCollect[5583:f803] B) (null)

    Instead, this is how I decrypt and deserialize the encrypted JSON:

    NSData *decryptBase64 = [GTMBase64 decodeData:response];
    NSData *decrypt3DES = [Crypt TripleDES:decryptBase64 encryptOrDecrypt:kCCDecrypt key:@"2b9534b45611cbb2436e625d"];
    NSLog(@"A) %@", decrypt3DES);
    NSMutableArray *json = [NSJSONSerialization JSONObjectWithData:decrypt3DES options:kNilOptions error:&error];
    NSLog(@"B) %@", error);

    and this is the NSLog output (same bytes, abbreviated):

    2012-03-28 00:41:51.525 VGCollect[5529:f803] A) <5b7b2274 69746c65 223a2252 696c6173 63696174 61206c61 20707269 6d612045 ... 2f303030 30383430 385f3135 30783830 2e6a7067 227d5d00>
    2012-03-28 00:41:51.530 VGCollect[5529:f803] B) Error Domain=NSCocoaErrorDomain Code=3840 "The operation couldn’t be completed. (Cocoa error 3840.)" (Garbage at end.) UserInfo=0x6ccdd40 {NSDebugDescription=Garbage at end.}

    As you can see, the only difference between the two NSLogs is the 00 at the end of the second hex dump, and the serialization error it causes. Do you know how to fix that? In case it's needed, this is my script for encrypting/decrypting the data:

    + (NSData *)TripleDES:(NSData *)plainData encryptOrDecrypt:(CCOperation)encryptOrDecrypt key:(NSString *)key {
        const void *vplainText = (const void *)[plainData bytes];
        size_t plainTextBufferSize = [plainData length];

        CCCryptorStatus ccStatus;
        uint8_t *bufferPtr = NULL;
        size_t bufferPtrSize = 0;
        size_t movedBytes = 0;

        bufferPtrSize = (plainTextBufferSize + kCCBlockSize3DES) & ~(kCCBlockSize3DES - 1);
        bufferPtr = malloc(bufferPtrSize * sizeof(uint8_t));
        memset((void *)bufferPtr, 0x0, bufferPtrSize);

        // NSString *key = @"123456789012345678901234";
        NSString *initVec = @"init Vec";
        const void *vkey = (const void *)[key UTF8String];
        const void *vinitVec = (const void *)[initVec UTF8String];

        ccStatus = CCCrypt(encryptOrDecrypt,
                           kCCAlgorithm3DES,
                           kCCOptionPKCS7Padding | kCCOptionECBMode,
                           vkey,
                           kCCKeySize3DES,
                           vinitVec, // iv (ignored in ECB mode)
                           vplainText,
                           plainTextBufferSize,
                           (void *)bufferPtr,
                           bufferPtrSize,
                           &movedBytes);

        NSData *result = [NSData dataWithBytes:(const void *)bufferPtr length:(NSUInteger)movedBytes];
        return result;
    }
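
    One workaround worth trying, offered as a sketch rather than a confirmed fix: the decrypted buffer is rounded up to the 3DES block size and zero-filled, so if the padding is not being stripped, trimming trailing 0x00 bytes before deserializing should satisfy NSJSONSerialization:

    // Sketch: trim trailing NUL bytes left over from decryption before JSON parsing.
    const uint8_t *bytes = [decrypt3DES bytes];
    NSUInteger length = [decrypt3DES length];
    while (length > 0 && bytes[length - 1] == 0x00) {
        length--; // drop one padding byte at a time
    }
    NSData *trimmed = [decrypt3DES subdataWithRange:NSMakeRange(0, length)];
    NSError *error = nil;
    NSMutableArray *json = [NSJSONSerialization JSONObjectWithData:trimmed options:kNilOptions error:&error];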

    Read the article

  • Problem with the In-App Purchase consumable model

    - by kunal-dutta
    I have created a non-consumable In-App Purchase item, and now I want to create a consumable In-App Purchase that the user buys every time he uses it; I also want to create a subscription-model In-App Purchase. Everything works as expected, except that when I buy the item more than once, the iPhone pops up a message saying "You've already purchased the item. Do you want to buy it again?". Is it possible to disable this dialog and proceed to the actual purchase? And what will have to change in the following code for the different models? In InAppPurchaseManager.m:

    @implementation InAppPurchaseManager

    //@synthesize purchasableObjects;
    //@synthesize storeObserver;
    @synthesize proUpgradeProduct;
    @synthesize productsRequest;
    //BOOL featureAPurchased;
    //BOOL featureBPurchased;
    //static InAppPurchaseManager* _sharedStoreManager; // self

    - (void)dealloc {
        //[_sharedStoreManager release];
        //[storeObserver release];
        [super dealloc];
    }

    - (void)requestProUpgradeProductData {
        NSSet *productIdentifiers = [NSSet setWithObject:@"com.vigyaapan.iWorkOut1"];
        productsRequest = [[SKProductsRequest alloc] initWithProductIdentifiers:productIdentifiers];
        productsRequest.delegate = self;
        [productsRequest start];
        // we will release the request object in the delegate callback
    }

    #pragma mark -
    #pragma mark SKProductsRequestDelegate methods

    - (void)productsRequest:(SKProductsRequest *)request didReceiveResponse:(SKProductsResponse *)response {
        //NSArray *products = response.products;
        //proUpgradeProduct = [products count] == 1 ? [[products firstObject] retain] : nil;
        if (proUpgradeProduct) {
            NSLog(@"Product title: %@", proUpgradeProduct.localizedTitle);
            NSLog(@"Product description: %@", proUpgradeProduct.localizedDescription);
            NSLog(@"Product price: %@", proUpgradeProduct.price);
            NSLog(@"Product id: %@", proUpgradeProduct.productIdentifier);
        }
        /*for (NSString *invalidProductId in response.invalidProductIdentifiers) {
            NSLog(@"Invalid product id: %@", invalidProductId);
        }*/
        // finally release the request we alloc/init'ed in requestProUpgradeProductData
        [productsRequest release];
        [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerProductsFetchedNotification object:self userInfo:nil];
    }

    #pragma mark -
    #pragma mark Public methods

    /* call this method once on startup */
    - (void)loadStore {
        /* restarts any purchases if they were interrupted last time the app was open */
        [[SKPaymentQueue defaultQueue] addTransactionObserver:self];
        /* get the product description (defined in early sections) */
        [self requestProUpgradeProductData];
    }

    /* call this before making a purchase */
    - (BOOL)canMakePurchases {
        return [SKPaymentQueue canMakePayments];
    }

    /* kick off the upgrade transaction */
    - (void)purchaseProUpgrade {
        SKPayment *payment = [SKPayment paymentWithProductIdentifier:@"9820091347"];
        [[SKPaymentQueue defaultQueue] addPayment:payment];
    }

    #pragma mark -
    #pragma mark Purchase helpers

    /* saves a record of the transaction by storing the receipt to disk */
    - (void)recordTransaction:(SKPaymentTransaction *)transaction {
        if ([transaction.payment.productIdentifier isEqualToString:kInAppPurchaseProUpgradeProductId]) {
            /* save the transaction receipt to disk */
            [[NSUserDefaults standardUserDefaults] setValue:transaction.transactionReceipt forKey:@"proUpgradeTransactionReceipt"];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }
    }

    /* enable pro features */
    - (void)provideContent:(NSString *)productId {
        if ([productId isEqualToString:kInAppPurchaseProUpgradeProductId]) {
            /* enable the pro features */
            [[NSUserDefaults standardUserDefaults] setBool:YES forKey:@"isProUpgradePurchased"];
            [[NSUserDefaults standardUserDefaults] synchronize];
        }
    }

    - (void)finishTransaction:(SKPaymentTransaction *)transaction wasSuccessful:(BOOL)wasSuccessful {
        /* remove the transaction from the payment queue */
        [[SKPaymentQueue defaultQueue] finishTransaction:transaction];
        NSDictionary *userInfo = [NSDictionary dictionaryWithObjectsAndKeys:transaction, @"transaction", nil];
        if (wasSuccessful) {
            /* send out a notification that we've finished the transaction */
            [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerTransactionSucceededNotification object:self userInfo:userInfo];
        } else {
            /* send out a notification for the failed transaction */
            [[NSNotificationCenter defaultCenter] postNotificationName:kInAppPurchaseManagerTransactionFailedNotification object:self userInfo:userInfo];
        }
    }

    - (void)completeTransaction:(SKPaymentTransaction *)transaction {
        [self recordTransaction:transaction];
        [self provideContent:transaction.payment.productIdentifier];
        [self finishTransaction:transaction wasSuccessful:YES];
    }

    - (void)restoreTransaction:(SKPaymentTransaction *)transaction {
        [self recordTransaction:transaction.originalTransaction];
        [self provideContent:transaction.originalTransaction.payment.productIdentifier];
        [self finishTransaction:transaction wasSuccessful:YES];
    }

    - (void)failedTransaction:(SKPaymentTransaction *)transaction {
        if (transaction.error.code != SKErrorPaymentCancelled) {
            /* error! */
            [self finishTransaction:transaction wasSuccessful:NO];
        } else {
            /* this is fine, the user just cancelled, so don't notify */
            [[SKPaymentQueue defaultQueue] finishTransaction:transaction];
        }
    }

    - (void)paymentQueue:(SKPaymentQueue *)queue updatedTransactions:(NSArray *)transactions {
        for (SKPaymentTransaction *transaction in transactions) {
            switch (transaction.transactionState) {
                case SKPaymentTransactionStatePurchased:
                    [self completeTransaction:transaction];
                    break;
                case SKPaymentTransactionStateFailed:
                    [self failedTransaction:transaction];
                    break;
                case SKPaymentTransactionStateRestored:
                    [self restoreTransaction:transaction];
                    break;
                default:
                    break;
            }
        }
    }

    @end

    In SKProduct.m:

    @implementation SKProduct (LocalizedPrice)

    - (NSString *)localizedPrice {
        NSNumberFormatter *numberFormatter = [[NSNumberFormatter alloc] init];
        [numberFormatter setFormatterBehavior:NSNumberFormatterBehavior10_4];
        [numberFormatter setNumberStyle:NSNumberFormatterCurrencyStyle];
        [numberFormatter setLocale:self.priceLocale];
        NSString *formattedString = [numberFormatter stringFromNumber:self.price];
        [numberFormatter release];
        return formattedString;
    }

    @end

    Read the article

  • A Few Google Checkout Questions

    - by Richard Knop
    I am planning to integrate a Google Checkout payment system on a social networking website. The idea is that members can buy "tokens" for real money (a sort of currency for the website) and then use them to buy access to some extra content on the website, etc. What I want to do is create a Google Checkout button that takes a member to the checkout page, where he pays with his credit or debit card. I want Google Checkout to notify my server whether the purchase of tokens was successful (whether the credit/debit card was charged) so I can update the local database. The website is coded in PHP/MySQL. I have downloaded the sample PHP code from here: code.google.com/p/google-checkout-php-sample-code/wiki/Documentation I know how to create a Google Checkout button, and I have also placed the responsehandlerdemo.php file on my server. This is the file Google Checkout is supposed to send responses to (of course I set the path to the file in my Google merchant account). Now, in the response handler file there is a switch block with several case statements. Which one means that the payment was successful, so that I can add tokens to the member's account in the local database?

    switch ($root) {
    case "request-received": { break; }
    case "error": { break; }
    case "diagnosis": { break; }
    case "checkout-redirect": { break; }
    case "merchant-calculation-callback": { // Create the results and send it $merchant_calc = new GoogleMerchantCalculations($currency); // Loop through the list of address ids from the callback $addresses = get_arr_result($data[$root]['calculate']['addresses']['anonymous-address']); foreach($addresses as $curr_address) { $curr_id = $curr_address['id']; $country = $curr_address['country-code']['VALUE']; $city = $curr_address['city']['VALUE']; $region = $curr_address['region']['VALUE']; $postal_code = $curr_address['postal-code']['VALUE']; // Loop through each shipping method if merchant-calculated shipping support is to be provided if(isset($data[$root]['calculate']['shipping'])) { $shipping = get_arr_result($data[$root]['calculate']['shipping']['method']); foreach($shipping as $curr_ship) { $name = $curr_ship['name']; //Compute the price for this shipping method and address id $price = 12; // Modify this to get the actual price $shippable = "true"; // Modify this as required $merchant_result = new GoogleResult($curr_id); $merchant_result->SetShippingDetails($name, $price, $shippable); if($data[$root]['calculate']['tax']['VALUE'] == "true") { //Compute tax for this address id and shipping type $amount = 15; // Modify this to the actual tax value $merchant_result->SetTaxDetails($amount); } if(isset($data[$root]['calculate']['merchant-code-strings']['merchant-code-string'])) { $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings']['merchant-code-string']); foreach($codes as $curr_code) { //Update this data as required to set whether the coupon is valid, the code and the amount $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2"); $merchant_result->AddCoupons($coupons); } } $merchant_calc->AddResult($merchant_result); } } else { $merchant_result = new GoogleResult($curr_id); if($data[$root]['calculate']['tax']['VALUE'] == "true") { //Compute tax for this address id and shipping type $amount = 15; // Modify this to the actual tax value $merchant_result->SetTaxDetails($amount); } $codes = get_arr_result($data[$root]['calculate']['merchant-code-strings']['merchant-code-string']); foreach($codes as $curr_code) { //Update this data as required to set whether the coupon is valid, the code and the amount $coupons = new GoogleCoupons("true", $curr_code['code'], 5, "test2"); $merchant_result->AddCoupons($coupons); } $merchant_calc->AddResult($merchant_result); } } $Gresponse->ProcessMerchantCalculations($merchant_calc); break; }
    case "new-order-notification": { $Gresponse->SendAck(); break; }
    case "order-state-change-notification": { $Gresponse->SendAck(); $new_financial_state = $data[$root]['new-financial-order-state']['VALUE']; $new_fulfillment_order = $data[$root]['new-fulfillment-order-state']['VALUE']; switch($new_financial_state) { case 'REVIEWING': { break; } case 'CHARGEABLE': { //$Grequest->SendProcessOrder($data[$root]['google-order-number']['VALUE']); //$Grequest->SendChargeOrder($data[$root]['google-order-number']['VALUE'],''); break; } case 'CHARGING': { break; } case 'CHARGED': { break; } case 'PAYMENT_DECLINED': { break; } case 'CANCELLED': { break; } case 'CANCELLED_BY_GOOGLE': { //$Grequest->SendBuyerMessage($data[$root]['google-order-number']['VALUE'], // "Sorry, your order is cancelled by Google", true); break; } default: break; } switch($new_fulfillment_order) { case 'NEW': { break; } case 'PROCESSING': { break; } case 'DELIVERED': { break; } case 'WILL_NOT_DELIVER': { break; } default: break; } break; }
    case "charge-amount-notification": { //$Grequest->SendDeliverOrder($data[$root]['google-order-number']['VALUE'], // <carrier>, <tracking-number>, <send-email>); //$Grequest->SendArchiveOrder($data[$root]['google-order-number']['VALUE'] ); $Gresponse->SendAck(); break; }
    case "chargeback-amount-notification": { $Gresponse->SendAck(); break; }
    case "refund-amount-notification": { $Gresponse->SendAck(); break; }
    case "risk-information-notification": { $Gresponse->SendAck(); break; }
    default: $Gresponse->SendBadRequestStatus("Invalid or not supported Message"); break;
    }

    I guess that case 'CHARGED' is the one; am I right? Second question: do I need an SSL certificate to receive responses from Google Checkout? According to this I do: groups.google.com/group/google-checkout-api-php/browse_thread/thread/10ce55177281c2b0 But I don't see it mentioned anywhere in the official documentation. Thank you.
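
    On the first question: yes, CHARGED is the state that means the card was actually charged, so that branch is the usual place to credit the purchase. A sketch of what it might look like, where credit_member_tokens() is a hypothetical wrapper around your own MySQL update:

    case 'CHARGED': {
        // Hedged sketch: the card has been charged, so credit the member's tokens.
        // credit_member_tokens() is a hypothetical helper keyed on the order number
        // you stored against the member at checkout time.
        $order_number = $data[$root]['google-order-number']['VALUE'];
        credit_member_tokens($order_number);
        break;
    }

    On the second question: to the best of my knowledge Google Checkout did require the notification callback URL to be served over HTTPS, so an SSL certificate is needed in production even though the PHP sample's documentation is quiet about it.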

    Read the article

  • Cannot add namespace prefix to children using XSL

    - by Erdal
    I checked many answers here and I think I am almost there. One thing that is bugging me (and for some reason my peer needs it) follows. I have the following input XML:

    <?xml version="1.0" encoding="utf-8"?>
    <MyRoot>
      <MyRequest CompletionCode="0" CustomerID="9999999999"/>
      <List TotalList="1">
        <Order CustomerID="999999999" OrderNo="0000000001" Status="Shipped">
          <BillToAddress ZipCode="22221"/>
          <ShipToAddress ZipCode="22222"/>
          <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
        </Order>
      </List>
      <Errors/>
    </MyRoot>

    I was asked to produce this:

    <ns:MyNewRoot xmlns:ns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details">
      <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/>
      <ns:List TotalList="1">
        <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped">
          <N2:BillToAddress ZipCode="22221"/>
          <N2:ShipToAddress ZipCode="22222"/>
          <N2:Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
        </N2:Order>
      </ns:List>
      <ns:Errors/>
    </ns:MyNewRoot>

    Note that the children of N2:Order also need the N2: prefix, as well as the ns: prefix for the rest of the elements. I use the XSL transformation below:

    <xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
      <xsl:output omit-xml-declaration="yes" indent="yes"/>
      <xsl:template match="@* | node()">
        <xsl:copy>
          <xsl:apply-templates select="@* | node()"/>
        </xsl:copy>
      </xsl:template>
      <xsl:template match="/MyRoot">
        <MyNewRoot xmlns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details">
          <xsl:apply-templates/>
        </MyNewRoot>
      </xsl:template>
      <xsl:template match="/MyRoot/MyRequest">
        <xsl:element name="N1:{name()}" namespace="http://schemas.foo.com/request">
          <xsl:copy-of select="namespace::*"/>
          <xsl:apply-templates select="@* | node()"/>
        </xsl:element>
      </xsl:template>
      <xsl:template match="/MyRoot/List/Order">
        <xsl:element name="N2:{name()}" namespace="http://schemas.foo.com/details">
          <xsl:copy-of select="namespace::*"/>
          <xsl:apply-templates select="@* | node()"/>
        </xsl:element>
      </xsl:template>
    </xsl:stylesheet>

    This one doesn't process the ns prefix (I couldn't figure this out). When I process the above through the XSL transformation with AltovaXML, I end up with the below:

    <MyNewRoot xmlns="http://schemas.foo.com/response" xmlns:N1="http://schemas.foo.com/request" xmlns:N2="http://schemas.foo.com/details">
      <N1:MyRequest CompletionCode="0" CustomerID="9999999999"/>
      <List xmlns="" TotalList="1">
        <N2:Order CustomerID="999999999" Level="Preferred" Status="Shipped">
          <BillToAddress ZipCode="22221"/>
          <ShipToAddress ZipCode="22222"/>
          <Totals Tax="0.50" SubTotal="10.00" Shipping="4.95"/>
        </N2:Order>
      </List>
      <Errors/>
    </MyNewRoot>

    Note that the N2: prefix for the children of Order is not there after the XSL transformation, and there is an additional xmlns="" on List (for some reason). I couldn't figure out how to put the ns: prefix on the rest of the elements (like Errors and List). First of all, why would I need to put the prefix on the children if the parent already has it? Doesn't the parent's namespace dictate the children's node/attribute namespaces? Secondly, since I do want to add the prefixes in the above XML as expected, how can I do that with XSL?
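
    To answer the first question, hedged: no. A prefix is only serialization sugar, but the namespace itself is a property of each element, and xsl:element puts only the element it creates into the target namespace; the identity template then copies Order's children with their original null namespace, which is also why the serializer has to emit xmlns="" on List. One way to fix both symptoms, sketched in the same style as the stylesheet above, is to add templates that rebuild the descendants of Order in the details namespace and the other top-level elements in the response namespace:

    <xsl:template match="/MyRoot/List/Order//*">
      <xsl:element name="N2:{name()}" namespace="http://schemas.foo.com/details">
        <xsl:apply-templates select="@* | node()"/>
      </xsl:element>
    </xsl:template>

    <xsl:template match="/MyRoot/List | /MyRoot/Errors">
      <xsl:element name="ns:{name()}" namespace="http://schemas.foo.com/response">
        <xsl:apply-templates select="@* | node()"/>
      </xsl:element>
    </xsl:template>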

    Read the article

  • How to upload video on YouTube with Ruby

    - by viatropos
    I am trying to upload a youtube video using the GData gem (I have seen the youtube_g gem but would like to make it work with pure GData if possible), but I keep getting this error: GData::Client::BadRequestError in 'MyProject::Google::YouTube should upload the actual video to youtube (once it does, mock this test out)' request error 400: No file found in upload request. I am using this code: def metadata data = <<-EOF <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> EOF end @yt = GData::Client::YouTube.new @yt.clientlogin("name", "pass") @yt.developer_key = "myKey" url = "http://uploads.gdata.youtube.com/feeds/api/users/name/uploads" mime_type = "multipart/related" file_path = "sample_upload.mp4" @yt.post_file(url, file_path, mime_type, metadata) What is the recommended/standard way for uploading videos to youtube with ruby, what is your method? Update After applying the changes to wrapped_entry, the string it produces looks like this: --END_OF_PART_59003 Content-Type: application/atom+xml; charset=UTF-8 <?xml version="1.0"?> <entry xmlns="http://www.w3.org/2005/Atom" xmlns:media="http://search.yahoo.com/mrss/" xmlns:yt="http://gdata.youtube.com/schemas/2007"> <media:group> <media:title type="plain">Bad Wedding Toast</media:title> <media:description type="plain"> I gave a bad toast at my friend's wedding. </media:description> <media:category scheme="http://gdata.youtube.com/schemas/2007/categories.cat">People</media:category> <media:keywords>toast, wedding</media:keywords> </media:group> </entry> --END_OF_PART_59003 Content-Type: multipart/related Content-Transfer-Encoding: binary ... 
and inspecting the request and response looks like this: Request: <GData::HTTP::Request:0x1b8bb44 @method=:post @url="http://uploads.gdata.youtube.com/feeds/api/users/lancejpollard/uploads" @body=#<GData::HTTP::MimeBody:0x1b8c738 @parts=[#<GData::HTTP::MimeBodyString:0x1b8c058 @bytes_read=0 @string="--END_OF_PART_30909\r\nContent-Type: application/atom+xml; charset=UTF-8\r\n\r\n <?xml version=\"1.0\"?>\n<entry xmlns=\"http://www.w3.org/2005/Atom\"\n xmlns:media=\"http://search.yahoo.com/mrss/\"\n xmlns:yt=\"http://gdata.youtube.com/schemas/2007\">\n <media:group>\n <media:title type=\"plain\">Bad Wedding Toast</media:title>\n <media:description type=\"plain\">\n I gave a bad toast at my friend's wedding.\n </media:description>\n <media:category scheme=\"http://gdata.youtube.com/schemas/2007/categories.cat\">People</media:category>\n <media:keywords>toast wedding</media:keywords>\n </media:group>\n</entry> \n\r\n--END_OF_PART_30909\r\nContent-Type: multipart/related\r\nContent-Transfer-Encoding: binary\r\n\r\n"> #<File:/Users/Lance/Documents/Development/git/thing/spec/fixtures/sample_upload.mp4> #<GData::HTTP::MimeBodyString:0x1b8c044 @bytes_read=0 @string="\r\n--END_OF_PART_30909--"] @current_part=0 @boundary="END_OF_PART_30909" @headers={"Slug"="sample_upload.mp4" "User-Agent"="GoogleDataRubyUtil-AnonymousApp" "GData-Version"="2" "X-GData-Key"="key=AI39si7jkhs_ECjF4unOQz8gpWGSKXgq0KJpm8wywkvBSw4s8oJd5p5vkpvURHBNh-hiYJtoKwQqSfot7KoCkeCE32rNcZqMxA" "Content-Type"="multipart/related; boundary=\"END_OF_PART_30909\"" "MIME-Version"="1.0"} Response: #<GData::HTTP::Response:0x1b897e0 @body="No file found in upload request." @headers={"cache-control"=>"no-cache no-store must-revalidate" "connection"=>"close" "expires"=>"Fri 01 Jan 1990 00:00:00 GMT" "content-type"=>"text/plain; charset=utf-8" "date"=>"Fri 11 Dec 2009 02:10:25 GMT" "server"=>"Upload Server Built on Nov 30 2009 13:21:18 (1259616078)" "x-xss-protection"=>"0" "content-length"=>"32" "pragma"=>"no-cache"} @status_code=400> Still not working, I'll have to check it out more with those changes.
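
    One detail that stands out, offered as a guess rather than a verified fix: in the dumped request the file part is labeled Content-Type: multipart/related, which describes the envelope rather than the video, and the server replies "No file found in upload request". Passing the file's own MIME type to post_file may be what is missing:

    # Sketch: describe the file itself; the client library builds the
    # multipart/related envelope on its own.
    mime_type = "video/mp4"
    @yt.post_file(url, file_path, mime_type, metadata)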

    Read the article

  • Spring 3.0: Handler mapping issue

    - by Yaniv Cohen
    I am having a trouble mapping a specific URL request to one of the controllers in my project. the URL is : http://HOSTNAME/api/v1/profiles.json the war which is deployed is: api.war the error I get is the following: [PageNotFound] No mapping found for HTTP request with URI [/api/v1/profiles.json] in DispatcherServlet with name 'action' The configuration I have is the following: web.xml : <context-param> <param-name>contextConfigLocation</param-name> <param-value>/WEB-INF/applicationContext.xml,/WEB-INF/applicationContext-security.xml</param-value> </context-param> <!-- Cache Control filter --> <filter> <filter-name>cacheControlFilter</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <!-- Cache Control filter mapping --> <filter-mapping> <filter-name>cacheControlFilter</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <!-- Spring security filter --> <filter> <filter-name>springSecurityFilterChain</filter-name> <filter-class>org.springframework.web.filter.DelegatingFilterProxy</filter-class> </filter> <!-- Spring security filter mapping --> <filter-mapping> <filter-name>springSecurityFilterChain</filter-name> <url-pattern>/*</url-pattern> </filter-mapping> <!-- Spring listener --> <listener> <listener-class>org.springframework.web.context.ContextLoaderListener</listener-class> </listener> <!-- Spring Controller --> <servlet> <servlet-name>action</servlet-name> <servlet-class>org.springframework.web.servlet.DispatcherServlet</servlet-class> <load-on-startup>1</load-on-startup> </servlet> <servlet-mapping> <servlet-name>action</servlet-name> <url-pattern>/v1/*</url-pattern> </servlet-mapping> The action-servlet.xml: <mvc:annotation-driven/> <bean id="contentNegotiatingViewResolver" class="org.springframework.web.servlet.view.ContentNegotiatingViewResolver"> <property name="favorPathExtension" value="true" /> <property name="favorParameter" value="true" /> <!-- default media format parameter name is 'format' --> <property name="ignoreAcceptHeader" value="false" /> <property name="order" value="1" /> <property name="mediaTypes"> <map> <entry key="html" value="text/html"/> <entry key="json" value="application/json" /> <entry key="xml" value="application/xml" /> </map> </property> <property name="viewResolvers"> <list> <bean class="org.springframework.web.servlet.view.InternalResourceViewResolver"> <property name="prefix" value="/WEB-INF/jsp/"/> <property name="suffix" value=".jsp"/> <property name="viewClass" value="org.springframework.web.servlet.view.JstlView" /> </bean> </list> </property> <property name="defaultViews"> <list> <bean class="org.springframework.web.servlet.view.json.MappingJacksonJsonView" /> <bean class="org.springframework.web.servlet.view.xml.MarshallingView"> <constructor-arg> <bean class="org.springframework.oxm.xstream.XStreamMarshaller" /> </constructor-arg> </bean> </list> </property> </bean> the application context security: <sec:http auto-config='true' > <sec:intercept-url pattern="/login.*" filters="none"/> <sec:intercept-url pattern="/oauth/**" access="ROLE_USER" /> <sec:intercept-url pattern="/v1/**" access="ROLE_USER" /> <sec:intercept-url pattern="/request_token_authorized.jsp" access="ROLE_USER" /> <sec:intercept-url pattern="/**" access="ROLE_USER"/> <sec:form-login authentication-failure-url ="/login.html" default-target-url ="/login.html" login-page ="/login.html" login-processing-url ="/login.html" /> <sec:logout logout-success-url="/index.html" logout-url="/logout.html" /> </sec:http> 
    the controller:

    @Controller
    public class ProfilesController {

        @RequestMapping(value = {"/v1/profiles"}, method = {RequestMethod.GET, RequestMethod.POST})
        public void getProfilesList(@ModelAttribute("response") Response response) {
            ....
        }
    }

    The request never reaches this controller. Any ideas?
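
    A likely cause, offered with some hedging: the DispatcherServlet is mapped to /v1/*, and Spring matches @RequestMapping patterns against the path inside the servlet mapping, so for /api/v1/profiles.json the handler lookup sees /profiles.json while the controller is registered under /v1/profiles. Dropping the prefix from the controller (or remapping the servlet to /*) should line the two up; a sketch:

    @Controller
    public class ProfilesController {

        // Sketch: the /v1 prefix is consumed by the servlet-mapping (/v1/*),
        // so the handler maps only the remaining path; default suffix pattern
        // matching lets /profiles.json reach this method.
        @RequestMapping(value = "/profiles", method = {RequestMethod.GET, RequestMethod.POST})
        public void getProfilesList(@ModelAttribute("response") Response response) {
            // ...
        }
    }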

    Read the article

  • Curl_errno=55, "Failed sending network data."

    - by 4dplane
    Hi all: I have a php script that updates a database via an api. This script works on one server but not on another. Both servers have curl enabled and they have php 5.2.6 or above. The error happens in the do_put() method. The rest of the script seems to be fine. I have found that: curl_errno= 55 = "Failed sending network data". curl_error= select/poll returned error <?php //phpinfo(); require_once "class.DelveAuthUtil.php"; $access_key = ""; $secret = "W"; $org_id = ""; $media_id = ""; $new_tag_name = "uvideo"; $assign_tag_to_media_url = "http://api..com/rest/organizations/$org_id/media/$media_id/properties/tags/$new_tag_name"; $signed_create_new_tag_url = DAU::authenticate_request("PUT", $assign_tag_to_media_url, $access_key, $secret); # perform the creation of the new custom property $put_response = do_put($signed_create_new_tag_url); # Execute the POST function do_post($url, $params=array()) { # Combine parameters $param_string = ""; foreach ($params as $key => $value) { $value = urlencode($value); $param_string = $param_string . "&$key=$value"; } // Get the curl session object $session = curl_init($url); // Set the POST options. curl_setopt ($session, CURLOPT_POST, true); curl_setopt ($session, CURLOPT_POSTFIELDS, $param_string); curl_setopt($session, CURLOPT_HEADER, false); curl_setopt($session, CURLOPT_RETURNTRANSFER, true); // Do the POST and then close the session $response = curl_exec($session); curl_close($session); return $response; } # Execute the PUT function do_put($url) { // Get the curl session object $session = curl_init($url); print(" <br> session= " . $session . "<br>"); // Set the PUT options. print ("opt0= " . curl_setopt($session, CURLOPT_VERBOSE, TRUE) . "<br>"); print ("opt1= " . curl_setopt ($session, CURLOPT_PUT, true) . "<br>"); print ("opt2= " . curl_setopt ($session, CURLOPT_HEADER, false) . "<br>"); print ("opt3= " . curl_setopt ($session, CURLOPT_RETURNTRANSFER, true) . "<br>"); // Do the PUT and then close the session $response = curl_exec($session); if (curl_errno($session)) { print ( "curl_errno= " . curl_errno($session). "<br>"); print( "curl_error= " . curl_error($session) . "<br>"); } else { curl_close($session); } } I have small lead in the Apache log - there is an SSL issue that I do not know how to resolve. [Thu Mar 25 15:57:58 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!! [Thu Mar 25 15:57:58 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations [Thu Mar 25 15:58:59 2010] [notice] caught SIGTERM, shutting down [Thu Mar 25 15:58:59 2010] [notice] suEXEC mechanism enabled (wrapper: /usr/sbin/suexec) [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?) [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!? [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?) [Thu Mar 25 15:58:59 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!? [Thu Mar 25 15:58:59 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71) [Thu Mar 25 15:58:59 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!! [Thu Mar 25 15:58:59 2010] [notice] Digest: generating secret for digest authentication ... 
    [Thu Mar 25 15:58:59 2010] [notice] Digest: done
    [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
    [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
    [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate is a CA certificate (BasicConstraints: CA == TRUE !?)
    [Thu Mar 25 15:59:00 2010] [warn] RSA server certificate CommonName (CN) `yourvps.a2hosting.com' does NOT match server name!?
    [Thu Mar 25 15:59:00 2010] [warn] Init: SSL server IP/port conflict: my1.com:443 (/home/httpd/my1.com/conf/kloxo.my1.com:69) vs. elggtest.my1.com:443 (/home/httpd/elggtest.my1.com/conf/kloxo.elggtest.my1.com:71)
    [Thu Mar 25 15:59:00 2010] [warn] Init: You should not use name-based virtual hosts in conjunction with SSL!!
    [Thu Mar 25 15:59:00 2010] [notice] Apache/2.2.3 (CentOS) configured -- resuming normal operations

    Any help would be great! Thanks, 4dplane
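
    A hedged observation from the code rather than the logs: CURLOPT_PUT tells curl to read the request body from CURLOPT_INFILE with CURLOPT_INFILESIZE, and since do_put() never sets either, curl can fail exactly while "sending network data" on some stacks. Because the signed URL already carries everything, issuing a body-less PUT as a custom request is worth trying:

    // Sketch: send the PUT without CURLOPT_PUT, which otherwise expects
    // CURLOPT_INFILE / CURLOPT_INFILESIZE to supply a request body.
    $session = curl_init($url);
    curl_setopt($session, CURLOPT_CUSTOMREQUEST, "PUT");
    curl_setopt($session, CURLOPT_HEADER, false);
    curl_setopt($session, CURLOPT_RETURNTRANSFER, true);
    $response = curl_exec($session);
    curl_close($session);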

    Read the article

  • The remote server returned an error: (400) Bad Request - uploading less 2MB file size?

    - by fiberOptics
    The file uploads successfully when it is 2KB or smaller. The main reason I use streaming is to be able to upload files of up to at least 1 GB, but when I try to upload a file of even less than 1MB, I get a bad request. It is my first time dealing with the downloading and uploading process, so I can't easily find the cause of the error. Testing part: private void button24_Click(object sender, EventArgs e) { try { OpenFileDialog openfile = new OpenFileDialog(); if (openfile.ShowDialog() == System.Windows.Forms.DialogResult.OK) { string port = "3445"; byte[] fileStream; using (FileStream fs = new FileStream(openfile.FileName, FileMode.Open, FileAccess.Read, FileShare.Read)) { fileStream = new byte[fs.Length]; fs.Read(fileStream, 0, (int)fs.Length); fs.Close(); fs.Dispose(); } string baseAddress = "http://localhost:" + port + "/File/AddStream?fileID=9"; HttpWebRequest request = (HttpWebRequest)HttpWebRequest.Create(baseAddress); request.Method = "POST"; request.ContentType = "text/plain"; //request.ContentType = "application/octet-stream"; Stream serverStream = request.GetRequestStream(); serverStream.Write(fileStream, 0, fileStream.Length); serverStream.Close(); using (HttpWebResponse response = request.GetResponse() as HttpWebResponse) { int statusCode = (int)response.StatusCode; StreamReader reader = new StreamReader(response.GetResponseStream()); } } } catch (Exception ex) { MessageBox.Show(ex.Message); } } Service: [WebInvoke(UriTemplate = "AddStream?fileID={fileID}", Method = "POST", BodyStyle = WebMessageBodyStyle.Bare)] public bool AddStream(long fileID, System.IO.Stream fileStream) { ClasslLogic.FileComponent svc = new ClasslLogic.FileComponent(); return svc.AddStream(fileID, fileStream); } Server code for streaming: namespace ClasslLogic { public class StreamObject : IStreamObject { public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, 0, bytearray.Length); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, bytearray.Length); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; } } } Web config: <system.serviceModel> <bindings> <basicHttpBinding> <binding> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="2097152" maxBytesPerRead="4096" maxNameTableCharCount="2097152" /> <security mode="None" /> </binding> <binding name="ClassLogicBasicTransfer" closeTimeout="00:05:00" openTimeout="00:05:00" receiveTimeout="00:15:00" sendTimeout="00:01:00" allowCookies="false" bypassProxyOnLocal="false" hostNameComparisonMode="StrongWildcard" maxBufferPoolSize="67108864" maxReceivedMessageSize="67108864" messageEncoding="Mtom" textEncoding="utf-8" useDefaultWebProxy="true"> <readerQuotas maxDepth="32" maxStringContentLength="8192" maxArrayLength="67108864" maxBytesPerRead="4096" maxNameTableCharCount="67108864" /> <security mode="None"> <transport clientCredentialType="None" proxyCredentialType="None" realm="" /> <message clientCredentialType="UserName" algorithmSuite="Default" /> </security> </binding> <binding name="BaseLogicWSHTTP"> <security mode="None" /> </binding> <binding name="BaseLogicWSHTTPSec" /> </basicHttpBinding> </bindings> <behaviors> <serviceBehaviors> <behavior> <!-- To avoid disclosing metadata information, set the value below to false and remove the metadata endpoint
    above before deployment --> <serviceMetadata httpGetEnabled="true" /> <!-- To receive exception details in faults for debugging purposes, set the value below to true. Set to false before deployment to avoid disclosing exception information --> <serviceDebug includeExceptionDetailInFaults="true" /> </behavior> </serviceBehaviors> </behaviors> <serviceHostingEnvironment multipleSiteBindingsEnabled="true" aspNetCompatibilityEnabled="true" /> </system.serviceModel> I'm not sure if this affects the streaming function, because I'm using the WCF 4.0 REST template, whose config depends on Global.asax. One more thing: whether or not I run the service passing a stream, the created file always ends up padded with NUL characters. How could I remove the "NUL" data? Thanks in advance. Edit: public bool UploadFile(string filename, Stream fileStream) { try { FileStream fileToupload = new FileStream(filename, FileMode.Create); byte[] bytearray = new byte[10000]; int bytesRead, totalBytesRead = 0; do { bytesRead = fileStream.Read(bytearray, totalBytesRead, bytearray.Length - totalBytesRead); totalBytesRead += bytesRead; } while (bytesRead > 0); fileToupload.Write(bytearray, 0, totalBytesRead); fileToupload.Close(); fileToupload.Dispose(); } catch (Exception ex) { throw new Exception(ex.Message); } return true; }
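
    Two hedged notes. First, the unnamed <binding> that plain HTTP endpoints fall back to never sets maxReceivedMessageSize, so it keeps the 65536-byte default, which fits uploads dying somewhere below 1MB; raising the limit and switching to streamed transfer is the usual cure. The values below are illustrative, not taken from the original config:

    <!-- Sketch: allow large streamed uploads; the 1 GB cap is illustrative. -->
    <binding transferMode="Streamed" maxReceivedMessageSize="1073741824">
      <security mode="None" />
    </binding>

    Second, the NUL padding comes from writing the whole 10000-byte buffer regardless of how much was read (the edited version instead caps the file at the buffer size); copying chunk by chunk and writing only what was read avoids both problems:

    // Sketch: stream the upload chunk by chunk, writing only the bytes read,
    // so no NUL padding is written and files larger than the buffer work.
    using (FileStream fileToUpload = new FileStream(filename, FileMode.Create))
    {
        byte[] buffer = new byte[10000];
        int bytesRead;
        while ((bytesRead = fileStream.Read(buffer, 0, buffer.Length)) > 0)
        {
            fileToUpload.Write(buffer, 0, bytesRead);
        }
    }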

    Read the article

  • Asp.Net MVC and ajax async callback execution order

    - by lrb
    I have been sorting through this issue all day and hope someone can help pinpoint my problem. I have created a "asynchronous progress callback" type functionality in my app using ajax. When I strip the functionality out into a test application I get the desired results. See image below: Desired Functionality When I tie the functionality into my single page application using the same code I get a sort of blocking issue where all requests are responded to only after the last task has completed. In the test app above all request are responded to in order. The server reports a ("pending") state for all requests until the controller method has completed. Can anyone give me a hint as to what could cause the change in behavior? Not Desired Desired Fiddler Request/Response GET http://localhost:12028/task/status?_=1383333945335 HTTP/1.1 X-ProgressBar-TaskId: 892183768 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:12028/ Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:12028 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 3.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcVEVNUFxQcm9ncmVzc0Jhclx0YXNrXHN0YXR1cw==?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:39:08 GMT Content-Length: 25 Iteration completed... Not Desired Fiddler Request/Response GET http://localhost:60171/_Test/status?_=1383341766884 HTTP/1.1 X-ProgressBar-TaskId: 838217998 Accept: */* X-Requested-With: XMLHttpRequest Referer: http://localhost:60171/Report/Index Accept-Language: en-US Accept-Encoding: gzip, deflate User-Agent: Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0) Connection: Keep-Alive DNT: 1 Host: localhost:60171 Pragma: no-cache Cookie: ASP.NET_SessionId=rjli2jb0wyjrgxjqjsicdhdi; AspxAutoDetectCookieSupport=1; TTREPORTS_1_0=CC2A501EF499F9F...; __RequestVerificationToken=6klOoK6lSXR51zCVaDNhuaF6Blual0l8_JH1QTW9W6L-3LroNbyi6WvN6qiqv-PjqpCy7oEmNnAd9s0UONASmBQhUu8aechFYq7EXKzu7WSybObivq46djrE1lvkm6hNXgeLNLYmV0ORmGJeLWDyvA2 HTTP/1.1 200 OK Cache-Control: private Content-Type: text/html; charset=utf-8 Vary: Accept-Encoding Server: Microsoft-IIS/8.0 X-AspNetMvc-Version: 4.0 X-AspNet-Version: 4.0.30319 X-SourceFiles: =?UTF-8?B?QzpcUHJvamVjdHNcSUxlYXJuLlJlcG9ydHMuV2ViXHRydW5rXElMZWFybi5SZXBvcnRzLldlYlxfVGVzdFxzdGF0dXM=?= X-Powered-By: ASP.NET Date: Fri, 01 Nov 2013 21:37:48 GMT Content-Length: 25 Iteration completed... The only difference in the two requests headers besides the auth tokens is "Pragma: no-cache" in the request and the asp.net version in the response. 
Thanks Update - Code posted (I probably need to indicate this code originated from an article by Dino Esposito ) var ilProgressWorker = function () { var that = {}; that._xhr = null; that._taskId = 0; that._timerId = 0; that._progressUrl = ""; that._abortUrl = ""; that._interval = 500; that._userDefinedProgressCallback = null; that._taskCompletedCallback = null; that._taskAbortedCallback = null; that.createTaskId = function () { var _minNumber = 100, _maxNumber = 1000000000; return _minNumber + Math.floor(Math.random() * _maxNumber); }; // Set progress callback that.callback = function (userCallback, completedCallback, abortedCallback) { that._userDefinedProgressCallback = userCallback; that._taskCompletedCallback = completedCallback; that._taskAbortedCallback = abortedCallback; return this; }; // Set frequency of refresh that.setInterval = function (interval) { that._interval = interval; return this; }; // Abort the operation that.abort = function () { // if (_xhr !== null) // _xhr.abort(); if (that._abortUrl != null && that._abortUrl != "") { $.ajax({ url: that._abortUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId } }); } }; // INTERNAL FUNCTION that._internalProgressCallback = function () { that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); $.ajax({ url: that._progressUrl, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, success: function (status) { if (that._userDefinedProgressCallback != null) that._userDefinedProgressCallback(status); }, complete: function (data) { var i=0; }, }); }; // Invoke the URL and monitor its progress that.start = function (url, progressUrl, abortUrl) { that._taskId = that.createTaskId(); that._progressUrl = progressUrl; that._abortUrl = abortUrl; // Place the Ajax call _xhr = $.ajax({ url: url, cache: false, headers: { 'X-ProgressBar-TaskId': that._taskId }, complete: function () { if (_xhr.status != 0) return; if (that._taskAbortedCallback != null) that._taskAbortedCallback(); that.end(); }, success: function (data) { if (that._taskCompletedCallback != null) that._taskCompletedCallback(data); that.end(); } }); // Start the progress callback (if any) if (that._userDefinedProgressCallback == null || that._progressUrl === "") return this; that._timerId = window.setTimeout(that._internalProgressCallback, that._interval); }; // Finalize the task that.end = function () { that._taskId = 0; window.clearTimeout(that._timerId); } return that; };
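
    A hedged diagnosis that fits the traces: the failing request carries an ASP.NET_SessionId cookie and the working one does not, and ASP.NET serializes concurrent requests that share a writable session, so every status poll queues up behind the long-running action and they all complete together at the end. Marking the controller that serves the polls as read-only toward session state usually restores the parallel behavior; a sketch for MVC 3+, where the controller name is illustrative:

    using System.Web.Mvc;
    using System.Web.SessionState;

    // Sketch: relax the session lock so progress polls can run in parallel
    // with the long-running request.
    [SessionState(SessionStateBehavior.ReadOnly)]
    public class TestController : Controller
    {
        // status / abort actions here ...
    }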

    Read the article

  • I don't get prices with Amazon Product Advertising API

    - by Xarem
    I try to get prices of an ASIN number with the Amazon Product Advertising API. Code: $artNr = "B003TKSD8E"; $base_url = "http://ecs.amazonaws.de/onca/xml"; $params = array( 'AWSAccessKeyId' => self::API_KEY, 'AssociateTag' => self::API_ASSOCIATE_TAG, 'Version' => "2010-11-01", 'Operation' => "ItemLookup", 'Service' => "AWSECommerceService", 'Condition' => "All", 'IdType' => 'ASIN', 'ItemId' => $artNr); $params['Timestamp'] = gmdate("Y-m-d\TH:i:s.\\0\\0\\0\\Z", time()); $url_parts = array(); foreach(array_keys($params) as $key) $url_parts[] = $key . "=" . str_replace('%7E', '~', rawurlencode($params[$key])); sort($url_parts); $url_string = implode("&", $url_parts); $string_to_sign = "GET\necs.amazonaws.de\n/onca/xml\n" . $url_string; $signature = hash_hmac("sha256", $string_to_sign, self::API_SECRET, TRUE); $signature = urlencode(base64_encode($signature)); $url = $base_url . '?' . $url_string . "&Signature=" . $signature; $response = file_get_contents($url); $parsed_xml = simplexml_load_string($response); I think this should be correct - but I don't get offers in the response: SimpleXMLElement Object ( [OperationRequest] => SimpleXMLElement Object ( [RequestId] => ************************* [Arguments] => SimpleXMLElement Object ( [Argument] => Array ( [0] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Condition [Value] => All ) ) [1] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Operation [Value] => ItemLookup ) ) [2] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Service [Value] => AWSECommerceService ) ) [3] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => ItemId [Value] => B003TKSD8E ) ) [4] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => IdType [Value] => ASIN ) ) [5] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => AWSAccessKeyId [Value] => ************************* ) ) [6] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Timestamp [Value] => 2011-11-29T01:32:12.000Z ) ) [7] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Signature [Value] => ************************* ) ) [8] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => AssociateTag [Value] => ************************* ) ) [9] => SimpleXMLElement Object ( [@attributes] => Array ( [Name] => Version [Value] => 2010-11-01 ) ) ) ) [RequestProcessingTime] => 0.0091540000000000 ) [Items] => SimpleXMLElement Object ( [Request] => SimpleXMLElement Object ( [IsValid] => True [ItemLookupRequest] => SimpleXMLElement Object ( [Condition] => All [IdType] => ASIN [ItemId] => B003TKSD8E [ResponseGroup] => Small [VariationPage] => All ) ) [Item] => SimpleXMLElement Object ( [ASIN] => B003TKSD8E [DetailPageURL] => http://www.amazon.de/Apple-iPhone-4-32GB-schwarz/dp/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D165953%26creativeASIN%3DB003TKSD8E [ItemLinks] => SimpleXMLElement Object ( [ItemLink] => Array ( [0] => SimpleXMLElement Object ( [Description] => Add To Wishlist [URL] => http://www.amazon.de/gp/registry/wishlist/add-item.html%3Fasin.0%3DB003TKSD8E%26SubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E ) [1] => SimpleXMLElement Object ( [Description] => Tell A Friend [URL] => 
http://www.amazon.de/gp/pdp/taf/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E ) [2] => SimpleXMLElement Object ( [Description] => All Customer Reviews [URL] => http://www.amazon.de/review/product/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E ) [3] => SimpleXMLElement Object ( [Description] => All Offers [URL] => http://www.amazon.de/gp/offer-listing/B003TKSD8E%3FSubscriptionId%3DAKIAI6NFQHK2DQIPRUEQ%26tag%3Dbanholzerme-20%26linkCode%3Dxm2%26camp%3D2025%26creative%3D12738%26creativeASIN%3DB003TKSD8E ) ) ) [ItemAttributes] => SimpleXMLElement Object ( [Manufacturer] => Apple Computer [ProductGroup] => CE [Title] => Apple iPhone 4 32GB schwarz ) ) ) ) Can someone please explain me why I don't get any price-information? Thank you very much
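
    The response itself points at the cause: the echoed ItemLookupRequest shows ResponseGroup=Small, the default, and the Small group carries no offer or price data. Asking for an offer-bearing response group should bring prices back; a small sketch against the parameter array above:

    // Sketch: request offer data explicitly; 'Small' (the default) has no prices.
    $params['ResponseGroup'] = 'Medium,Offers,OfferSummary';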

    Read the article

  • Passing an ID with a hyperlink, but can't get the ID value for a FK in one table when I click insert

    - by susan
    Something strange is happening in my code. I have a hyperlink that passes an ID value in a query string to a second page. On the second page I have two SqlDataSources, and both should get this ID value and pass it to a filter parameter to show something in a DataList. So, in other words, the first page has a hyperlink that reads the ID value from a data source and passes it to the second page. It's like below:

    <asp:HyperLink ID="HyperLink1" runat="server" NavigateUrl='<%# "~/forumpage.aspx?ID="+Eval("ID")%>'><%#Eval("title")%> </asp:HyperLink>

    Then on the second page I have one SqlDataSource with a query like this: ...where ID=@id, which gets this ID from the query string. It works great. But I have a problem with the second SqlDataSource on the second page. It has a query something like this: ...forms.question_id=@id, and in the SQL reference both refer to the query string ID passed by the first page. But when I click the insert button it shows me an error with the FK:

    Error: The INSERT statement conflicted with the FOREIGN KEY constraint "FK_forumreply_forumquestions". The conflict occurred in database "forum", table "dbo.forumquestions", column 'ID'. The statement has been terminated.

    My tables: question(ID, user_id(fk), Cat_id(fk), title, bodytext); reply(ID, userr_id(fk), questionn_id(fk), titlereply, bodytestreply).

    When, by hand, I gave questionn_id a number like 1 in the code-behind, it showed success, but when it has to read the value from a data source filter, this field runs into the problem. Please help, I really need to get past this part, and since I am new I guess I don't understand the logic clearly yet.

    <asp:SqlDataSource ID="sdsreply" runat="server" ConnectionString="<%$ ConnectionStrings:forumConnectionString %>" SelectCommand="SELECT forumreply.ID, forumreply.userr_id, forumreply.questionn_id, forumreply.bodytextreply, forumreply.datetimereply, forumquestions.ID AS Expr1, forumusers.ID AS Expr2, forumusers.username FROM forumquestions INNER JOIN forumreply ON forumquestions.ID = forumreply.questionn_id INNER JOIN forumusers ON forumquestions.user_id = forumusers.ID AND forumreply.userr_id = forumusers.ID where forumreply.questionn_id=@questionn_id"> <SelectParameters> <asp:QueryStringParameter Name="questionn_id" QueryStringField="ID" /> </SelectParameters> </asp:SqlDataSource>

    This is the code-behind for the second page, including the insert button:

    { if (Session["userid"] != null) { lblreply.Text = Session["userid"].ToString(); } else { Session["userid"]=null; } if (HttpContext.Current.User.Identity.IsAuthenticated) { lblshow.Text = string.Empty; string d = HttpContext.Current.User.Identity.Name; lblshow.Text =d + "???? ??? ?????." ; foreach (DataListItem item in DataList2.Items) { Label questionn_idLabel = (Label)item.FindControl("questionn_idLabel"); Label userr_idLabel = (Label)item.FindControl("userr_idLabel"); lbltest.Text = string.Empty; lbltest.Text = questionn_idLabel.Text; lblreply.Text = string.Empty; lblreply.Text = userr_idLabel.Text; } } else { lblshow.Text = "??? ??? ??? ??? ?? ?? ?????? ???? ???? ???? ????? ??? ??? ? ??? ?????
???????."; } } { if(HttpContext.Current.User.Identity.IsAuthenticated) { if (Page.IsValid) { SqlConnection con = new SqlConnection(ConfigurationManager.ConnectionStrings["forumConnectionString"].ConnectionString); try { con.Open(); SqlCommand cmd = new SqlCommand("insert into forumreply (userr_id,questionn_id,bodytextreply,datetimereply)values(@userr_id,@questionn_id,@bodytextreply,@datetimereply)", con); cmd.Parameters.AddWithValue("userr_id",lblreply.Text); cmd.Parameters.AddWithValue("questionn_id",lbltest.Text); cmd.Parameters.AddWithValue("bodytextreply",txtbody.Text); cmd.Parameters.AddWithValue("datetimereply",DateTime.Now ); cmd.ExecuteNonQuery(); } catch (Exception exp) { Response.Write("<b>Error:</b>"); Response.Write(exp.Message); } finally { con.Close(); } lblmsg.Text = "???? ??? ?? ?????? ??? ?????.thx"; lblshow.Visible = false; //lbltxt.Text = txtbody.Text; txtbody.Text = string.Empty; } } else { lblmsg.Text = string.Empty; Session["rem"] = Request.UrlReferrer.AbsoluteUri; Response.Redirect("~/login.aspx"); } }
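
One likely culprit (a guess, not a certain diagnosis): on the postback that runs the insert, the DataList labels feeding lbltest and lblreply can be empty, so the @questionn_id parameter arrives blank and violates the FK. Below is a minimal sketch of a more robust handler that reads the ID straight from the query string; the class and handler names are hypothetical, while lblmsg, txtbody, and Session["userid"] come from the original code:

using System;
using System.Configuration;
using System.Data.SqlClient;

public partial class ForumPage : System.Web.UI.Page
{
    // Hypothetical handler name; wire it to the insert button's Click event.
    protected void btnInsert_Click(object sender, EventArgs e)
    {
        int questionId;
        // Read the FK value from the query string so it is always present,
        // independent of DataList binding state on postback.
        if (!int.TryParse(Request.QueryString["ID"], out questionId))
        {
            lblmsg.Text = "Missing or invalid question ID.";
            return;
        }

        string connStr = ConfigurationManager
            .ConnectionStrings["forumConnectionString"].ConnectionString;

        using (SqlConnection con = new SqlConnection(connStr))
        using (SqlCommand cmd = new SqlCommand(
            "INSERT INTO forumreply (userr_id, questionn_id, bodytextreply, datetimereply) " +
            "VALUES (@userr_id, @questionn_id, @bodytextreply, @datetimereply)", con))
        {
            cmd.Parameters.AddWithValue("@userr_id", Session["userid"]);
            cmd.Parameters.AddWithValue("@questionn_id", questionId); // FK value guaranteed
            cmd.Parameters.AddWithValue("@bodytextreply", txtbody.Text);
            cmd.Parameters.AddWithValue("@datetimereply", DateTime.Now);
            con.Open();
            cmd.ExecuteNonQuery();
        }
    }
}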

    Read the article

  • SOLR date faceting and BC / BCE dates / negative date ranges

    - by Nigel_V_Thomas
Is it possible to facet over date ranges that include BC dates? I would like to return facets for all years between 11000 BCE and 9000 BCE using SOLR. A sample query, with the date ranges converted to ISO 8601, might be:

q=*:*&facet.date=myfield_earliestDate&facet.date.end=-92009-01-01T00:00:00&facet.date.gap=%2B1000YEAR&facet.date.other=all&facet=on&f.myfield_earliestDate.facet.date.start=-112009-01-01T00:00:00

However, the returned results suggest that the dates are being treated as positive (CE), not negative (BCE); see the sample response below:

<response>
  <lst name="responseHeader">
    <int name="status">0</int>
    <int name="QTime">6</int>
    <lst name="params">
      <str name="f.vra.work.creation.earliestDate.facet.date.start">-112009-01-01T00:00:00Z</str>
      <str name="facet">on</str>
      <str name="q">*:*</str>
      <str name="facet.date">vra.work.creation.earliestDate</str>
      <str name="facet.date.gap">+1000YEAR</str>
      <str name="facet.date.other">all</str>
      <str name="facet.date.end">-92009-01-01T00:00:00Z</str>
    </lst>
  </lst>
  <result name="response" numFound="9556" start="0">omitted</result>
  <lst name="facet_counts">
    <lst name="facet_queries"/>
    <lst name="facet_fields"/>
    <lst name="facet_dates">
      <lst name="vra.work.creation.earliestDate">
        <int name="112010-01-01T00:00:00Z">0</int>
        <int name="111010-01-01T00:00:00Z">0</int>
        <int name="110010-01-01T00:00:00Z">0</int>
        <int name="109010-01-01T00:00:00Z">0</int>
        <int name="108010-01-01T00:00:00Z">0</int>
        <int name="107010-01-01T00:00:00Z">0</int>
        <int name="106010-01-01T00:00:00Z">0</int>
        <int name="105010-01-01T00:00:00Z">0</int>
        <int name="104010-01-01T00:00:00Z">0</int>
        <int name="103010-01-01T00:00:00Z">0</int>
        <int name="102010-01-01T00:00:00Z">0</int>
        <int name="101010-01-01T00:00:00Z">0</int>
        <int name="100010-01-01T00:00:00Z">5781</int>
        <int name="99010-01-01T00:00:00Z">0</int>
        <int name="98010-01-01T00:00:00Z">0</int>
        <int name="97010-01-01T00:00:00Z">0</int>
        <int name="96010-01-01T00:00:00Z">0</int>
        <int name="95010-01-01T00:00:00Z">0</int>
        <int name="94010-01-01T00:00:00Z">0</int>
        <int name="93010-01-01T00:00:00Z">0</int>
        <str name="gap">+1000YEAR</str>
        <date name="end">92010-01-01T00:00:00Z</date>
        <int name="before">224</int>
        <int name="after">0</int>
        <int name="between">5690</int>
      </lst>
    </lst>
  </lst>
</response>

Any ideas why this is the case? Can SOLR handle negative dates such as -112009-01-01T00:00:00Z?

    Read the article

  • Problem with Mootools Ajax request and submitting a form

    - by Arwed
Hello. This is my problem: I have a table whose content comes from a database. I am trying to implement a way to (a) delete rows from the table and (b) edit the content of a row "on the fly". (a) works perfectly; (b) is making my head smoke!

Here is the complete Mootools code:

<script type="text/javascript">
window.addEvent('domready', function() {
    $('edit_hide').slide('hide');

    var saf = new Request.HTML({
        url: 'termin_safe.php',
        encoding: 'utf-8',
        onComplete: function(response) {
            document.location.href = '';
        }
    });

    var req = new Request.HTML({
        url: 'fuss_response.php',
        encoding: 'utf-8',
        onComplete: function(response) {
            document.location.href = '';
        }
    });

    var eDit = $('edit_hide');
    var eD = new Request.HTML({
        url: 'fuss_response_edit.php',
        update: eDit,
        encoding: 'utf-8',
        onComplete: function(response) {
            $('sst').addEvent('click', function(e) {
                e.stop();
                saf.send();
            });
        }
    });

    $$('input.edit').addEvent('click', function(e) {
        e.stop();
        var aID = 'edit_', bID = '', cID = 'ed_';
        var deleteID = this.getProperty('id').replace(aID, bID);
        var editID = $(this.getProperty('id').replace(aID, cID));
        eD.send({data: "id=" + deleteID});
        $('edit_hide').slide('toggle');
    });

    $$('input.delete').addEvent('click', function(e) {
        e.stop();
        var aID = 'delete_', bID = '';
        var deleteID = this.getProperty('id').replace(aID, bID);
        new MooDialog.Confirm('Soll der Termin gelöscht werden?', function() {
            req.send({data: "id=" + deleteID});
        }, function() {
            new MooDialog.Alert('Schon Konfuzius hat gesagt: Erst denken dann handeln!');
        });
    });
});
</script>

Here is the PHP part that builds the edit form:

<?php
$cKey = mysql_real_escape_string($_POST['id']);
$request = mysql_query("SELECT * FROM fusspflege WHERE ID = '".$cKey."'");
while ($row = mysql_fetch_object($request)) {
    $id   = $row->ID;
    $name = $row->name;
    $vor  = $row->vorname;
    $ort  = $row->ort;
    $tel  = $row->telefon;
    $mail = $row->email;
}
echo '<form id="termin_edit" method="post" action="">';
echo '<div><label>Name:</label><input type="text" id="nns" name="name" value="'.$name.'"></div>';
echo '<div><label>Vorname:</label><input type="text" id="nvs" name="vorname" value="'.$vor.'"></div>';
echo '<div><label>Ort:</label><input type="text" id="nos" name="ort" value="'.$ort.'"></div>';
echo '<div><label>Telefon:</label><input type="text" id="nts" name="telefon" value="'.$tel.'"></div>';
echo '<div><label>eMail:</label><input type="text" id="nms" name="email" value="'.$mail.'"></div>';
echo '<input name="id" type="hidden" id="ids" value="'.$id.'"/>';
echo '<input type="button" id="sst" value="Speichern">';
echo '</form>';
?>

And finally the code of termin_safe.php:

<?php
// Table name must match the "fusspflege" table used in the SELECT above.
$id  = mysql_real_escape_string($_POST['id']);
$na  = mysql_real_escape_string($_POST['name']);
$vn  = mysql_real_escape_string($_POST['vorname']);
$ort = mysql_real_escape_string($_POST['ort']);
$tel = mysql_real_escape_string($_POST['telefon']);
$em  = mysql_real_escape_string($_POST['email']);
$score = mysql_query("UPDATE fusspflege SET name = '".$na."', vorname = '".$vn."', ort = '".$ort."', telefon = '".$tel."', email = '".$em."' WHERE ID = '".$id."'");
?>

As far as I can see the request does work, but the data is not updated. I guess something is wrong with the data being posted. I would be glad for any suggestions!

    Read the article

  • Is there a Telecommunications Reference Architecture?

    - by raul.goycoolea
Abstract

Reference architecture provides needed architectural information that can be supplied in advance to an enterprise to enable consistent architectural best practices. Enterprise Reference Architecture helps business owners to actualize their strategies, vision, objectives, and principles. It evaluates IT systems against Reference Architecture goals, principles, and standards. It helps to reduce IT costs by increasing functionality, availability, scalability, etc. Telecom Reference Architecture gives customers the flexibility to view bundled service bills online with the provision of multiple services. It provides real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties. This paper describes a Reference Architecture for telecom enterprises. It lays the foundation for a Telecom Reference Architecture by articulating the requirements, drivers, and pitfalls for telecom service providers. It describes a generic reference architecture for telecom enterprises and then explains how to achieve an Enterprise Reference Architecture using SOA.

Introduction

A Reference Architecture provides a methodology, a set of practices, templates, and standards based on a set of solutions that have been implemented successfully before. These solutions have been generalized and structured for the depiction of both a logical and a physical architecture, based on the harvesting of a set of patterns observed in a number of successful implementations. It serves as a reference for the various architectures that an enterprise can implement to solve various problems. It can be used as the starting point or the point of comparison for the various departments or business entities of a company, or for the various companies within an enterprise. It provides multiple views for multiple stakeholders.

The major artifacts of an Enterprise Reference Architecture are methodologies, standards, metadata, documents, design patterns, etc.

Purpose of Reference Architecture

In most cases, architects spend a lot of time researching, investigating, defining, and re-arguing architectural decisions. It is like reinventing the wheel: their peers in other organizations, or even the same organization, have already spent a lot of time and effort defining their own architectural practices. This prevents an organization from learning from its own experiences and applying that knowledge for increased effectiveness.
Reference architecture provides missing architectural information that can be supplied in advance to project team members to enable consistent architectural best practices.

Enterprise Reference Architecture helps an enterprise achieve the following at an abstract level:

- It acts as a communication channel for the enterprise.
- It helps business owners to accommodate their strategies, vision, objectives, and principles.
- It evaluates IT systems against Reference Architecture principles.
- It reduces IT spending by increasing functionality, availability, scalability, etc.
- A real-time integration model helps reduce the latency of data updates and is used to define a single source of information.
- It provides a clear view of how to manage information and security.
- It defines policy around data ownership, product boundaries, etc.
- It helps with cost optimization across project and solution portfolios by eliminating unused or duplicate investments and assets.
- It shortens implementation time and cost.

Once the reference architecture is in place, the set of architectural principles, standards, reference models, and best practices ensures that the aligned investments have the greatest possible likelihood of success in both the near term and the long term (TCO).

Common Pitfalls for Telecom Service Providers

Telecom Reference Architecture serves as the first step towards maturity for a telecom service provider. During the course of our assignments and experiences with telecom players, we have come across the following observations, some of which indicate a lack of maturity of the telecom service provider:

- In markets that are growing and not so mature, telcos have a significant amount of in-house or home-grown applications. In some of these markets, growth has been so rapid that IT has been unable to cope with business demands, and telcos have tended to come up with workarounds in their IT applications to meet business needs.
- Even for core functions like provisioning or mediation, some telcos have tried to manage with home-grown applications.
- Most of the applications do not have the required scalability or maintainability to sustain growth in volumes or functionality.
- Applications face interoperability issues with other applications in the operator's landscape; integrating a new application or network element requires considerable effort on the part of the other applications.
- Application boundaries are not clear, and functionality that is not in the initial scope of an application gets pushed onto it. This results in many small applications without proper boundaries.
- Legacy OSS/BSS systems remain in use, with poor integration across multiple COTS products and internal systems; most integrations are developed ad hoc and point to point.
- Business functions are duplicated across different applications.
- Data is fragmented across the different applications, with no integrated view of strategic data.
- There are many performance issues due to complex integration across OSS and BSS systems.

However, this is where the maturity of the telecom industry as a whole can be of help. The collaborative efforts of telcos to overcome some of these problems have resulted in bodies like the TM Forum.
They have come up with frameworks for business processes, data, applications, and technology for telecom service providers. These can be a good starting point for telcos to clean up their enterprise landscape.

Industry Trends in Telecom Reference Architecture

Telecom reference architectures are evolving rapidly because telcos face business and IT challenges.

"The reality is that there probably is no killer application, no silver bullet that the telcos can latch onto to carry them into a 21st Century.... Instead, there are probably hundreds – perhaps thousands – of niche applications.... And the only way to find which of these works for you is to try out lots of them, ramp up the ones that work, and discontinue the ones that fail." – Martin Creaner, President & CTO, TM Forum.

The following trends have been observed in telecom reference architecture:

- Transformation of business structures to align with customer requirements
- Adoption of more Internet-like technical architectures; the Web 2.0 concept is increasingly being used
- Virtualization of the traditional operations support system (OSS)
- Adoption of SOA to support development of IP-based services
- Adoption of frameworks like Service Delivery Platforms (SDPs) and IP Multimedia Subsystem (IMS) to enable seamless deployment of various services over fixed and mobile networks
- Replacement of in-house, customized, and stove-piped OSS/BSS with standards-based COTS products
- Compliance with industry standards and frameworks like eTOM, SID, and TAM to enable seamless integration with other standards-based products

Drivers of Reference Architecture

The drivers of the Reference Architecture are the Reference Architecture goals and principles, the enterprise vision, and telecom transformation. The details are depicted in the diagram below.

Figure 1. Drivers for Reference Architecture

Today's telecom reference architectures should seamlessly integrate traditional legacy-based applications and transition to next-generation network technologies (e.g., IP multimedia subsystems).
This has resulted in new requirements for flexible, real-time billing and OSS/BSS systems, with implications for the service provider's organizational requirements and structure.

Telecom reference architectures are today expected to:

- Integrate voice, messaging, email, and other VAS over fixed and mobile networks and back-end systems
- Be able to provision multiple services and service bundles
- Deliver converged voice, video, and data services
- Leverage the existing network infrastructure
- Provide real-time, flexible billing and charging systems to handle complex promotions, discounts, and settlements with multiple parties
- Support charging of advanced data services such as VoIP, on-demand services (e.g., video), IMS/SIP services, mobile money, content services, and IPTV
- Help in faster deployment of new services
- Serve as an effective platform for collaboration between network, IT, and business organizations
- Harness the potential of converging technology, networks, devices, and content to develop multimedia services and solutions of ever-increasing sophistication on a single Internet Protocol (IP)
- Ensure better service delivery and zero revenue leakage through real-time balance and credit management
- Lower operating costs to drive profitability

Enterprise Reference Architecture

The Enterprise Reference Architecture (RA) fills the gap between the concepts and vocabulary defined by the reference model and the implementation. Reference architecture provides detailed architectural information in a common format such that solutions can be repeatedly designed and deployed in a consistent, high-quality, supportable fashion. This paper describes the Reference Architecture for telecom application usage and how to achieve an enterprise-level Reference Architecture using SOA:

- Telecom Reference Architecture
- Enterprise SOA-based Reference Architecture

Telecom Reference Architecture

TeleManagement Forum's New Generation Operations Systems and Software (NGOSS) is an architectural framework for organizing, integrating, and implementing telecom systems. NGOSS is a component-based framework consisting of the following elements:

- The enhanced Telecom Operations Map (eTOM), a business process framework.
- The Shared Information/Data (SID) model, a comprehensive information framework that may be specialized for the needs of a particular organization.
- The Telecom Application Map (TAM), an application framework that depicts the functional footprint of applications relative to the horizontal processes within eTOM.
- The Technology Neutral Architecture (TNA), an integrated framework that is sustainable through technology changes.

NGOSS architecture standards are:

- Centralized data
- Loosely coupled distributed systems
- Application components/re-use
- A technology-neutral system framework with technology-specific implementations
- Interoperability with service provider data/processes
- More re-use of business components across multiple business scenarios
- Workflow automation

The traditional operator systems architecture consists of four layers:

- Business Support System (BSS) layer, with a focus toward customers and business partners; it manages order, subscriber, pricing, rating, and billing information.
- Operations Support System (OSS) layer, built around product, service, and resource inventories.
- Networks layer, consisting of network elements and 3rd-party systems.
- Integration layer, to maximize application communication and overall solution flexibility.

Reference architecture for telecom enterprises is depicted below.

Figure 2. Telecom Reference Architecture

The major building blocks of any telecom service provider architecture are as follows:

1. Customer Relationship Management

CRM encompasses the end-to-end lifecycle of the customer: customer initiation/acquisition, sales, ordering, and service activation, customer care and support, proactive campaigns, cross sell/up sell, and retention/loyalty.

CRM also includes the collection of customer information and its application to personalize, customize, and integrate delivery of service to a customer, as well as to identify opportunities for increasing the value of the customer to the enterprise.

The key functionalities related to Customer Relationship Management are:

- Manage the end-to-end lifecycle of a customer request for products.
- Create and manage customer profiles.
- Manage all interactions with customers – inquiries, requests, and responses.
- Provide updates to Billing and other southbound systems on customer/account-related changes such as customer/account creation, deletion, and modification, bill requests, final bills, duplicate bills, and credit limits, through middleware.
- Work with the Order Management System and the Product and Service Management components within CRM.
- Manage customer preferences, involving all the touch points and channels to the customer, including contact center, retail stores, dealers, self service, and field service, as well as via any media (phone, face to face, web, mobile device, chat, email, SMS, mail, the customer's bill, etc.).
- Support a single interface for customer contact details, preferences, account details, offers, customer premise equipment, bill details, bill cycle details, and customer interactions.

CRM applications interact with customers through customer touch points like portals, point-of-sale terminals, interactive voice response systems, etc. Customer requests are sent via fulfillment/provisioning to the billing system for order processing.

2. Billing and Revenue Management

Billing and Revenue Management handles the collection of appropriate usage records and the production of timely and accurate bills: providing pre-bill usage information and billing to customers, processing their payments, and performing payment collections. In addition, it handles customer inquiries about bills, provides billing inquiry status, and is responsible for resolving billing problems to the customer's satisfaction in a timely manner. This process grouping also supports prepayment for services.

The key functionalities provided by these applications are:

- Ensure that enterprise revenue is billed and invoices are delivered appropriately to customers.
- Manage customers' billing accounts, process their payments, perform payment collections, and monitor the status of the account balance.
- Ensure the timely and effective fulfillment of all customer bill inquiries and complaints.
- Collect usage records from mediation and ensure appropriate rating and discounting of all usage and pricing.
- Support revenue sharing and split charging, where usage is guided to an account different from the service consumer's.
- Support prepaid and post-paid rating.
- Send notifications on approaching or exceeding the usage thresholds enforced by the subscribed offer and/or set up by the customer.
- Support prepaid, post-paid, and hybrid customers (where some services are prepaid and the rest post-paid), and conversion from post-paid to prepaid and vice versa.
- Support different billing function requirements like charge prorating, promotion, discount, adjustment, waiver, write-off, accounts receivable, GL interface, late payment fee, credit control, dunning, account or service suspension, re-activation, expiry, termination, and contract violation penalty.
- Initiate direct debit to collect payment against an outstanding invoice.
- Send notifications to middleware on different events, for example payment receipt, pre-suspension, and threshold exceeded.

Billing systems typically get usage data from mediation systems for rating and billing. They get provisioning requests from order management systems and inquiries from CRM systems. Convergent and real-time billing systems can get usage details directly from network elements.

3. Mediation

Mediation systems transform/translate the raw or native Usage Data Records into a general format that is acceptable to billing for rating purposes.

The following lists the high-level roles and responsibilities executed by the Mediation system in the end-to-end solution:

- Collect Usage Data Records from different data sources (network elements, routers, servers) via different protocols and interfaces.
- Process Usage Data Records according to the source format.
- Validate Usage Data Records from each source.
- Segregate Usage Data Records coming from each source into multiple streams, based on the segregation requirements of the end application.
- Aggregate Usage Data Records from different sources based on aggregation rules, if any.
- Consolidate multiple Usage Data Records from each source.
- Deliver formatted Usage Data Records to different end applications like Billing, Interconnect, and Fraud Management.
- Generate an audit trail for incoming Usage Data Records and keep track of all Usage Data Records at the various stages of the mediation process.
- Check for duplicate Usage Data Records across files within a given time window.

4. Fulfillment

This area is responsible for providing customers with their requested products in a timely and correct manner. It translates the customer's business or personal need into a solution that can be delivered using the specific products in the enterprise's portfolio. This process informs the customers of the status of their purchase order and ensures completion on time, as well as a delighted customer. These processes are responsible for accepting and issuing orders. They deal with pre-order feasibility determination, credit authorization, order issuance, order status and tracking, customer updates on order activities, and customer notification on order completion. Order management and provisioning applications fall into this category.

The key functionalities provided by these applications are:

- Issuing new customer orders, modifying open customer orders, or canceling open customer orders;
- Verifying whether specific non-standard offerings sought by customers are feasible and supportable;
- Checking the creditworthiness of customers as part of the customer order process;
- Testing the completed offering to ensure it is working correctly;
- Updating the Customer Inventory Database to reflect that the specific product offering has been allocated, modified, or cancelled;
- Assigning and tracking customer provisioning activities;
- Managing customer provisioning jeopardy conditions; and
- Reporting progress on customer orders and other processes to the customer.

These applications typically get orders from CRM systems. They interact with network elements and billing systems for the fulfillment of orders.

5. Enterprise Management

This process area includes those processes that manage enterprise-wide activities and needs, or have application within the enterprise as a whole. They encompass all business management processes that:

- Are necessary to support the whole of the enterprise, including processes for financial management, legal management, regulatory management, and process, cost, and quality management;
- Are responsible for setting corporate policies, strategies, and directions, and for providing guidelines and targets for the whole of the business, including strategy development and planning for areas, such as Enterprise Architecture, that are integral to the direction and development of the business;
- Occur throughout the enterprise, including processes for project management, performance assessments, and cost assessments.

(i) Enterprise Risk Management

Enterprise Risk Management focuses on assuring that risks and threats to the enterprise's value and/or reputation are identified, and that appropriate controls are in place to minimize or eliminate the identified risks. The identified risks may be physical or logical/virtual. Successful risk management ensures that the enterprise can support its mission-critical operations, processes, applications, and communications in the face of serious incidents such as security threats/violations and fraud attempts.
Two key areas covered in Risk Management by telecom operators are:

Revenue Assurance: The revenue assurance system is responsible for identifying revenue loss scenarios across components/systems and helps in rectifying the problems. The following lists the high-level roles and responsibilities executed by the Revenue Assurance system in the end-to-end solution:

- Identify all usage information dropped when networks are being upgraded.
- Verify interconnect bills.
- Identify where services are routinely provisioned but never billed.
- Identify poor sales policies that are intensifying collections problems.
- Find leakage where usage is sent to an error bucket and never billed.
- Find leakage where field service, CRM, and network build-out are not optimized.

Fraud Management: This involves collecting data from different systems to identify abnormalities in traffic patterns, usage patterns, and subscription patterns, and to report suspicious activity that might suggest fraudulent usage of resources, resulting in revenue losses to the operator.

The key roles and responsibilities of this system component are as follows:

- The fraud management system will capture and monitor high usage (over a certain threshold) in terms of duration, value, and number of calls for each subscriber. The threshold for each subscriber is decided by the system and fixed automatically.
- Fraud management will be able to detect unauthorized access to services for certain subscribers, who may have been provided unauthorized services by employees. The component will alert the operator the very first time such illegal calls, or calls that are not billed, occur.
- The solution will include an alarm management system that delivers alarms to the operator/provider whenever it detects fraud, thus minimizing fraud by catching it the first time it occurs.
- The fraud management system will be capable of interfacing with switches, mediation systems, and billing systems.

(ii) Knowledge Management

This process focuses on knowledge management, technology research within the enterprise, and the evaluation of potential technology acquisitions.

The key responsibilities of knowledge base management are to:

- Maintain the knowledge base: creation and updating of the knowledge base on an ongoing basis.
- Search the knowledge base: search on keywords or browse by category.
- Maintain metadata: management of metadata on the knowledge base to ensure effective management and search.
- Run the report generator.
- Provide content: add content to the knowledge base, e.g., user guides and operational manuals.

(iii) Document Management

This focuses on maintaining a repository of all electronic documents, or images of paper documents, relevant to the enterprise, using a system.

(iv) Data Management

This manages data as a valuable resource for any enterprise. For telecom enterprises, the typical areas covered are Master Data Management, Data Warehousing, and Business Intelligence. It is also responsible for data governance, security, quality, and database management.

The key responsibilities of Data Management are:

- Using ETL, extract data from CRM, billing, web content, ERP, campaign management, financial, and network operations systems, together with asset management info, customer contact data, customer measures, benchmarks, and process data (e.g., process inputs, outputs, and measures), into the Enterprise Data Warehouse.
- Manage data traceability with the source, data-related business rules/decisions, data quality, data cleansing, data reconciliation, and competitor data; act as storage for all the enterprise data (customer profiles, products, offers, revenues, etc.).
- Get online updates through night-time replication or a physical backup process at regular frequency.
- Provide data access to business intelligence and other systems for their analysis, report generation, and use.

(v) Business Intelligence

It uses the enterprise data to provide the various analyses and reports that contain prospects and analytics for customer retention, acquisition of new customers due to offers, and SLAs. It generates the right, optimized plans and bolt-ons for the customers.

The following lists the high-level roles and responsibilities executed by the Business Intelligence system at the enterprise level:

- It will do pattern analysis and report problems.
- It will do data analysis: statistical analysis, data profiling, affinity analysis of data, and customer-segment-wise usage patterns on offers, products, services, and revenue generation against services and customer segments.
- It will do performance (business, system, and forecast) analysis, churn propensity, response time, and SLA analysis.
- It will support online and offline analysis, and report drill-down capability.
- It will collect, store, and report various SLA data.
- It will provide the necessary intelligence for marketing and working on campaigns, with cost-benefit analysis and predictions.
- It will advise on customer promotions with additional services based on the loyalty and credit history of the customer.
- It will interface with the Enterprise Data Management system for the data needed to run reports and analysis tasks, and with campaign schedules, based on historical success evidence.

(vi) Stakeholder and External Relations Management

It manages the enterprise's relationship with stakeholders and outside entities. Stakeholders include shareholders, employee organizations, etc. Outside entities include regulators, the local community, and unions. Some of the processes within this grouping are Shareholder Relations, External Affairs, Labor Relations, and Public Relations.

(vii) Enterprise Resource Planning

It is used to manage internal and external resources, including tangible assets, financial resources, materials, and human resources. Its purpose is to facilitate the flow of information between all business functions inside the boundaries of the enterprise and to manage the connections to outside stakeholders. ERP systems consolidate all business operations into a uniform, enterprise-wide system environment.

The key roles and responsibilities of the Enterprise System are given below:

- It will handle responsibilities such as core accounting, financial, and management reporting.
- It will interface with CRM for capturing customer accounts and details.
- It will interface with billing to capture billing revenue and other financial data.
- It will be responsible for executing the dunning process; billing will send the required feed to ERP for execution of dunning.
- It will interface with CRM and Billing through batch interfaces.

Enterprise management systems are like horizontals in the enterprise and typically interact with all major telecom systems.
For example, an ERP system interacts with CRM, Fulfillment, and Billing systems for different kinds of data exchanges.

6. External Interfaces/Touch Points

The typical external parties are customers, suppliers/partners, employees, shareholders, and other stakeholders. External interactions from/to a service provider to other parties can be achieved by a variety of mechanisms, including:

- Exchange of emails or faxes
- Call centers
- Web portals
- Business-to-Business (B2B) automated transactions

These applications provide an Internet-technology-driven interface to external parties to undertake a variety of business functions directly for themselves. They can provide fully or partially automated service to external parties through various touch points.

Typical characteristics of these touch points are:

- A pre-integrated self-service system, including a stand-alone web framework or an integration front end with a portal engine
- A self-services layer exposing atomic web services/APIs for reuse by multiple systems across the architectural environment
- Portlet-driven connectivity exposing data and services interoperability through a portal engine or web application

These touch points mostly interact with the CRM systems for requests, inquiries, and responses.

7. Middleware

This component is primarily responsible for integrating the different system components under a common platform. It should provide a standards-based platform for building Service Oriented Architecture and composite applications. The following lists the high-level roles and responsibilities executed by the Middleware component in the end-to-end solution:

- Act as an integration framework, covering to-and-fro interfaces.
- Provide a web service framework with a service registry.
- Support an SOA framework with an SOA service registry.
- Handle data transformation, translation, and mapping of data points on each of the interfaces from/to Middleware to other components.
- Receive data from the caller/activator and/or forward the data to the recipient system in XML format.
- Use standard XML for data exchange.
- Provide the response back to the service/call initiator.
- Provide tracking until the response completes.
- Keep a store of transitional data against each call/transaction.
- Interface through Middleware to get any information that is possible and allowed from the existing systems to enterprise systems, e.g., customer profile and customer history.
- Provide the data in a common, unified format to the SOA calls across systems, following the Enterprise Architecture directive.
- Provide an audit trail for all transactions handled by the component.

8. Network Elements

The term Network Element means a facility or equipment used in the provision of a telecommunications service. The term also includes features, functions, and capabilities that are provided by means of such facility or equipment, including subscriber numbers, databases, signaling systems, and information sufficient for billing and collection or used in the transmission, routing, or other provision of a telecommunications service.

Typical network elements in a GSM network are the Home Location Register (HLR), Intelligent Network (IN), Mobile Switching Center (MSC), SMS Center (SMSC), and network elements for other value-added services like Push-to-talk (PTT) and Ring Back Tone (RBT).
Network elements are invoked when subscribers use their telecom devices for any kind of usage. These elements generate usage data and pass it on to downstream systems, like mediation and billing systems, for rating and billing. They also integrate with provisioning systems for order/service fulfillment.

9. 3rd Party Applications

3rd-party systems are applications like content providers, payment gateways, point-of-sale terminals, and databases/applications maintained by the government.

Depending on applicability and the type of functionality provided by the 3rd-party application, integration with different telecom systems like CRM, provisioning, and billing will be done.

10. Service Delivery Platform

A service delivery platform (SDP) provides the architecture for the rapid deployment, provisioning, execution, management, and billing of value-added telecom services. SDPs are based on the concept of SOA and layered architecture. They support the delivery of voice, data services, and content in a network- and device-independent fashion. They allow application developers to aggregate network capabilities, services, and sources of content. SDPs typically contain layers for web services exposure, service application development, and network abstraction.

SOA Reference Architecture

The SOA concept is based on the principle of developing reusable business services and building applications by composing those services, instead of building monolithic applications in silos. It is about bridging the gap between business and IT through a set of business-aligned IT services, using a set of design principles, patterns, and techniques.

In an SOA, resources are made available to participants in a value net, enterprise, or line of business (typically spanning multiple applications within an enterprise or across multiple enterprises). It consists of a set of business-aligned IT services that collectively fulfill an organization's business processes and goals. We can choreograph these services into composite applications and invoke them through standard protocols. SOA, apart from agility and reusability, enables:

- The business to specify processes as orchestrations of reusable services
- Technology-agnostic business design, with technology hidden behind the service interface
- A contract-like interaction between business and IT, based on service SLAs
- Accountability and governance, better aligned to business services
- Untangling of application interconnections by allowing access only through service interfaces, reducing the daunting side effects of change
- Reduced pressure to replace legacy applications and an extended lifetime for them, through encapsulation in services
- A cloud computing paradigm, using web services technologies, that makes service outsourcing possible on an on-demand, utility-like, pay-per-usage basis

The following section presents the Reference Architecture of the logical view for the telecom solution. New custom-built applications need to align with this logical architecture in the long run to achieve EA benefits.

Packaged implementation applications, such as ERP and billing applications, need to expose their functions as service providers (which other applications consume) and interact with other applications as service consumers.

COTS applications need to expose services through wrappers such as adapters, to utilize existing resources and at the same time achieve Enterprise Architecture goals and objectives.
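To illustrate the service-interface principle described above, here is a minimal, purely hypothetical C# (WCF) sketch of an application-access service contract; the type and operation names are invented for this example and are not part of NGOSS, the TM Forum frameworks, or any product:

using System.Runtime.Serialization;
using System.ServiceModel;

// Hypothetical contract: CRM functionality exposed as a reusable,
// technology-neutral service. Consumers (billing, portals, middleware)
// depend only on this contract, never on the CRM implementation behind it.
[DataContract]
public class CustomerProfile
{
    [DataMember] public string CustomerId { get; set; }
    [DataMember] public string Name { get; set; }
    [DataMember] public string Segment { get; set; }
}

[ServiceContract]
public interface ICustomerProfileService
{
    [OperationContract]
    CustomerProfile GetProfile(string customerId);

    [OperationContract]
    void UpdateContactDetails(string customerId, string email, string phone);
}

Because callers see only the contract, the CRM behind it can be replaced or upgraded without touching its consumers, which is exactly the loose coupling the integration layer is meant to preserve.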
The following are the various layers for enterprise-level deployment of SOA. The diagram captures the abstract view of the enterprise SOA layers and the important components of each layer. Layered architecture means decomposition of services such that most interactions occur between adjacent layers; however, there is no strict rule that top layers should not directly communicate with bottom layers.

The diagram below represents the important logical pieces that would result from the overall SOA transformation.

Figure 3. Enterprise SOA Reference Architecture

1. Operational System Layer: This layer consists of all packaged applications like CRM and ERP, custom-built applications, COTS-based applications like Billing and Revenue Management and Fulfillment, and the enterprise databases that are essential and contribute directly or indirectly to the enterprise OSS/BSS transformation.

ERP holds the data for Asset Lifecycle Management, Supply Chain, Advanced Procurement, Human Capital Management, etc. CRM holds the data related to Order, Sales, Marketing, Customer Care, Partner Relationship Management, Loyalty, etc. Content Management handles enterprise search and query.

The billing application consists of the following components: Collections Management, Customer Billing Management, Invoices, Real-Time Rating, Discounting, and Applying of Charges.

Enterprise databases hold both the application and service data, whether structured or unstructured. MDM (master data) consists mainly of Customer, Order, Product, and Service data.

2. Enterprise Component Layer: This layer consists of the Application Services and Common Services that are responsible for realizing the functionality and maintaining the QoS of the exposed services. This layer uses container-based technologies, such as application servers, to implement the components, workload management, high availability, and load balancing.

Application Services: This service layer enables application, technology, and database abstraction so that the complex accessing logic is hidden from the other service layers. This is a basic service layer that exposes application functionalities and data as reusable services.
The three types of application access services are:

- Application Access Service: Exposes application-level functionalities as reusable services for BSS-to-BSS and BSS-to-OSS integration. This layer is enabled using disparate technologies such as web services, integration servers, and adaptors.
- Data Access Service: Exposes application data services as reusable reference data services. This is done via direct interaction with application data, and it provides federated query.
- Network Access Service: Exposes the provisioning layer as a reusable service for OSS-to-OSS integration. This integration service emphasizes the need for high performance, stateless process flows, and distributed design.

Common Services encompass the management of structured, semi-structured, and unstructured data, such as information services, portal services, interaction services, infrastructure services, and security services.

3. Integration Layer: This consists of service infrastructure components like the service bus, a service gateway for partner integration, a service registry, a service repository, and a BPEL processor. The service bus carries the service invocation payloads/messages between consumers and providers. The other important functions expected from it are itinerary-based routing, distributed caching of routing information, transformations, and all qualities of service for messaging, such as reliability, scalability, and availability. The service registry holds all contracts (WSDL) of services and helps developers locate or discover services at design time or runtime.

The BPEL processor is useful for orchestrating services to compose a complex business scenario or process. Workflow and business rules management are also required to support manual triggering of certain activities within a business process, based on the rules set up and the state machine information. The application, data, and service mediation layer typically forms the overall composite application development framework, or SOA framework.

4. Business Process Layer: These are typically the intermediate services layer and represent shared business process services. At the enterprise level, these services come from the Customer Management, Order Management, Billing, Finance, and Asset Management application domains.

5. Access Layer: This layer consists of portals for the enterprise and provides a single view of enterprise information management and dashboard services.

6. Channel Layer: This consists of the various devices and applications that form part of the extended enterprise, and the browsers through which users access the applications.

7. Client Layer: This designates the different types of users accessing the enterprise applications. The type of user is typically an important factor in determining the level of access to applications.

8. Vertical pieces like management, monitoring, security, and development cut across all horizontal layers. Management and monitoring involve all aspects of SOA (services, SLAs, and other QoS lifecycle processes) for both applications and services, surrounding SOA governance.

9. EA Governance, Reference Architecture, Roadmap, Principles, and Best Practices: EA governance is important in terms of providing the overall direction to the SOA implementation within the enterprise.
This involves board-level involvement, in addition to business and IT executives. At a high level, it involves managing the SOA projects' implementation, managing the SOA infrastructure, and controlling the entire effort through fine-tuned IT processes in accordance with COBIT (Control Objectives for Information Technology).

Devising tools and techniques to promote a reuse culture and the SOA way of doing things requires competency centers to be established, in addition to training the workforce to take up new roles suited to the SOA journey.

Conclusions

Reference architectures can serve as the basis for disparate architecture efforts throughout the organization, even if they use different tools and technologies. Reference architectures provide best practices and approaches in a way that is independent of how a vendor deals with technology and standards. They model the abstract architectural elements for an enterprise independent of the technologies, protocols, and products that are used to implement an SOA. Telecom enterprises today face significant business and technology challenges due to growing competition, a multitude of services, and convergence. Adopting architectural best practices can go a long way toward meeting these challenges. The use of an SOA-based architecture for communication with each of the external systems, like Billing and CRM, in the OSS/BSS system makes the architecture very loosely coupled, with greater flexibility: any change in the external systems is absorbed at the Integration Layer without affecting the rest of the ecosystem. The use of a Business Process Management (BPM) tool makes the management and maintenance of business processes easy, with better performance in terms of lead time, quality, and cost. And since the architecture is based on standards, it lowers the cost of deploying and managing OSS/BSS applications over their lifecycles.

    Read the article

  • What's New in ASP.NET 4

    - by Navaneeth
The .NET Framework version 4 includes enhancements for ASP.NET 4 in targeted areas. Visual Studio 2010 and Microsoft Visual Web Developer Express also include enhancements and new features for improved Web development. This document provides an overview of many of the new features that are included in the upcoming release.

This topic contains the following sections:

- ASP.NET Core Services
- ASP.NET Web Forms
- ASP.NET MVC
- Dynamic Data
- ASP.NET Chart Control
- Visual Web Developer Enhancements
- Web Application Deployment with Visual Studio 2010
- Enhancements to ASP.NET Multi-Targeting

ASP.NET Core Services

ASP.NET 4 introduces many features that improve core ASP.NET services such as output caching and session state storage.

Extensible Output Caching

Since ASP.NET 1.0 was released, output caching has enabled developers to store the generated output of pages, controls, and HTTP responses in memory. On subsequent Web requests, ASP.NET can serve content more quickly by retrieving the generated output from memory instead of regenerating the output from scratch. However, this approach has a limitation: generated content always has to be stored in memory. On servers that experience heavy traffic, the memory requirements for output caching can compete with the memory requirements of other parts of a Web application.

ASP.NET 4 adds extensibility to output caching that enables you to configure one or more custom output-cache providers. Output-cache providers can use any storage mechanism to persist HTML content. These storage options can include local or remote disks, cloud storage, and distributed cache engines. Output-cache provider extensibility in ASP.NET 4 lets you design more aggressive and more intelligent output-caching strategies for Web sites. For example, you can create an output-cache provider that caches the "Top 10" pages of a site in memory, while caching pages that get lower traffic on disk. Alternatively, you can cache every vary-by combination for a rendered page, but use a distributed cache so that the memory consumption is offloaded from front-end Web servers.

You create a custom output-cache provider as a class that derives from the OutputCacheProvider type. You can then configure the provider in the Web.config file by using the new providers subsection of the outputCache element. For more information and for examples that show how to configure the output cache, see outputCache Element for caching (ASP.NET Settings Schema). For more information about the classes that support caching, see the documentation for the OutputCache and OutputCacheProvider classes.

By default, in ASP.NET 4, all HTTP responses, rendered pages, and controls use the in-memory output cache. The defaultProvider attribute for ASP.NET is AspNetInternalProvider. You can change the default output-cache provider used for a Web application by specifying a different provider name for the defaultProvider attribute. In addition, you can select different output-cache providers for individual controls and for individual requests, and programmatically specify which provider to use. For more information, see the HttpApplication.GetOutputCacheProviderName(HttpContext) method.
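As a rough sketch of the shape of such a provider (the class name and the in-memory dictionary used as the "store" here are illustrative only; a real provider would write to disk, cloud storage, or a distributed cache), you override the four members Get, Add, Set, and Remove:

using System;
using System.Collections.Concurrent;
using System.Web.Caching;

public class SimpleDictionaryOutputCacheProvider : OutputCacheProvider
{
    // Illustrative store: cached entry plus its UTC expiry time.
    private static readonly ConcurrentDictionary<string, Tuple<object, DateTime>> store =
        new ConcurrentDictionary<string, Tuple<object, DateTime>>();

    public override object Get(string key)
    {
        Tuple<object, DateTime> entry;
        if (store.TryGetValue(key, out entry) && entry.Item2 > DateTime.UtcNow)
            return entry.Item1;
        store.TryRemove(key, out entry);   // drop expired or missing entries
        return null;
    }

    public override object Add(string key, object entry, DateTime utcExpiry)
    {
        // Add must not overwrite an existing, unexpired entry.
        object existing = Get(key);
        if (existing != null)
            return existing;
        Set(key, entry, utcExpiry);
        return entry;
    }

    public override void Set(string key, object entry, DateTime utcExpiry)
    {
        store[key] = Tuple.Create(entry, utcExpiry);
    }

    public override void Remove(string key)
    {
        Tuple<object, DateTime> removed;
        store.TryRemove(key, out removed);
    }
}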
The easiest way to choose a different output-cache provider for different Web user controls is to do so declaratively by using the new providerName attribute in a page or control directive, as shown in the following example:

<%@ OutputCache Duration="60" VaryByParam="None" providerName="DiskCache" %>

Preloading Web Applications

Some Web applications must load large amounts of data or must perform expensive initialization processing before serving the first request. In earlier versions of ASP.NET, for these situations you had to devise custom approaches to "wake up" an ASP.NET application and then run initialization code during the Application_Load method in the Global.asax file. To address this scenario, a new application preload manager (autostart feature) is available when ASP.NET 4 runs on IIS 7.5 on Windows Server 2008 R2. The preload feature provides a controlled approach for starting up an application pool, initializing an ASP.NET application, and then accepting HTTP requests. It lets you perform expensive application initialization prior to processing the first HTTP request. For example, you can use the application preload manager to initialize an application and then signal a load balancer that the application is initialized and ready to accept HTTP traffic. To use the application preload manager, an IIS administrator sets an application pool in IIS 7.5 to be automatically started by using the following configuration in the applicationHost.config file:

<applicationPools>
  <add name="MyApplicationPool" startMode="AlwaysRunning" />
</applicationPools>

Because a single application pool can contain multiple applications, you specify the individual applications to be automatically started by using the following configuration in the applicationHost.config file:

<sites>
  <site name="MySite" id="1">
    <application path="/"
        serviceAutoStartEnabled="true"
        serviceAutoStartProvider="PrewarmMyCache" >
      <!-- Additional content -->
    </application>
  </site>
</sites>
<!-- Additional content -->
<serviceAutoStartProviders>
  <add name="PrewarmMyCache" type="MyNamespace.CustomInitialization, MyLibrary" />
</serviceAutoStartProviders>

When an IIS 7.5 server is cold-started or when an individual application pool is recycled, IIS 7.5 uses the information in the applicationHost.config file to determine which Web applications have to be automatically started. For each application that is marked for preload, IIS 7.5 sends a request to ASP.NET 4 to start the application in a state during which the application temporarily does not accept HTTP requests. While it is in this state, ASP.NET instantiates the type defined by the serviceAutoStartProvider attribute (as shown in the previous example) and calls into its public entry point. You create a managed preload type that has the required entry point by implementing the IProcessHostPreloadClient interface, as shown in the following example:

public class CustomInitialization : System.Web.Hosting.IProcessHostPreloadClient
{
    public void Preload(string[] parameters)
    {
        // Perform initialization.
    }
}

After your initialization code runs in the Preload method and the method returns, the ASP.NET application is ready to process requests.

Permanently Redirecting a Page

Content in Web applications is often moved over the lifetime of the application. This can cause links to become out of date, such as the links that are returned by search engines. In ASP.NET, developers have traditionally handled requests to old URLs by using the Redirect method to forward a request to the new URL.
However, the Redirect method issues an HTTP 302 (Found) response (which is used for a temporary redirect). This results in an extra HTTP round trip. ASP.NET 4 adds a RedirectPermanent helper method that makes it easy to issue HTTP 301 (Moved Permanently) responses, as in the following example:

RedirectPermanent("/newpath/foroldcontent.aspx");

Search engines and other user agents that recognize permanent redirects will store the new URL that is associated with the content, which eliminates the unnecessary round trip made by the browser for temporary redirects.

Session State Compression

By default, ASP.NET provides two options for storing session state across a Web farm. The first option is a session state provider that invokes an out-of-process session state server. The second option is a session state provider that stores data in a Microsoft SQL Server database. Because both options store state information outside a Web application's worker process, session state has to be serialized before it is sent to remote storage. If a large amount of data is saved in session state, the size of the serialized data can become very large. ASP.NET 4 introduces a new compression option for both kinds of out-of-process session state providers. Using this option, applications that have spare CPU cycles on Web servers can achieve substantial reductions in the size of serialized session state data. You set this option using the new compressionEnabled attribute of the sessionState element in the configuration file. When the compressionEnabled configuration option is set to true, ASP.NET compresses (and decompresses) serialized session state by using the .NET Framework GZipStream class. The following example shows how to set this attribute:

<sessionState
  mode="SqlServer"
  sqlConnectionString="data source=dbserver;Initial Catalog=aspnetstate"
  allowCustomSqlDatabase="true"
  compressionEnabled="true"
/>

ASP.NET Web Forms

Web Forms has been a core feature of ASP.NET since the release of ASP.NET 1.0. Many enhancements have been made in this area for ASP.NET 4, such as the following: the ability to set meta tags; more control over view state; support for recently introduced browsers and devices; easier ways to work with browser capabilities; support for using ASP.NET routing with Web Forms; more control over generated IDs; the ability to persist selected rows in data controls; more control over rendered HTML in the FormView and ListView controls; filtering support for data source controls; and enhanced support for Web standards and accessibility.

Setting Meta Tags with the Page.MetaKeywords and Page.MetaDescription Properties

Two properties have been added to the Page class: MetaKeywords and MetaDescription. These two properties represent corresponding meta tags in the HTML rendered for a page, as shown in the following example:

<head id="Head1" runat="server">
  <title>Untitled Page</title>
  <meta name="keywords" content="keyword1, keyword2" />
  <meta name="description" content="Description of my page" />
</head>

These two properties work like the Title property does, and they can be set in the @ Page directive. For more information, see Page.MetaKeywords and Page.MetaDescription.

Enabling View State for Individual Controls

A new property has been added to the Control class: ViewStateMode. You can use this property to disable view state for all controls on a page except those for which you explicitly enable view state.
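For example, a page could disable view state globally and opt back in only for the controls that need it. A minimal markup sketch (the page and control names here are hypothetical):

<%@ Page Language="C#" ViewStateMode="Disabled" CodeFile="Default.aspx.cs" Inherits="_Default" %>
<%-- Controls inherit the page's Disabled setting unless they opt back in. --%>
<asp:Label ID="StaticLabel" runat="server" Text="No view state needed here" />
<asp:Label ID="StatefulLabel" runat="server" ViewStateMode="Enabled" Text="Keeps its state across postbacks" />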
View state data is included in a page's HTML and increases the amount of time it takes to send a page to the client and post it back. Storing more view state than is necessary can cause a significant decrease in performance. In earlier versions of ASP.NET, you could reduce the impact of view state on a page's performance by disabling view state for specific controls. But sometimes it is easier to enable view state for the few controls that need it instead of disabling it for the many that do not. For more information, see Control.ViewStateMode.

Support for Recently Introduced Browsers and Devices

ASP.NET includes a feature named browser capabilities that lets you determine the capabilities of the browser that a user is using. Browser capabilities are represented by the HttpBrowserCapabilities object, which is stored in the HttpRequest.Browser property. Information about a particular browser's capabilities is defined by a browser definition file. In ASP.NET 4, these browser definition files have been updated to contain information about recently introduced browsers and devices such as Google Chrome, Research in Motion BlackBerry smartphones, and the Apple iPhone. Existing browser definition files have also been updated. For more information, see How to: Upgrade an ASP.NET Web Application to ASP.NET 4 and ASP.NET Web Server Controls and Browser Capabilities. The browser definition files that are included with ASP.NET 4 are shown in the following list:
• blackberry.browser
• chrome.browser
• Default.browser
• firefox.browser
• gateway.browser
• generic.browser
• ie.browser
• iemobile.browser
• iphone.browser
• opera.browser
• safari.browser

A New Way to Define Browser Capabilities

ASP.NET 4 includes a new feature referred to as browser capabilities providers. As the name suggests, this lets you build a provider, which in turn lets you write custom code to determine browser capabilities. In ASP.NET version 3.5 Service Pack 1, you define browser capabilities in an XML file. This file resides in a machine-level folder or an application-level folder. Most developers do not need to customize these files, but for those who do, the provider approach can be easier than dealing with complex XML syntax. The provider approach makes it possible to simplify the process by implementing a common browser definition syntax, a database that contains up-to-date browser definitions, or even a Web service for such a database. For more information about the new browser capabilities provider, see the What's New for ASP.NET 4 White Paper.

Routing in ASP.NET 4

ASP.NET 4 adds built-in support for routing with Web Forms. Routing is a feature that was introduced with ASP.NET 3.5 SP1 and lets you configure an application to use URLs that are meaningful to users and to search engines because they do not have to specify physical file names. This can make your site more user-friendly and your site content more discoverable by search engines. For example, the URL for a page that displays product categories in your application might look like the following example:

http://website/products.aspx?categoryid=12

By using routing, you can use the following URL to render the same information:

http://website/products/software

The second URL lets the user know what to expect and can result in significantly improved rankings in search engine results. The new features include the following: The PageRouteHandler class is a simple HTTP handler that you use when you define routes. You no longer have to write a custom route handler.
The HttpRequest.RequestContext and Page.RouteData properties make it easier to access information that is passed in URL parameters. The RouteUrl expression provides a simple way to create a routed URL in markup. The RouteValue expression provides a simple way to extract URL parameter values in markup. The RouteParameter class makes it easier to pass URL parameter values to a query for a data source control (similar to FormParameter). Finally, you no longer have to change the Web.config file to enable routing. For more information about routing, see the following topics: ASP.NET Routing; Walkthrough: Using ASP.NET Routing in a Web Forms Application; How to: Define Routes for Web Forms Applications; How to: Construct URLs from Routes; How to: Access URL Parameters in a Routed Page.

Setting Client IDs

The new ClientIDMode property makes it easier to write client script that references the HTML elements rendered for server controls. The increasing use of Microsoft Ajax makes the need to do this more common. For example, you may have a data control that renders a long list of products with prices, and you want to use client script to make a Web service call and update individual prices in the list as they change, without refreshing the entire page. Typically you get a reference to an HTML element in client script by using the document.getElementById method, passing it the value of the id attribute of the HTML element you want to reference. For elements that are rendered by ASP.NET server controls, earlier versions of ASP.NET could make this difficult or impossible: you could not always predict the id values that ASP.NET would generate, and the generated values could be very long. The problem was especially difficult for data controls that generate multiple rows for a single instance of the control in your markup. ASP.NET 4 adds two new algorithms for generating id attributes. These algorithms can generate id attributes that are easier to work with in client script because they are more predictable and simpler. For more information about how to use the new algorithms, see the following topics: ASP.NET Web Server Control Identification; Walkthrough: Making Data-Bound Controls Easier to Access from JavaScript; Walkthrough: Making Controls Located in Web User Controls Easier to Access from JavaScript; How to: Access Controls from JavaScript by ID.

Persisting Row Selection in Data Controls

The GridView and ListView controls enable users to select a row. In previous versions of ASP.NET, row selection was based on the row index on the page. For example, if you select the third item on page 1 and then move to page 2, the third item on page 2 is selected. In most cases, it is more desirable not to select any rows on page 2. ASP.NET 4 supports persisted selection, a feature that was initially supported only in Dynamic Data projects in the .NET Framework 3.5 SP1. When this feature is enabled, the selected item is based on the row data key. This means that if you select the third row on page 1 and move to page 2, nothing is selected on page 2. When you move back to page 1, the third row is still selected. This is much more natural behavior than the behavior in earlier versions of ASP.NET. Persisted selection is now supported for the GridView and ListView controls in all projects.
You can enable this feature in the GridView control, for example, by setting the EnablePersistedSelection property, as shown in the following example:

<asp:GridView id="GridView2" runat="server" EnablePersistedSelection="true">
</asp:GridView>

FormView Control Enhancements

The FormView control has been enhanced to make it easier to style the contents of the control with CSS. In previous versions of ASP.NET, the FormView control rendered its contents using an item template. This made styling more difficult in the markup because unexpected table row and table cell tags were rendered by the control. In ASP.NET 4, the FormView control supports a new property, RenderOuterTable. When this property is set to false, as shown in the following example, the table tags are not rendered, which makes it easier to apply CSS styles to the contents of the control:

<asp:FormView ID="FormView1" runat="server" RenderOuterTable="false">

For more information, see FormView Web Server Control Overview.

ListView Control Enhancements

The ListView control, which was introduced in ASP.NET 3.5, has all the functionality of the GridView control while giving you complete control over the output. This control has been made easier to use in ASP.NET 4. The earlier version of the control required that you specify a layout template that contained a server control with a known ID. The following markup shows a typical example of how to use the ListView control in ASP.NET 3.5:

<asp:ListView ID="ListView1" runat="server">
  <LayoutTemplate>
    <asp:PlaceHolder ID="itemPlaceholder" runat="server"></asp:PlaceHolder>
  </LayoutTemplate>
  <ItemTemplate>
    <%# Eval("LastName") %>
  </ItemTemplate>
</asp:ListView>

In ASP.NET 4, the ListView control does not require a layout template. The markup shown in the previous example can be replaced with the following markup:

<asp:ListView ID="ListView1" runat="server">
  <ItemTemplate>
    <%# Eval("LastName") %>
  </ItemTemplate>
</asp:ListView>

For more information, see ListView Web Server Control Overview.

Filtering Data with the QueryExtender Control

A very common task for developers who create data-driven Web pages is to filter data. This has traditionally been performed by building Where clauses in data source controls. That approach can be complicated, and in some cases the Where syntax does not let you take advantage of the full functionality of the underlying database. To make filtering easier, a new QueryExtender control has been added in ASP.NET 4. This control can be added to EntityDataSource or LinqDataSource controls in order to filter the data returned by these controls. The QueryExtender control relies on LINQ, but you do not need to know how to write LINQ queries to use it. The QueryExtender control supports a variety of filter options:
• SearchExpression: Searches a field or fields for string values and compares them to a specified string value.
• RangeExpression: Searches a field or fields for values in a range specified by a pair of values.
• PropertyExpression: Compares a specified value to a property value in a field. If the expression evaluates to true, the data that is being examined is returned.
• OrderByExpression: Sorts data by a specified column and sort direction.
• CustomExpression: Calls a function that defines a custom filter in the page.

For more information, see QueryExtender Web Server Control Overview.
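As a sketch of what this looks like in markup (the data source, table, and control names are hypothetical), a SearchExpression can filter a LinqDataSource by the text typed into a TextBox, with no Where clause and no hand-written LINQ:

<asp:TextBox ID="SearchTextBox" runat="server" />
<asp:LinqDataSource ID="ProductsDataSource" runat="server"
    ContextTypeName="SampleDataContext" TableName="Products" />
<asp:QueryExtender ID="ProductsQueryExtender" runat="server"
    TargetControlID="ProductsDataSource">
  <asp:SearchExpression SearchType="StartsWith" DataFields="ProductName">
    <asp:ControlParameter ControlID="SearchTextBox" />
  </asp:SearchExpression>
</asp:QueryExtender>
<asp:GridView ID="ProductsGrid" runat="server" DataSourceID="ProductsDataSource" />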
Enhanced Support for Web Standards and Accessibility

Earlier versions of ASP.NET controls sometimes rendered markup that does not conform to HTML, XHTML, or accessibility standards. ASP.NET 4 eliminates most of these exceptions. For details about how the HTML that is rendered by each control meets accessibility standards, see ASP.NET Controls and Accessibility.

CSS for Controls that Can be Disabled

In ASP.NET 3.5, when a control is disabled (see WebControl.Enabled), a disabled attribute is added to the rendered HTML element. For example, the following markup creates a Label control that is disabled:

<asp:Label id="Label1" runat="server" Text="Test" Enabled="false" />

In ASP.NET 3.5, the previous control settings generate the following HTML:

<span id="Label1" disabled="disabled">Test</span>

In HTML 4.01, the disabled attribute is not considered valid on span elements. It is valid only on input elements, because it specifies that they cannot be accessed. On display-only elements such as span elements, browsers typically support rendering for a disabled appearance, but a Web page that relies on this non-standard behavior is not robust according to accessibility standards. For display-only elements, you should use CSS to indicate a disabled visual appearance. Therefore, by default ASP.NET 4 generates the following HTML for the control settings shown previously:

<span id="Label1" class="aspNetDisabled">Test</span>

You can change the value of the class attribute that is rendered by default when a control is disabled by setting the DisabledCssClass property.

CSS for Validation Controls

In ASP.NET 3.5, validation controls render a default color of red as an inline style. For example, the following markup creates a RequiredFieldValidator control:

<asp:RequiredFieldValidator ID="RequiredFieldValidator1" runat="server" ErrorMessage="Required Field" ControlToValidate="RadioButtonList1" />

ASP.NET 3.5 renders the following HTML for the validator control:

<span id="RequiredFieldValidator1" style="color:Red;visibility:hidden;">RequiredFieldValidator</span>

By default, ASP.NET 4 does not render an inline style to set the color to red. An inline style is used only to hide or show the validator, as shown in the following example:

<span id="RequiredFieldValidator1" style="visibility:hidden;">RequiredFieldValidator</span>

Therefore, ASP.NET 4 does not automatically show error messages in red. For information about how to use CSS to specify a visual style for a validation control, see Validating User Input in ASP.NET Web Pages.

CSS for the Hidden Fields Div Element

ASP.NET uses hidden fields to store state information such as view state and control state. These hidden fields are contained by a div element. In ASP.NET 3.5, this div element does not have a class attribute or an id attribute. Therefore, CSS rules that affect all div elements could unintentionally cause this div to be visible. To avoid this problem, ASP.NET 4 renders the div element for hidden fields with a CSS class that you can use to differentiate the hidden-fields div from others. The new class value is shown in the following example:

<div class="aspNetHidden">

CSS for the Table, Image, and ImageButton Controls

By default, in ASP.NET 3.5, some controls set the border attribute of rendered HTML to zero (0). The following example shows HTML that is generated by the Table control in ASP.NET 3.5:

<table id="Table2" border="0">

The Image control and the ImageButton control also do this.
Because this is not necessary, and because it embeds visual formatting information that should be provided by using CSS, the attribute is not generated in ASP.NET 4.

CSS for the UpdatePanel and UpdateProgress Controls

In ASP.NET 3.5, the UpdatePanel and UpdateProgress controls do not support expando attributes. This makes it impossible to set a CSS class on the HTML elements that they render. In ASP.NET 4 these controls have been changed to accept expando attributes, as shown in the following example:

<asp:UpdatePanel runat="server" class="myStyle">
</asp:UpdatePanel>

The following HTML is rendered for this markup:

<div id="ctl00_MainContent_UpdatePanel1" class="myStyle">
</div>

Eliminating Unnecessary Outer Tables

In ASP.NET 3.5, the HTML that is rendered for the following controls is wrapped in a table element whose purpose is to apply inline styles to the entire control: FormView, Login, PasswordRecovery, and ChangePassword. If you use templates to customize the appearance of these controls, you can specify CSS styles in the markup that you provide in the templates. In that case, no extra outer table is required. In ASP.NET 4, you can prevent the table from being rendered by setting the new RenderOuterTable property to false.

Layout Templates for Wizard Controls

In ASP.NET 3.5, the Wizard and CreateUserWizard controls generate an HTML table element that is used for visual formatting. In ASP.NET 4 you can use a LayoutTemplate element to specify the layout. If you do this, the HTML table element is not generated. In the template, you create placeholder controls to indicate where items should be dynamically inserted into the control. (This is similar to how the template model for the ListView control works.) For more information, see the Wizard.LayoutTemplate property.

New HTML Formatting Options for the CheckBoxList and RadioButtonList Controls

ASP.NET 3.5 uses HTML table elements to format the output for the CheckBoxList and RadioButtonList controls. To provide an alternative that does not use tables for visual formatting, ASP.NET 4 adds two new options to the RepeatLayout enumeration:
• UnorderedList: The HTML output is formatted by using ul and li elements instead of a table.
• OrderedList: The HTML output is formatted by using ol and li elements instead of a table.

For examples of the HTML that is rendered for the new options, see the RepeatLayout enumeration.

Header and Footer Elements for the Table Control

In ASP.NET 3.5, the Table control can be configured to render thead and tfoot elements by setting the TableSection property of the TableHeaderRow class and the TableFooterRow class. In ASP.NET 4 these properties are set to the appropriate values by default.

CSS and ARIA Support for the Menu Control

In ASP.NET 3.5, the Menu control uses HTML table elements for visual formatting, and in some configurations it is not keyboard-accessible. ASP.NET 4 addresses these problems and improves accessibility in the following ways: the generated HTML is structured as an unordered list (ul and li elements); CSS is used for visual formatting; the menu behaves in accordance with ARIA standards for keyboard access, so you can use arrow keys to navigate menu items (for information about ARIA, see Accessibility in Visual Studio and ASP.NET); and ARIA role and property attributes are added to the generated HTML. (Attributes are added by using JavaScript instead of being included in the HTML, to avoid generating HTML that would cause markup validation errors.)
Styles for the Menu control are rendered in a style block at the top of the page, instead of inline with the rendered HTML elements. If you want to use a separate CSS file so that you can modify the menu styles, you can set the Menu control's new IncludeStyleBlock property to false, in which case the style block is not generated.

Valid XHTML for the HtmlForm Control

In ASP.NET 3.5, the HtmlForm control (which is created implicitly by the <form runat="server"> tag) renders an HTML form element that has both name and id attributes. The name attribute is deprecated in XHTML 1.1. Therefore, this control does not render the name attribute in ASP.NET 4.

Maintaining Backward Compatibility in Control Rendering

An existing ASP.NET Web site might have code in it that assumes that controls render HTML the way they do in ASP.NET 3.5. To avoid causing backward compatibility problems when you upgrade the site to ASP.NET 4, you can have ASP.NET continue to generate HTML the way it does in ASP.NET 3.5 after you upgrade the site. To do so, you set the controlRenderingCompatibilityVersion attribute of the pages element to "3.5" in the Web.config file of the ASP.NET 4 Web site, as shown in the following example:

<system.web>
  <pages controlRenderingCompatibilityVersion="3.5"/>
</system.web>

If this setting is omitted, the default value is the same as the version of ASP.NET that the Web site targets. (For information about multi-targeting in ASP.NET, see .NET Framework Multi-Targeting for ASP.NET Web Projects.)

ASP.NET MVC

ASP.NET MVC helps Web developers build compelling standards-based Web sites that are easy to maintain, because it decreases the dependency among application layers by using the Model-View-Controller (MVC) pattern. MVC provides complete control over the page markup. It also improves testability by inherently supporting Test Driven Development (TDD). Web sites created using ASP.NET MVC have a modular architecture, which allows members of a team to work independently on the various modules and can improve collaboration. For example, developers can work on the model and controller layers (data and logic) while a designer works on the view (presentation). For tutorials, walkthroughs, conceptual content, code samples, and a complete API reference, see ASP.NET MVC 2.

Dynamic Data

Dynamic Data was introduced in the .NET Framework 3.5 SP1 release in mid-2008. This feature provides many enhancements for creating data-driven applications, such as a RAD experience for quickly building a data-driven Web site, automatic validation that is based on constraints defined in the data model, and the ability to easily change the markup that is generated for fields in the GridView and DetailsView controls by using field templates that are part of your Dynamic Data project. For ASP.NET 4, Dynamic Data has been enhanced to give developers even more power for quickly building data-driven Web sites. For more information, see ASP.NET Dynamic Data Content Map.

Enabling Dynamic Data for Individual Data-Bound Controls in Existing Web Applications

You can use Dynamic Data features in existing ASP.NET Web applications that do not use scaffolding by enabling Dynamic Data for individual data-bound controls. Dynamic Data provides the presentation and data-layer support for rendering these controls. When you enable Dynamic Data for data-bound controls, you get several benefits.
First, you can set default values for data fields: Dynamic Data enables you to provide default values at run time for fields in a data control. Second, you can interact with the database without creating and registering a data model. Third, the data that users enter is automatically validated without your writing any code. For more information, see Walkthrough: Enabling Dynamic Data in ASP.NET Data-Bound Controls.

New Field Templates for URLs and E-mail Addresses

ASP.NET 4 introduces two new built-in field templates, EmailAddress.ascx and Url.ascx. These templates are used for fields that are marked as EmailAddress or Url using the DataTypeAttribute attribute. For EmailAddress objects, the field is displayed as a hyperlink that is created by using the mailto: protocol. When users click the link, it opens the user's e-mail client and creates a skeleton message. Objects typed as Url are displayed as ordinary hyperlinks. The following example shows how to mark fields:

[DataType(DataType.EmailAddress)]
public object HomeEmail { get; set; }

[DataType(DataType.Url)]
public object Website { get; set; }

Creating Links with the DynamicHyperLink Control

Dynamic Data uses the routing feature that was added in the .NET Framework 3.5 SP1 to control the URLs that users see when they access the Web site. The new DynamicHyperLink control makes it easy to build links to pages in a Dynamic Data site. For more information, see How to: Create Table Action Links in Dynamic Data.

Support for Inheritance in the Data Model

Both the ADO.NET Entity Framework and LINQ to SQL support inheritance in their data models. An example of this might be a database that has an InsurancePolicy table. It might also contain CarPolicy and HousePolicy tables that have the same fields as InsurancePolicy plus additional fields. Dynamic Data has been modified to understand inherited objects in the data model and to support scaffolding for the inherited tables. For more information, see Walkthrough: Mapping Table-per-Hierarchy Inheritance in Dynamic Data.

Support for Many-to-Many Relationships (Entity Framework Only)

The Entity Framework has rich support for many-to-many relationships between tables, which it implements by exposing the relationship as a collection on an Entity object. New field templates (ManyToMany.ascx and ManyToMany_Edit.ascx) have been added to provide support for displaying and editing data that is involved in many-to-many relationships. For more information, see Working with Many-to-Many Data Relationships in Dynamic Data.

New Attributes to Control Display and Support Enumerations

The DisplayAttribute class has been added to give you additional control over how fields are displayed. The DisplayNameAttribute attribute in earlier versions of Dynamic Data enabled you to change the name that is used as a caption for a field. The new DisplayAttribute class lets you specify more options for displaying a field, such as the order in which a field is displayed and whether a field will be used as a filter. The attribute also provides independent control of the name that is used for the labels in a GridView control, the name that is used in a DetailsView control, the help text for the field, and the watermark used for the field (if the field accepts text input). The EnumDataTypeAttribute class has been added to let you map fields to enumerations. When you apply this attribute to a field, you specify an enumeration type. Dynamic Data uses the new Enumeration.ascx field template to create the UI for displaying and editing enumeration values.
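As a rough sketch of how a field might be marked (the OrderStatus enumeration and the metadata class are hypothetical; in a real project the attribute would typically be applied in the partial metadata class for the entity):

using System.ComponentModel.DataAnnotations;

public enum OrderStatus
{
    Pending = 0,
    Shipped = 1,
    Cancelled = 2
}

public class OrderMetadata
{
    // Dynamic Data renders this column with the Enumeration.ascx field
    // template, displaying "Pending"/"Shipped"/"Cancelled" instead of
    // the raw integer stored in the database.
    [EnumDataType(typeof(OrderStatus))]
    public object Status { get; set; }
}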
The Enumeration.ascx template maps the values from the database to the names in the enumeration.

Enhanced Support for Filters

Dynamic Data 1.0 had built-in filters for Boolean columns and foreign-key columns, but the filters did not let you specify the order in which they were displayed. The new DisplayAttribute attribute addresses this by giving you control over whether a column appears as a filter and in what order it is displayed. In addition, filtering support has been rewritten to use the new QueryExtender feature of Web Forms. This lets you create filters without requiring knowledge of the data source control that the filters will be used with. Along with these extensions, filters have also been turned into template controls, which lets you add new ones. Finally, the DisplayAttribute class mentioned earlier allows the default filter to be overridden, in the same way that UIHint allows the default field template for a column to be overridden. For more information, see Walkthrough: Filtering Rows in Tables That Have a Parent-Child Relationship and QueryableFilterRepeater.

ASP.NET Chart Control

The ASP.NET chart server control enables you to create ASP.NET pages that have simple, intuitive charts for complex statistical or financial analysis. The chart control supports the following features: data series, chart areas, axes, legends, labels, titles, and more; data binding; data manipulation, such as copying, splitting, merging, alignment, grouping, sorting, searching, and filtering; statistical and financial formulas; advanced chart appearance, such as 3-D, anti-aliasing, lighting, and perspective; events and customizations; interactivity and Microsoft Ajax; and support for the Ajax Content Delivery Network (CDN), which provides an optimized way for you to add Microsoft Ajax Library and jQuery scripts to your Web applications. For more information, see Chart Web Server Control Overview.

Visual Web Developer Enhancements

The following sections provide information about enhancements and new features in Visual Studio 2010 and Visual Web Developer Express. The Web page designer in Visual Studio 2010 has been enhanced for better CSS compatibility, includes additional support for HTML and ASP.NET markup snippets, and features a redesigned version of IntelliSense for JScript.

Improved CSS Compatibility

The Visual Web Developer designer in Visual Studio 2010 has been updated to improve CSS 2.1 standards compliance. The designer better preserves HTML source code and is more robust than in previous versions of Visual Studio.

HTML and JScript Snippets

In the HTML editor, IntelliSense auto-completes tag names. The IntelliSense Snippets feature auto-completes whole tags and more. In Visual Studio 2010, IntelliSense snippets are supported for JScript alongside C# and Visual Basic, which were supported in earlier versions of Visual Studio. Visual Studio 2010 includes over 200 snippets that help you auto-complete common ASP.NET and HTML tags, including required attributes (such as runat="server") and common attributes specific to a tag (such as ID, DataSourceID, ControlToValidate, and Text). You can download additional snippets, or you can write your own snippets that encapsulate the blocks of markup that you or your team use for common tasks. For more information on HTML snippets, see Walkthrough: Using HTML Snippets.

JScript IntelliSense Enhancements

In Visual Studio 2010, JScript IntelliSense has been redesigned to provide an even richer editing experience.
IntelliSense now recognizes objects that have been dynamically generated by methods such as registerNamespace and by similar techniques used by other JavaScript frameworks. Performance has been improved so that large script libraries are analyzed and IntelliSense is displayed with little or no processing delay. Compatibility has been significantly increased to support almost all third-party libraries and diverse coding styles. Documentation comments are now parsed as you type and are immediately leveraged by IntelliSense.

Web Application Deployment with Visual Studio 2010

For Web application projects, Visual Studio now provides tools that work with the IIS Web Deployment Tool (Web Deploy) to automate many processes that had to be done manually in earlier versions of ASP.NET. For example, the following tasks can now be automated: creating an IIS application on the destination computer and configuring IIS settings; copying files to the destination computer; changing Web.config settings that must be different in the destination environment; and propagating changes to data or data structures in SQL Server databases that are used by the Web application. For more information about Web application deployment, see ASP.NET Deployment Content Map.

Enhancements to ASP.NET Multi-Targeting

ASP.NET 4 adds new features to multi-targeting that make it easier to work with projects that target earlier versions of the .NET Framework. Multi-targeting was introduced in ASP.NET 3.5 to enable you to use the latest version of Visual Studio without having to upgrade existing Web sites or Web services to the latest version of the .NET Framework. In Visual Studio 2008, when you work with a project targeted for an earlier version of the .NET Framework, most features of the development environment adapt to the targeted version; however, IntelliSense displays language features that are available in the current version, and property windows display properties available in the current version. In Visual Studio 2010, only the language features and properties available in the targeted version of the .NET Framework are shown. For more information about multi-targeting, see the following topics: .NET Framework Multi-Targeting for ASP.NET Web Projects; ASP.NET Side-by-Side Execution Overview; How to: Host Web Applications That Use Different Versions of the .NET Framework on the Same Server; How to: Deploy Web Site Projects Targeted for Earlier Versions of the .NET Framework.
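As a point of reference, the framework version that a Web project targets is recorded on the compilation element in Web.config. A minimal sketch (the targetFramework attribute is new in ASP.NET 4; projects that target earlier versions omit it):

<system.web>
  <compilation targetFramework="4.0" />
</system.web>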

    Read the article

  • West Wind WebSurge - an easy way to Load Test Web Applications

    - by Rick Strahl
A few months ago on a project the subject of load testing came up. We were having some serious issues with a Web application that would start spewing SQL lock errors under somewhat heavy load. These sorts of errors can be tough to catch, precisely because they only occur under load and not during typical development testing. To replicate this error more reliably we needed to put a load on the application and run it for a while before these SQL errors would flare up. It's been a while since I'd looked at load testing tools, so I spent a bit of time looking at different tools and frankly didn't really find anything that was a good fit. A lot of tools were either a pain to use, didn't have the basic features I needed, or were extravagantly expensive. In the end I got frustrated enough to build an initially small custom load test solution that then morphed into a more generic library, then gained a console front end and eventually turned into a full-blown Web load testing tool that is now called West Wind WebSurge. I had gotten seriously frustrated looking for tools every time I needed some quick and dirty load testing for an application. If my aim is just to put an application under heavy enough load to find a scalability problem in code, or to simply push an application to its limits on the hardware it's running on, I shouldn't have to struggle to set up tests. It should be easy enough to get going in a few minutes, so that testing can be done on a regular basis without a lot of hassle. And that was the goal when I started to build out my initial custom load tester into a more widely usable tool. If you're in a hurry and you want to check it out, you can find more information and download links here: West Wind WebSurge Product Page | Walk through Video | Download link (zip) | Install from Chocolatey | Source on GitHub. For a more detailed discussion of the whys and hows and some background, continue reading.

How did I get here?

When I started out on this path, I wasn't planning on building a tool like this myself – but I got frustrated enough looking at what's out there to think that I can do better than what's available for the most common simple load testing scenarios. When we ran into the SQL lock problems I mentioned, I started looking around at what's available in Web load testing solutions that would work for our whole team, which consisted of a few developers and a couple of IT guys, all of whom needed to be able to run the tests. It had been a while since I had looked at tools, and I figured that by now there should be some good solutions out there, but as it turns out I didn't really find anything that fit our relatively simple needs without costing an arm and a leg… I spent the better part of a day installing and trying various load testing tools, and to be frank most of them were either terrible at what they do, incredibly unfriendly to use, used some terminology I couldn't even parse, or were extremely expensive (and I mean in the 'sell your liver' range of expensive). Pick your poison. There are also a number of online solutions for load testing, and they actually looked more promising, but those wouldn't work well for our scenario, as the application runs inside of a private VPN with no outside access into the VPN. Most of those online solutions also ended up being very pricey – presumably because the bandwidth required to test over the open Web can be enormous.
When I asked around on Twitter about what people were using, I got mostly… crickets. Several people mentioned Visual Studio Load Test, and most other suggestions pointed to online solutions. I did get a bunch of responses, though, from people asking me to let them know what I found – apparently I'm not alone when it comes to finding load testing tools that are effective and easy to use. As to Visual Studio: the higher-end SKUs of Visual Studio and the test edition include a Web load testing tool, which is quite powerful, but there are a number of issues with it. First, it's tied to Visual Studio, so it's not very portable – you need a VS install. I also find the test setup and terminology used by the VS test runner extremely confusing. Heck, it's complicated enough that there's even a Pluralsight course from Steve Smith on using the Visual Studio Web test. And of course you need to have one of the high-end Visual Studio SKUs, and those are mucho dinero ($$$) – just for the load testing, that's rarely an option. Some of the tools are ultra extensive and let you run analysis tools on the target servers, which is useful, but in most cases just plain overkill that only distracts from what I tend to be ultimately interested in: reproducing problems that occur at high load, and finding the upper limits and 'what if' scenarios as load is ramped up increasingly against a site. Yes, it's useful to have Web app instrumentation, but often that's not what you're interested in. I still fondly remember the early days of Web testing when Microsoft had the WAST (Web Application Stress Tool), which was rather simple – and also somewhat limited – but easily allowed you to create stress tests very quickly. It had some serious limitations (mainly that it didn't work with SSL), but the idea behind it was excellent: create tests quickly and easily, and provide a decent engine to run them locally with minimal setup. You could get set up and run tests within a few minutes. Unfortunately, that tool died a quiet death, as do so many of Microsoft's tools that probably were built by an intern and then abandoned, even though there was a lot of potential and it was actually fairly widely used. Eventually the tool was no longer downloadable, and now it simply doesn't work anymore on higher-end hardware.

West Wind WebSurge – Making Load Testing Quick and Easy

So I ended up creating West Wind WebSurge out of rebellious frustration… The goal of WebSurge is to make it drop-dead simple to create load tests. It's super easy to capture sessions, either using the built-in capture tool (big props to Eric Lawrence, Telerik, and FiddlerCore, which made that piece a snap), using the full version of Fiddler and exporting sessions, or by manually or programmatically creating text files based on plain HTTP headers. I've been using this tool for four months now on a regular basis on various projects as a reality check for performance and scalability, and it's worked extremely well for finding small performance issues. I also use it regularly as a simple URL tester, as it allows me to quickly enter a URL plus headers and content, test that URL and its results, and easily save one or more of those URLs. A few weeks back I made a walkthrough video that goes over most of the features of WebSurge in some detail. Note that the UI has changed slightly since then, so there are some UI improvements you won't see in the video.
Most notably, the test results screen has recently been updated to a different layout that provides more information about each URL in a session at a glance. The video and the main WebSurge site have a lot of info on basic operations, so for the rest of this post I'll talk about a few deeper aspects that may be of interest, while also giving a glance at how WebSurge works.

Session Capturing

As you would expect, WebSurge works with sessions of URLs that are played back under load. Here's what the main Session view looks like: you can create session entries manually by individually adding URLs to test (on the Request tab on the right) and saving them, or you can capture output from Web browsers, Windows desktop applications that call services, or your own applications, using the built-in capture tool. With this tool you can capture anything HTTP: SSL requests, content from Web pages, AJAX calls, SOAP or REST services – anything that uses the Windows or .NET HTTP APIs. Behind the scenes the capture tool uses FiddlerCore, so basically anything you can capture with Fiddler you can also capture with the WebSurge session capture tool. Alternately, you can use Fiddler itself and export the captured Fiddler trace to a file, which can then be imported into WebSurge. This is a nice way to let somebody capture a session without having to install WebSurge, or to let your customers provide an exact playback scenario for a set of URLs that cause a problem. Note that not all applications work with Fiddler's proxy unless you configure a proxy explicitly. For example, .NET Web applications that make HTTP calls usually don't show up in Fiddler by default; for those applications you can explicitly override proxy settings to capture their requests to service calls. The capture tool also has handy optional filters that allow you to filter by domain, to help block out noise that you typically don't want to include in your requests. For example, if your pages include links to CDNs, or Google Analytics, or social links, you typically don't want to include those in your load test, so by capturing just from a specific domain you are guaranteed content from only that one domain. Additionally, you can provide URL filters in the configuration file – filters let you provide strings that, if contained in a URL, cause the request to be ignored. Again, this is useful if you don't filter by domain but want to filter out things like static image, CSS, and script files. Often you're not interested in the load characteristics of these static, usually cached resources, as they just add noise to tests and often skew the overall URL performance results. In my testing I tend to care only about my dynamic requests.

SSL Captures require Fiddler

Note that in order to capture SSL requests you'll have to install Fiddler's SSL certificate. The easiest way to do this is to install Fiddler and use its SSL configuration options to get the certificate into the local certificate store. There's a document on the Telerik site that provides the exact steps to get SSL captures to work with Fiddler, and therefore with WebSurge.

Session Storage

A group of URLs entered or captured makes up a session. Sessions can be saved and restored easily, as they use a very simple text format that is simply stored on disk. The format is slightly customized HTTP header traces separated by a separator line.
The headers are standard HTTP headers, except that the full URL instead of just the domain-relative path is stored as part of the first HTTP header line, for easier parsing. Because it's just text and uses the same format that Fiddler uses for exports, it's super easy to create sessions by hand or under program control by writing out a simple text file. You can see what this format looks like in the capture window figure above – the raw captured format is also what's stored to disk and what WebSurge parses from. The only 'custom' part of these headers is that the first line contains the full URL instead of the domain-relative path and Host: header; the rest of each entry is just plain standard HTTP headers, with each individual URL isolated by a separator line. Because the format matches what Fiddler produces for exports, it's easy to exchange or view data in either Fiddler or WebSurge. URLs can also be edited interactively, so you can modify the headers easily as well. Again – it's just plain HTTP headers, so anything you can do with HTTP can be added here.

Use it for single URL Testing

Incidentally, I've also found this form to be an excellent way to test and replay individual URLs for simple non-load-testing purposes. Because you can capture a single URL or many URLs and store them on disk, this also provides a nice HTTP playground where you can record URLs with their headers, fire them one at a time or as a session, and see results immediately. It's an easy setup for REST presentations, and I find the simple UI flow easier than using Fiddler natively. Finally, you can save one or more URLs as a session for later retrieval. I'm using this more and more for simple URL checks.

Overriding Cookies and Domains

Speaking of HTTP headers – you can also overwrite cookies as part of the options. One thing that happens with modern Web applications is that you have session cookies in use for authorization. These cookies tend to expire at some point, which would invalidate a test. Using the Options dialog you can override the cookie, which replaces the cookie for all requests with the value specified there. You can capture a valid cookie from a manual HTTP request in your browser and then paste it into the cookie field to replace the existing cookie with the new one that is now valid. Likewise, you can easily replace the domain, so if you captured URLs on west-wind.com and now you want to test on localhost, you can do that easily as well. You could even do something like capture on store.west-wind.com and then test on localhost/store, which would also work.

Running Load Tests

Once you've created a session, you can specify the length of the test in seconds and the number of simultaneous threads to run each session on. Sessions run through each of the URLs in the session sequentially by default. One of the options in the list above lets you randomize the URLs, so each thread runs requests in a different order. This avoids bunching up URLs when tests start, when all threads would otherwise run the same requests simultaneously, which can sometimes skew the results of the first few minutes of a test. While sessions run, some progress information is displayed: by default there's a live view of requests displayed in a console-like window, and at the bottom of the window there's a running total summary that displays where you're at in the test, how many requests have been processed, and what the current requests-per-second count is for all requests.
Note that for tests that run over a thousand requests a second, it's a good idea to turn off the console display. While the console display is nice to see that something is happening and gives you a slight idea of what's happening with actual requests, once a lot of requests are processed this UI updating adds a lot of CPU overhead to the application, which may reduce the actual load generated. If you are running 1,000 requests a second there's not much to see anyway, as requests roll by way too fast to read individual lines. If you look on the options panel, there is a NoProgressEvents option that disables the console display. Note that the summary display is still updated approximately once a second, so you can always tell that the test is still running.

Test Results

When the test is done you get a simple results display. On the right you get an overall summary as well as a breakdown by each URL in the session. Both successes and failures are highlighted, so it's easy to see what's breaking in your load test. The report can be printed, or you can open the HTML document in your default Web browser for printing to PDF or saving the HTML document to disk. The list on the right shows you a partial list of the URLs that were fired, so you can look in detail at the request and response data. The list can be filtered by success and failure requests. Each list is partial only (at the moment) and limited to a maximum of 1,000 items in order to render reasonably quickly. Each item in the list can be clicked to see the full request and response data. This is particularly useful for errors, so you can quickly see and copy what request data was used, and in the case of a GET request you can also just click the link to quickly jump to the page. For non-GET requests you can find the URL in the session list and use the context menu to test the URL as configured, including any HTTP content data to send. You get to see the full HTTP request and response, as well as a link in the request header to go visit the actual page – not so useful for a POST as above, but definitely useful for GET requests. Finally, you can also get a few charts. The most useful one is probably the Requests per Second chart, which can be accessed from the Charts menu or shortcut. Results can also be exported to JSON, XML, and HTML. Keep in mind that these files can get very large rather quickly, so exports can end up taking a while to complete.

Command Line Interface

WebSurge runs with a small core load engine, and this engine is plugged into the front-end application I've shown so far. There's also a command line interface available to run WebSurge from the Windows command prompt. Using the command line you can run tests for either an individual URL (similar to ab.exe, for example) or a full session file. By default, WebSurgeCli shows progress every second, displaying the total request count, failures, and the requests per second for the entire test. A silent option can turn off this progress display and show only the results. The command line interface can be useful for build integration, which allows checking for failures or for hitting a specific requests-per-second count. It's also nice to use as a quick and dirty URL test facility, similar to the way you'd use Apache Bench (ab.exe). Unlike ab.exe though, WebSurgeCli supports SSL and makes it much easier to create multi-URL tests, using either manual editing or the WebSurge UI.
Current Status

Currently West Wind WebSurge is still in beta status. I'm still adding small new features and tweaking the UI in an attempt to make it as easy and self-explanatory as possible to run. Documentation for the UI and specialty features is also still a work in progress. I plan on open-sourcing this product, but it won't be free. There's a free version available that provides a limited number of threads and request URLs to run, and a relatively low-cost license removes the thread and request limitations. Pricing info can be found on the Web site – there's an introductory price, which is $99 at the moment, which I think is reasonable compared to most other for-pay solutions out there that are exorbitant by comparison… The reason the code is not available yet is – well, the UI portion of the app is a bit embarrassing in its current monolithic state. The UI started as a very simple interface that later got a lot more complex – yeah, that never happens, right? Unless there's a lot of interest I don't foresee rewriting the UI entirely (which would be ideal), but in the meantime at least some cleanup is required before I dare to publish it :-). The code will likely be released with version 1.0. I'm very interested in feedback. Do you think this could be useful to you and provide value over other tools you may or may not have used before? I hope so – it has already provided a ton of value for me and the work I do, which made the development worthwhile at this point. You can leave a comment below, or for more extensive discussions you can post a message on the West Wind Message Board in the WebSurge section.

Microsoft MVPs and Insiders get a free License

If you're a Microsoft MVP or a Microsoft Insider you can get a full license for free. Send me a link to your current, official Microsoft profile and I'll send you a not-for-resale license. Send any messages to [email protected].

Resources

For more info on WebSurge and to download it to try it out, use the following links: West Wind WebSurge Home | Download West Wind WebSurge | Getting Started with West Wind WebSurge Video

© Rick Strahl, West Wind Technologies, 2005-2014. Posted in ASP.NET

    Read the article

  • SQL SERVER – T-SQL Errors and Reactions – Demo – SQL in Sixty Seconds #005 – Video

    - by pinaldave
    We got a tremendous response to the Error and Reaction video in SQL in Sixty Seconds #002. We all have an idea of how SQL Server reacts when it encounters a T-SQL error, and today Rick explains the same in quick seconds. After watching it I felt confident talking about SQL Server's reaction to errors. We received many requests for a follow-up to the earlier video, and many asked for a T-SQL demo of the concept. In today's SQL in Sixty Seconds, Rick Morelan presents a T-SQL demo of the very visually rich concept of SQL Server errors and reactions. More on Errors: Explanation of TRY…CATCH and ERROR Handling | Create New Log file without Server Restart | Tips from the SQL Joes 2 Pros Development Series – SQL Server Error Messages. I encourage you to submit your ideas for SQL in Sixty Seconds; we will try to accommodate as many as we can. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, SQLServer, T SQL, Video
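    If you want to try the idea yourself, here is a small hedged sketch — not the demo from the video — that triggers a T-SQL error inside TRY…CATCH and reads SQL Server's reaction from a C# client. The connection string is a placeholder; any reachable SQL Server instance will do.

    using System;
    using System.Data.SqlClient;

    class ErrorReactionDemo
    {
        static void Main()
        {
            // Placeholder connection string -- point it at your own instance.
            var connStr = "Server=.;Database=tempdb;Integrated Security=true";

            // The divide-by-zero aborts the statement; TRY...CATCH intercepts it
            // and returns the error details as a normal result set.
            var sql = @"
    BEGIN TRY
        DECLARE @x int = 1 / 0;  -- raises error 8134 (divide by zero)
    END TRY
    BEGIN CATCH
        SELECT ERROR_NUMBER()   AS ErrorNumber,
               ERROR_SEVERITY() AS ErrorSeverity,
               ERROR_MESSAGE()  AS ErrorMessage;
    END CATCH";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine($"Error {reader["ErrorNumber"]} " +
                                          $"(severity {reader["ErrorSeverity"]}): " +
                                          $"{reader["ErrorMessage"]}");
                }
            }
        }
    }

    Without the TRY…CATCH wrapper, the same error would instead surface on the client as a SqlException — which is the kind of reaction difference the video demonstrates.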

    Read the article

  • Understanding and Controlling Parallel Query Processing in SQL Server

    Data warehousing and general reporting applications tend to be CPU intensive because they need to read and process a large number of rows. To facilitate quick data processing for queries that touch a large amount of data, Microsoft SQL Server exploits the power of multiple logical processors to provide parallel query processing operations such as parallel scans. Through extensive testing, we have learned that, for most large queries that are executed in a parallel fashion, SQL Server can deliver linear or nearly linear response time speedup as the number of logical processors increases. However, some queries in high parallelism scenarios perform suboptimally. There are also some parallelism issues that can occur in a multi-user parallel query workload. This white paper describes parallel performance problems you might encounter when you run such queries and workloads, and it explains why these issues occur. In addition, it describes how data warehouse developers can detect these issues, and how to work around or mitigate them.
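    One common way to control parallelism for a single statement is the MAXDOP query hint, which caps how many logical processors the query may use. Here is a hedged C# sketch of issuing such a query — the connection string and table are placeholder assumptions, not part of the paper.

    using System;
    using System.Data.SqlClient;

    class MaxDopExample
    {
        static void Main()
        {
            // Placeholder connection string and table -- adjust for your environment.
            var connStr = "Server=.;Database=Sales;Integrated Security=true";

            // OPTION (MAXDOP 4) limits this query to four logical processors,
            // overriding the server-wide 'max degree of parallelism' setting.
            var sql = @"
    SELECT   CustomerId, SUM(Amount) AS Total
    FROM     dbo.Orders
    GROUP BY CustomerId
    OPTION   (MAXDOP 4);";

            using (var conn = new SqlConnection(connStr))
            using (var cmd = new SqlCommand(sql, conn))
            {
                conn.Open();
                using (var reader = cmd.ExecuteReader())
                {
                    while (reader.Read())
                        Console.WriteLine($"{reader["CustomerId"]}: {reader["Total"]}");
                }
            }
        }
    }

    Re-running the same query with OPTION (MAXDOP 1) forces a serial plan, which is a quick way to test whether a slow query's problem is parallelism-related.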

    Read the article

  • Using Microsoft Office 2007 with E-Business Suite Release 12

    - by Steven Chan
    Many products in the Oracle E-Business Suite offer optional integrations with Microsoft Office and Microsoft Project. For example, some EBS products can export tabular reports to Microsoft Excel. Some EBS products integrate directly with Microsoft products, and others work through the Applications Desktop Integrator (WebADI and ADI) as an intermediary. These EBS integrations have historically been documented in their respective product-specific documentation. In other words, if an EBS product in the Oracle Financials family supported an integration with, say, Microsoft Excel, it was up to the product team to document that in the Oracle Financials documentation. Some EBS systems administrators have found the process of hunting through the various product-specific documents for Office-related information to be a bit difficult. In response to your Service Requests and emails, we've released a new document that consolidates and summarises all patching and configuration requirements for EBS products with MS Office integration points in a single place: Using Microsoft Office 2007 with Oracle E-Business Suite 11i and R12 (Note 1072807.1)

    Read the article
