Search Results

Search found 31586 results on 1264 pages for 'custom result'.


  • BizTalk – Routing failure on Delivery Notifications (BizTalk 2006 R2 to 2013)

    - by S.E.R.
    Originally posted on: http://geekswithblogs.net/SERivas/archive/2013/11/11/biztalk-routing-failure-on-delivery-notifications.aspx

    This is a detailed explanation of something I posted a few months ago on Stack Overflow, concerning a weird behavior (a bug, really…) of delivery notifications in BizTalk.

    Reminder: what are delivery notifications?

    Mechanism: BizTalk has the ability to automatically publish positive acknowledgments (ACKs) when it has succeeded in transmitting a message, or negative acknowledgments (NACKs) in case of a transmission failure. Orchestrations can use delivery notifications to subscribe to those ACKs and NACKs in order to know whether a message sent on a one-way send port has been successfully transmitted. Delivery notifications can be "activated" in two ways: the most common and easiest way is to set the Delivery Notification property of a logical send port (in the orchestration designer) to Transmitted; another way is to set the BTS.AckRequired context property of the message to be sent to true. NOTE: fundamentally, those methods are strictly equivalent, since setting Delivery Notification to Transmitted on the send port only tells BizTalk that the BTS.AckRequired context property has to be set to true on the outgoing message.

    Related context properties: ACKs and NACKs have a common set of promoted context properties:

    - AckType: equals ACK when successful, NACK otherwise
    - AckID: MessageID of the message concerned by the acknowledgment
    - AckOwnerID: InstanceID of the instance associated with the acknowledgment
    - AckSendPortID: ID of the send port
    - AckSendPortName: name of the send port
    - AckOutboundTransportLocation: URI of the send port
    - AckReceivePortID: ID of the port the message came from
    - AckReceivePortName: name of the port the message came from
    - AckInboundTransportLocation: URI of the port the message came from

    Detailed behavior: the way delivery notifications are handled by BizTalk is peculiar compared to the standard behavior of the Message Box: if no active subscription exists for an acknowledgment, it is simply discarded. The direct consequence is that there can be no routing failure for an acknowledgment, and an acknowledgment cannot be suspended. Moreover, when a message is sent to a send port where Delivery Notification = Transmitted, a correlation set is initialized and a correlation token is attached to the message (context property: CorrelationToken). This correlation token will also be attached to the acknowledgment, so when the acknowledgment is issued, it is automatically routed to the source orchestration. Finally, when a NACK is received by the source orchestration, a DeliveryFailureException is thrown, which can be caught in a Catch block.

    Context of the problem: consider this scenario. In an orchestration, delivery notifications are activated on a one-way send port. In case of a transmission failure, the messaging instance is suspended and the orchestration catches an exception (DeliveryFailureException). When the exception is caught, the orchestration does some logging and then terminates (thanks to a Terminate shape). That leaves only the suspended messaging instance, waiting to be resumed.

    Symptoms: once the problem that caused the transmission failure is solved, the messaging instance is resumed. Considering what was said in the reminder, we would expect the instance to complete, leaving no active or suspended instance. Nevertheless, the result is that the messaging instance is once more suspended, this time because of a routing failure, and the routing failure report shows the context properties attached to the suspended message.

    Explanation: those properties clearly indicate that the message being suspended is an acknowledgment (an ACK in this case), which was published in the Message Box and was suspended because no subscribers were found. This makes sense: the source orchestration was terminated before we resumed the messaging instance, so its subscription to the acknowledgments was no longer active when the ACK was published, which explains the routing failure. But this behavior is in direct contradiction with what was said earlier: an acknowledgment must be discarded when no subscriber is found, and therefore should never be suspended.

    Cause: it is indeed an outright bug, which appeared with SP1 of BizTalk 2006 R2 and has never been corrected since: not in the next 4 CUs, not in BizTalk 2009, not in 2010, and not even in 2013. I haven't tested CU1 and CU2 for this last edition, but I bet there is nothing to be expected from those CUs (on this particular point).

    Side effects: this bug can have pretty nasty side effects, because the behavior can be propagated to other ports due to routing mechanisms. For instance: you have configured the ESB Toolkit and have activated "Enable routing failure for failed messages". The result will be that the ESB Exception SQL send port will also try to publish ACKs or NACKs concerning its own messaging instances. In itself this is already messy, but remember that those acknowledgments will also have the source correlation token attached to them… see how far it goes? Well, actually there is more: on SQL send ports, transactions will be rolled back because of the routing failure (I guess it also happens with other adapters, like Oracle, but I haven't tested them). Again, think of what happens when the send port is the ESB Exception send port: your BizTalk box is going mad, but you have no idea, since no exception can be written to the exception database! All of this can be tricky to diagnose, I can tell you that…

    Solution: there is no real solution, only a work-around, and it won't solve all of the problems and side effects. The idea is to create an orchestration which subscribes to all acknowledgments, that is to say:

    - the message type of the incoming message is XmlDocument;
    - the subscription only requires that the BTS.AckType property exists;
    - the logical receive port uses direct binding.

    By doing so, all acknowledgments will be consumed by an instance of this orchestration, thus avoiding the routing failure; a sketch of the subscription filter follows below. In order not to pollute the HAT and the DTA database (after all, this orchestration is only meant to be a palliative for a faulty internal BizTalk mechanism, so there should be no trace of its execution), all tracking must be deactivated.
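    To make the work-around concrete: the activation condition on the orchestration's direct-bound receive shape boils down to a single property filter (a sketch only, to be adapted to your environment):

        BTS.AckType exists

    Since the incoming message is typed as XmlDocument and the filter only requires that BTS.AckType exists, this subscription matches every ACK and NACK published to the Message Box, so none of them can ever fail routing.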

    Read the article

  • The Chemistry of Fireworks [Video]

    - by Jason Fitzpatrick
    Fireworks are the dazzling and loud end result of a complex chemical process. Watch this video to see the chemistry behind a fireworks display explained by none other than the father of modern pyrotechnics, John Conkling. Courtesy of Bytesize Science: From the sizzle of the fuse to the boom and burst of colors, this video brings you all of the exciting sights and sounds of Fourth of July fireworks, plus a little chemical know-how. The video features John A. Conkling, Ph.D., who literally wrote the book on fireworks: he is the author of The Chemistry of Pyrotechnics: Basic Principles and Theory. Conkling shows how the familiar rockets and other neat products that light up the night sky all represent chemistry in action. [via Geeks Are Sexy]

    Read the article

  • Understanding #DAX Query Plans for #powerpivot and #tabular

    - by Marco Russo (SQLBI)
    Alberto Ferrari wrote a very interesting white paper about DAX query plans. We published it on a page where we'll gather articles and tools about DAX query plans: http://www.sqlbi.com/topics/query-plans/ I reviewed the paper, which is the result of many months of study; we know we have just scratched the surface of this topic, not least because we still don't have enough information about the internal behavior of many of the operators contained in a query plan. However, by reading the paper you will be able to start reading a query plan, and you will understand how the optimization Chris Webb found a month ago for the events-in-progress scenario works. The white paper also contains a more optimized query (10 times faster), even if the performance depends on data distribution and the best choice really depends on the data you have. Now you should be curious enough to read the paper until the end, because the more optimized query is the last example in the paper!

    Read the article

  • Help to understand the abstract factory pattern

    - by Chobeat
    I'm learning the 23 design patterns of the GoF. I think I've found a way to understand and simplify how the Abstract Factory works, but I would like to know if this is a correct assumption or if I am wrong. What I want to know is whether we can see the result of the Abstract Factory pattern as a matrix of possible products, where there is a Product for every "Concrete Factory" x "Abstract Product" pair; a Concrete Factory is a single implementation among the implementations of an AbstractFactory, and an AbstractProduct is an interface among the interfaces used to create Products. Is this correct, or am I missing something?
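    For what it's worth, the matrix reading can be made concrete in a few lines of Java (all names below are invented purely for illustration): two concrete factories times two abstract products yields four concrete products, one per cell of the matrix.

        // Columns of the matrix: the abstract products.
        interface Button   { void paint(); }
        interface Checkbox { void paint(); }

        // The abstract factory declares one creation method per column.
        interface WidgetFactory {
            Button createButton();
            Checkbox createCheckbox();
        }

        // Row 1: WinFactory x {Button, Checkbox} = two concrete products.
        class WinFactory implements WidgetFactory {
            public Button createButton()     { return () -> System.out.println("Win button"); }
            public Checkbox createCheckbox() { return () -> System.out.println("Win checkbox"); }
        }

        // Row 2: MacFactory x {Button, Checkbox} = two more concrete products.
        class MacFactory implements WidgetFactory {
            public Button createButton()     { return () -> System.out.println("Mac button"); }
            public Checkbox createCheckbox() { return () -> System.out.println("Mac checkbox"); }
        }

        public class Demo {
            public static void main(String[] args) {
                // 2 factories x 2 abstract products = 4 cells in the matrix.
                for (WidgetFactory f : new WidgetFactory[] { new WinFactory(), new MacFactory() }) {
                    f.createButton().paint();
                    f.createCheckbox().paint();
                }
            }
        }

    Each concrete product sits at the intersection of one concrete factory and one abstract product, which is exactly the matrix described in the question.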

    Read the article

  • What is the typical example of old school website design?

    - by Pierre 303
    I want to build a website for a retro thing that was popular in the mid 90s (the beginning of the commercial internet). So I want to use old designs that were very popular at that time. The first thing that comes to mind is those "under construction" animated GIFs. People often put animated GIFs everywhere. But also those awful repeating backgrounds. So yes, I want my website to look exactly like it's the mid-nineties ;) (please suggest practical and usable features; I guess a Java applet menu would not work today, nor would saying at the bottom that this website is optimized for Netscape 3) EDIT: for those who want to see the result: Retrology

    Read the article

  • Implementing a bit shift using AND, NOT, ADD [closed]

    - by fdart17
    I'm implementing a 16-bit left bit shift by r bits, and I only have access to AND, NOT and ADD. There are 3 condition codes (negative, zero and positive) which are set when you use any of these operations. How I went about it:

    (1) AND the number with 1000 0000 0000 0000 to set the condition codes to positive if the most significant bit is 1.
    (2) Add the number to itself. This shifts the bits one to the left.
    (3) If the MSB was 1, add 1 to the result.
    (4) Loop through (1)-(3) r times.

    I'm wondering if anyone has hints toward more efficient methods? Thanks!
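    Sketching those steps in Java (with the 16-bit value kept in the low bits of an int) makes the behavior easy to check. Note that steps (1) and (3) re-insert the bit that falls off the top, which makes this a rotate rather than a plain shift; if a plain shift is all that is needed, the loop body reduces to the single add, which is also the main efficiency win.

        // A sketch of steps (1)-(4) above, assuming a 16-bit value in an int.
        static int shiftLeft16(int x, int r) {
            for (int i = 0; i < r; i++) {      // (4) repeat r times
                int msb = x & 0x8000;          // (1) test the most significant bit
                x = (x + x) & 0xFFFF;          // (2) x + x shifts left by one; AND keeps 16 bits
                if (msb != 0) {
                    x = x + 1;                 // (3) carry the old MSB back in (rotate only)
                }
            }
            return x;
        }

    On a real 16-bit machine the AND with 0xFFFF is unnecessary, since the carry out of the add is dropped automatically.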

    Read the article

  • how does server communication work in a flash game with a php backend

    - by Tim Rogers
    I am trying to create a browser game using ActionScript/Flash. Currently, I'm trying to understand how I would go about creating a back-end that interfaces with my MySQL database. As far as I understand, if I create a PHP file on a webserver called test.php and then navigate to a webpage hosted on the server, e.g. www.example.com/test, the PHP script will run and display the result in my browser. This would use HTTP. Is this how communication between client and server usually works in a flash game? For example, if the game needed to query the db, would ActionScript have to essentially invoke the URL of the PHP script that would execute the query? It could then parse the data and use it. If this is the case, then is JSON considered a good way to transfer data over HTTP?
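    To make the round trip concrete, here is the same request/response flow sketched in Java (the endpoint and the JSON shape are invented for illustration); the client language does not change the HTTP semantics, so an ActionScript client would do the equivalent with URLLoader.

        import java.io.IOException;
        import java.net.URI;
        import java.net.http.HttpClient;
        import java.net.http.HttpRequest;
        import java.net.http.HttpResponse;

        public class GameClient {
            public static void main(String[] args) throws IOException, InterruptedException {
                // Hypothetical endpoint: a PHP script that runs a MySQL query
                // and echoes the rows as JSON.
                HttpClient client = HttpClient.newHttpClient();
                HttpRequest request = HttpRequest.newBuilder()
                        .uri(URI.create("http://www.example.com/test.php?player=42"))
                        .GET()
                        .build();
                HttpResponse<String> response =
                        client.send(request, HttpResponse.BodyHandlers.ofString());
                // The body is whatever the PHP script printed, e.g. {"gold":120,"level":7};
                // parse it with any JSON library and use the values in the game.
                System.out.println(response.body());
            }
        }

    And yes, JSON is a common and sensible choice for this: it is compact, human-readable, and trivially produced by PHP's json_encode.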

    Read the article

  • How can a QtScript script access and change the value of a C++ global variable? [closed]

    - by dawntrees
    How can a script running in Qt's QScriptEngine access a C++ global variable and change its value? For example:

        #include <QScriptEngine>
        #include <QtDebug>

        int gVar = 0;

        int main(int argc, char *argv[])
        {
            QScriptEngine engine;
            QScriptValue varValue = engine.newVariant(gVar);
            engine.globalObject().setProperty("gVar", varValue);
            QScriptValue result = engine.evaluate("gVar = 100;");
            qDebug() << "gVar================" << gVar;
            return 0;
        }

    Why does gVar stay 0 instead of becoming 100? How can we make gVar equal to 100 (gVar = 100)? Any help is appreciated, thanks!

    Read the article

  • jqGrid - dynamically load different drop down values for different rows depending on another column value

    - by Renso
    Goal: As we all know, the jqGrid examples in the demo and the Wiki always refer to static values for drop-down boxes. This is of course a personal preference, but in a dynamic design these values should be populated from the database/XML file, etc., ideally JSON formatted. Can you do this in jqGrid? Yes, but with some custom coding, which we will briefly show below (refer to some of my other blog entries for a more detailed discussion on this topic). What you CANNOT do in jqGrid (referring here to versions up to 3.8.x) is load different drop-down values for different rows in the grid. Well, not without some trickery, which is what this discussion is about.

    Issue: Of course the issue is that jqGrid has been designed for high performance, and thus I have no issue with it loading a reference to a single drop-down values list for every column. This way, whether you have 500 rows or one, each row only refers to a single list for that particular column. Nice! So how easy would it be to simply traverse the grid once loaded, on gridComplete or loadComplete, and load the select tag's options from scratch, via ajax, from a memory variable, hard-coded, etc.? Impossible! Since there is no embedded SELECT tag within each cell containing the drop-down values (remember, it only has a reference to that list in memory), all you will see when you inspect the cell prior to clicking on it, or even before and on beforeEditCell, is an empty <TD></TD>. Trying to load that list via a click event on the cell will temporarily load the list, but jqGrid's last internal callback event will remove it and replace it with the old one, and you are back to square one.

    Solution: Yes, after spending a few hours on this I found a solution that does not require any updates to the jqGrid source code, thank GOD! Before we get into the coding details: the solution here can of course be customized to suit your specific needs. This one loads the entire drop-down list that would be needed across all rows once, into a global variable. I then parse this object, which contains all the properties I need, to filter the rows the user should see based on another cell value in that row. This only happens when clicking the cell, so there is no performance penalty. You may of course load the list via ajax when the user clicks the cell, but I found it more efficient to load the entire list as part of jqGrid's normal editoptions: { multiple: false, value: listingStatus } colModel options, which again keeps only a reference to the single list, no duplication. Let's get into the meat and potatoes of it:

        var acctId = $('#Id').val();
        var data = $.ajax({
            url: $('#ajaxGetAllMaterialsTrackingLookupDataUrl').val(),
            data: { accountId: acctId },
            dataType: 'json',
            async: false,
            success: function(data, result) {
                if (!result) alert('Failure to retrieve the Alert related lookup data.');
            }
        }).responseText;
        var lookupData = eval('(' + data + ')');
        var listingCategory = lookupData.ListingCategory;
        var listingStatus = lookupData.ListingStatus;

        var catList = '{';
        $(lookupData.ListingCategory).each(function() {
            catList += this.Id + ':"' + this.Name + '",';
        });
        catList += '}';

        var lastsel;
        var ignoreAlert = true;
        $(item).jqGrid({
            url: listURL,
            postData: '',
            datatype: "local",
            colNames: ['Id', 'Name', 'Commission<br />Rep', 'Business<br />Group', 'Order<br />Date', 'Edit', 'TBD', 'Month', 'Year', 'Week', 'Product', 'Product<br />Type', 'Online/<br />Magazine', 'Materials', 'Special<br />Placement', 'Logo', 'Image', 'Text', 'Contact<br />Info', 'Everthing<br />In', 'Category', 'Status'],
            colModel: [
                { name: 'Id', index: 'Id', hidden: true, hidedlg: true },
                { name: 'AccountName', index: 'AccountName', align: "left", resizable: true, search: true, width: 100 },
                { name: 'OnlineName', index: 'OnlineName', align: 'left', sortable: false, width: 80 },
                { name: 'ListingCategoryName', index: 'ListingCategoryName', width: 85, editable: true, hidden: false, edittype: "select", editoptions: { multiple: false, value: eval('(' + catList + ')') }, editrules: { required: false }, formatoptions: { disabled: false } }
            ],
            jsonReader: {
                root: "List",
                page: "CurrentPage",
                total: "TotalPages",
                records: "TotalRecords",
                userdata: "Errors",
                repeatitems: false,
                id: "0"
            },
            rowNum: $rows,
            rowList: [10, 20, 50, 200, 500, 1000, 2000],
            imgpath: jQueryImageRoot,
            pager: $(item + 'Pager'),
            shrinkToFit: true,
            width: 1455,
            recordtext: 'Traffic lines',
            sortname: 'OrderDate',
            viewrecords: true,
            sortorder: "asc",
            altRows: true,
            cellEdit: true,
            cellsubmit: "remote",
            cellurl: editURL + '?rows=' + $rows + '&page=1',
            loadComplete: function() {
            },
            gridComplete: function() {
            },
            loadError: function(xhr, st, err) {
            },
            afterEditCell: function(rowid, cellname, value, iRow, iCol) {
                var select = $(item).find('td.edit-cell select');
                $(item).find('td.edit-cell select option').each(function() {
                    var option = $(this);
                    var optionId = $(this).val();
                    $(lookupData.ListingCategory).each(function() {
                        if (this.Id == optionId) {
                            if (this.OnlineName != $(item).getCell(rowid, 'OnlineName')) {
                                option.remove();
                                return false;
                            }
                        }
                    });
                });
            },
            search: true,
            searchdata: {},
            caption: "List of all Traffic lines",
            editurl: editURL + '?rows=' + $rows + '&page=1',
            hiddengrid: hideGrid
        });

    Here is the JSON data returned via the ajax call during the jqGrid function call above (NOTE: it must be { async: false }):

        {"ListingCategory":[
            {"Id":29,"Name":"Document Imaging & Management","OnlineName":"RF Globalnet"},
            {"Id":1,"Name":"Ancillary Department Hardware","OnlineName":"Healthcare Technology Online"},
            {"Id":2,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":3,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":4,"Name":"Asset Tracking","OnlineName":"Healthcare Technology Online"},
            {"Id":5,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
            {"Id":6,"Name":"Document Imaging & Management","OnlineName":"Healthcare Technology Online"},
            {"Id":7,"Name":"EMR/EHR Software","OnlineName":"Healthcare Technology Online"}
        ]}

    I only need the Id and Name for the drop-down list, but the third column in the JSON object is important: it is the one I match up with the OnlineName in the jqGrid column, so that in the loop during afterEditCell I can simply remove the options I don't want the user to see. That's it!

    Read the article

  • Notebook overheating

    - by user71372
    I'm asking this question because I've tried many tips to solve it and they don't work; it sounds like an unfixed Ubuntu bug. My problem is overheating. I've recently installed Ubuntu Precise 12.04 LTS alongside MS Windows 7 on my notebook, a Samsung 530U. I'm using both via dual-boot. I have no heating problem with MS Win 7, and the fan speed is normal even under long, heavy utilization. However, when booting Ubuntu, after a short time the PC gets very hot and the fan runs at max speed. I installed a tool called Jupiter and put it in "Power Saving" mode, but no result. Now I avoid using Ubuntu because I fear it will damage my brand-new notebook. Please can you give me a "FINAL" fix for this problem (lots of answers exist, but I don't know which is the most accurate and efficient one)? Thank you in advance.

    Read the article

  • Google Cache showing wrong URL

    - by Sathiya Kumar
    I searched for the cache details of the URL http://property.sulekha.com/pune-properties, but Google Cache shows details for property.sulekha.com. I don't know why it's showing like this. Not only for http://property.sulekha.com/pune-properties but also for all the Indian-city-related URLs like http://property.sulekha.com/chennai-properties , http://property.sulekha.com/mumbai-properties , http://property.sulekha.com/kolkata-properties etc. I don't even find these URLs in the Google search results. If I search "Chennai properties" in Google, I find property.sulekha.com and not http://property.sulekha.com/chennai-properties . Why is it happening like this? Please let me know.

    Read the article

  • Deleting ASP.NET application subdirectories causes application recycle!

    - by geekrutherford
    This may not be news to most people, but it was definitely a shock to me! In the .NET 2.x framework a "feature" was implemented whereby an ASP.NET application is automatically recycled if any subdirectory is deleted. This was apparently implemented to prevent stale content from appearing on a site. The unfortunate side effect of this "feature" is that when using the "InProc" model for session management, all session data is lost when a subdirectory is deleted. For those who programmatically add and delete directories within their application as inherent functionality, this causes a rather large problem. The solution? Create the folder(s) which may be programmatically deleted outside of the root folder of the application. Alternatively, utilize a file-based structure instead of folders, since deleting files does not result in the same issue.

    Read the article

  • Mixing Forms and Token Authentication in a single ASP.NET Application (the Details)

    - by Your DisplayName here!
    The scenario described in my last post works because of the design around HTTP modules in ASP.NET. Authentication-related modules (like Forms authentication and WIF WS-Fed/Sessions) typically subscribe to three events in the pipeline – AuthenticateRequest/PostAuthenticateRequest for pre-processing and EndRequest for post-processing (like making redirects to a login page). In the pre-processing stage it is the modules' job to determine the identity of the client based on incoming HTTP details (like a header, cookie, or form post) and set HttpContext.User and Thread.CurrentPrincipal. The actual page (in the ExecuteHandler event) "sees" the identity that the last module has set. So in our case there are three modules in effect:

    - FormsAuthenticationModule (AuthenticateRequest, EndRequest)
    - WSFederationAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest, EndRequest)
    - SessionAuthenticationModule (AuthenticateRequest, PostAuthenticateRequest)

    So let's have a look at the different scenarios we have when mixing Forms auth and WS-Federation.

    Anonymous request to an unprotected resource: This is the easiest case. Since there is no WIF session cookie or FormsAuth cookie, these modules do nothing. The WSFed module creates an anonymous ClaimsPrincipal and calls the registered ClaimsAuthenticationManager (if any) to transform it. The result (by default an anonymous ClaimsPrincipal) gets set.

    Anonymous request to a FormsAuth-protected resource: This is the scenario where an anonymous user tries to access a FormsAuth-protected resource for the first time. The principal is anonymous and, before the page gets rendered, the Authorize attribute kicks in. The attribute determines that the user needs authentication and therefore sets a 401 status code and ends the request. Now execution jumps to the EndRequest event, where the FormsAuth module takes over. The module converts the 401 to a redirect (302) to the forms login page. If authentication is successful, the login page sets the FormsAuth cookie.

    FormsAuth-authenticated request to a FormsAuth-protected resource: Now a FormsAuth cookie is present, which gets validated by the FormsAuth module. This cookie gets turned into a GenericPrincipal/FormsIdentity combination. The WS-Fed module turns the principal into a ClaimsPrincipal and calls the registered ClaimsAuthenticationManager. The outcome of that gets set on the context.

    Anonymous request to an STS-protected resource: This time the anonymous user tries to access an STS-protected resource (a controller decorated with the RequireTokenAuthentication attribute). The attribute determines that the user needs STS authentication by checking the authentication type on the current principal. If this is not Federation, the redirect to the STS will be made. After successful authentication at the STS, the STS posts the token back to the application (using WS-Federation syntax).

    Postback from STS authentication: After the postback, the WS-Fed module finds the token response and validates the contained token. If successful, the token gets transformed by the ClaimsAuthenticationManager, and the outcome is a) stored in a session cookie, and b) set on the context.

    STS-authenticated request to an STS-protected resource: This time the WIF session authentication module kicks in because it can find the previously issued session cookie. The module re-hydrates the ClaimsPrincipal from the cookie and sets it.

    FormsAuth- and STS-authenticated request to a protected resource: This is kind of an odd case – e.g. the user first authenticated using Forms and after that using the STS. This time the FormsAuth module does its work, and then afterwards the session module stomps over the context with the session principal. In other words, the STS identity wins.

    What about roles? A common way to set roles in ASP.NET is to use the role manager feature. There is a corresponding HTTP module for that (RoleManagerModule) that handles PostAuthenticateRequest. Does this collide with the above combinations? No it doesn't! When the WS-Fed module turns existing principals into a ClaimsPrincipal (like it did with the FormsIdentity), it also checks for RolePrincipal (which is the principal type created by role manager), and turns the roles into role claims. Nice! But as you can see in the last scenario above, this might result in unnecessary work, so I would rather recommend consolidating all role work (and other claims transformations) into the ClaimsAuthenticationManager. In there you can check the authentication type of the incoming principal and act accordingly. HTH

    Read the article

  • How to Make a 9 Layer Density Column [Video]

    - by Jason Fitzpatrick
    Density columns, layers of liquids of varying density stacked in a glass cylinder, are nothing new in the world of science demonstrations, but this nine-layer one with seven floating objects is something to see. Courtesy of Steve Spangler Science, the experiment goes above and beyond the traditional five-layer column by adding another four layers and sinking objects of varying density into the column. The end result is a colorful demonstration of the varying densities of liquids and solids. [via Boing Boing]

    Read the article

  • Could someone break this nasty habit of mine please?

    - by MimiEAM
    I recently graduated in CS and was mostly unsatisfied, since I realized that I received only a basic theoretical approach in a wide range of subjects (which is what college is supposed to do, but still...). Anyway, I took up the habit of spending a lot of time looking for implementations of concepts, and upon finding them I would use them as guides to writing my own implementations of those concepts, just for fun. But now I feel like the only way I can fully understand a new concept is by trying to implement it from scratch, no matter how unoptimized the result may be. This behavior leads me to choose the hard, time-consuming way by default instead of using a nicely written library, until I hit my head against a huge wall and then try to find a library that works for my purpose.... Does anyone else do that, and why? It seems so weird. Why would anyone (including me) do that? Is it a bad practice? And if so, how can I stop doing it?

    Read the article

  • Problem installing from Ubuntu 12.04.1 LTS 32bit cd

    - by John Smith
    Older laptop currently running XP, with only 128 MB of RAM. Is 128 just too small? But there are 20+ GB free on the hard drive, and it's been defragmented. When I try to install Ubuntu from a CD, I get the screen that says ubuntu with the four red dots, and then it eventually goes blank and I just hear hard drive noises. It stays this way indefinitely (I shut it off after half a day). I burned another CD, at a slow writing speed too, and the download is from Ubuntu, and I get the same result. Any help much appreciated!

    Read the article

  • New blog post shows immediately in Google search results whereas other HTML content takes time, why?

    - by Jayapal Chandran
    I have a blog which has been active for 3 years. Recently I posted an article and it appeared in Google search immediately, maybe within 5 to 10 minutes. A point to note is that I was logged into my Google account. Maybe Google checked my posts when I searched since I was logged in? Yet I logged out, used another browser, and searched again with that specific text, and it appeared in the Google search results. How did this happen? However, if I publish an article as static HTML, it takes time. (I assume this is the case but I haven't tested much; I have tested a few cases after updating it in my sitemap XML.) How does Google search work differently for a blog versus other content?

    Read the article

  • Advice: How to overcome the "accent" barrier in cross-geographical teams?

    - by shan23
    I'm an Indian working in an MNC. As a result, I often have to attend (and contribute to) meetings where I have to listen to people with a pronounced American accent. Some are still understandable, but a couple of people I interact with speak such a different form of English that I mostly have to guess at what they are saying. When I ask them to clarify, they often repeat the same sentence at the same tenor/speed, so my net gain is zero. My question is: how do I politely put it across that, due to their accent, I can't understand a thing, and ask that they please speak slowly and a bit more clearly? Some people might take it a bit personally, since "everyone else" understands them perfectly... and I don't want to cause offense at all. Any ideas?

    Read the article

  • Null Values And The T-SQL IN Operator

    - by Jesse
    I came across some unexpected behavior while troubleshooting a failing test the other day that took me long enough to figure out that I thought it was worth sharing here. I finally traced the failing test back to a SELECT statement in a stored procedure that was using the IN T-SQL operator to exclude a certain set of values. Here's a very simple example table to illustrate the issue:

        Customers
            CustomerId     INT            NOT NULL, Primary Key
            CustomerName   NVARCHAR(100)  NOT NULL
            SalesRegionId  INT            NULL

    The 'SalesRegionId' column contains a number representing the sales region that the customer belongs to. This column is nullable because new customers get created all the time, but assigning them to sales regions is a process that is handled by a regional manager on a periodic basis. For the purposes of this example, the Customers table currently has the following rows:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        2           Customer B    NULL
        3           Customer C    4
        4           Customer D    2
        5           Customer E    3

    How could we write a query against this table for all customers that are NOT in sales regions 2 or 4? You might try something like this:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE SalesRegionId NOT IN (2,4)

    Will this work? In short, no; at least not in the way that you might expect. Here's what this query will return given the example data we're working with:

        CustomerId  CustomerName  SalesRegionId
        1           Customer A    1
        5           Customer E    3

    I was expecting that this query would also return 'Customer B', since that customer has a NULL SalesRegionId. In my mind, having a customer with no sales region should be included in a set of customers that are not in sales regions 2 or 4. When I first started troubleshooting my issue I made note of the fact that this query should probably be re-written without the NOT IN clause, but I didn't suspect that the NOT IN clause was actually the source of the issue. This particular query was only one minor piece in a much larger process that was being exercised via an automated integration test, and I simply made a poor assumption that the NOT IN would work the way that I thought it should. So why doesn't it? From the MSDN documentation on the T-SQL IN operator:

        If the value of test_expression is equal to any value returned by subquery or is equal to any expression from the comma-separated list, the result value is TRUE; otherwise, the result value is FALSE. Using NOT IN negates the subquery value or expression.

    The key phrase out of that quote is "… is equal to any expression from the comma-separated list …". The NULL SalesRegionId isn't included in the NOT IN because of how NULL values are handled in equality comparisons. From the MSDN documentation on ANSI_NULLS:

        The SQL-92 standard requires that an equals (=) or not equal to (<>) comparison against a null value evaluates to FALSE. When SET ANSI_NULLS is ON, a SELECT statement using WHERE column_name = NULL returns zero rows even if there are null values in column_name. A SELECT statement using WHERE column_name <> NULL returns zero rows even if there are nonnull values in column_name.

    In fact, the MSDN documentation on the IN operator includes the following blurb about using NULL values in sub-queries or expressions that are used with the IN operator:

        Any null values returned by subquery or expression that are compared to test_expression using IN or NOT IN return UNKNOWN. Using null values in together with IN or NOT IN can produce unexpected results.

    If I were to include a 'SET ANSI_NULLS OFF' command right above my SELECT statement I would get 'Customer B' returned in the results, but that's definitely not the right way to deal with this. We could re-write the query to explicitly include the NULL value in the WHERE clause:

        SELECT
            CustomerId,
            CustomerName,
            SalesRegionId
        FROM Customers
        WHERE (SalesRegionId NOT IN (2,4) OR SalesRegionId IS NULL)

    This query works and properly includes 'Customer B' in the results, but I ultimately opted to re-write the query using a LEFT OUTER JOIN against a table variable containing all of the values that I wanted to exclude because, in my case, there could potentially be several hundred values to be excluded. If we were to apply the same refactoring to our simple sales region example we'd end up with:

        DECLARE @regionsToIgnore TABLE (IgnoredRegionId INT)
        INSERT @regionsToIgnore VALUES (2),(4)

        SELECT
            c.CustomerId,
            c.CustomerName,
            c.SalesRegionId
        FROM Customers c
        LEFT OUTER JOIN @regionsToIgnore r ON r.IgnoredRegionId = c.SalesRegionId
        WHERE r.IgnoredRegionId IS NULL

    By performing a LEFT OUTER JOIN from Customers to the @regionsToIgnore table variable, we can simply exclude any rows where the IgnoredRegionId is null, as those represent customers that DO NOT appear in the ignored regions list. This approach will likely perform better if the number of sales regions to ignore gets very large, and it also correctly includes any customers that do not yet have a sales region.

    Read the article

  • (int) Math.floor(x / TILESIZE) or just (int) (x / TILESIZE)

    - by Aidan Mueller
    I have an array that stores my map data, and my tiles are 64x64. Sometimes I need to convert from pixels to units of tiles. So I was doing:

        int x;
        int y;

        public void myFunction() {
            getTile((int) Math.floor(x / 64), (int) Math.floor(y / 64)).doOperation();
        }

    But I discovered (I'm using Java, BTW) by using System.out.println((int) (1 / 1.5)) that converting to an int automatically rounds down. This means that I can replace the (int) Math.floor with just x / 64. But if I run this on a different OS, do you think it might give a different result? I'm just afraid there might be some case where this would round up and not down. Should I keep doing it the way I was, and maybe make a function like convert(int i) to make it easier? Or is it OK to just do x / 64?
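    For reference, the rounding of integer arithmetic in Java is fixed by the language specification, not by the OS, so the result cannot vary between platforms. Two details are worth noting, though: with int operands, x / 64 is already integer division before Math.floor ever runs (so the floor call above is a no-op), and integer division truncates toward zero rather than rounding down, which differs for negative values. A quick sketch:

        public class TileDemo {
            static final int TILESIZE = 64;

            public static void main(String[] args) {
                // For non-negative pixel coordinates, truncation and floor agree:
                System.out.println(130 / TILESIZE);                // 2
                System.out.println(Math.floorDiv(130, TILESIZE));  // 2

                // For negative coordinates they differ: / truncates toward zero,
                // while floor rounds toward negative infinity.
                System.out.println(-130 / TILESIZE);               // -2 (truncated)
                System.out.println(Math.floorDiv(-130, TILESIZE)); // -3 (floored)
            }
        }

    So x / 64 is fine as long as x can never be negative; if the map can produce negative pixel coordinates, Math.floorDiv (available since Java 8) gives the floored behavior on every platform.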

    Read the article

  • audio controls in xfce 4.8

    - by Peter
    I am seeing several questions similar to mine, but none of the answers are sufficient. I am pretty green with Ubuntu, so here goes: I was just automatically upgraded to Xfce 4.8 for Ubuntu Studio. The volume control no longer works in my panel. When I launch the mixer, I don't see any settings either. When I try to run "linux audio configuration" I get an error: "JACK can only be configured with a loaded and stopped studio. Please create a new studio or load and stop an existing one." I understand that I can change the volume using the command line, but I can't understand why I got upgraded to something that fails on basic features. I am much less likely to recommend Ubuntu to others as a result. Thanks!

    Read the article

  • Proper way to use a RenderTarget2D to draw multiple textures?

    - by TheBroodian
    In the process of trying to resolve a split-screen issue, I've been trying to use a RenderTarget2D to draw a portion of my scene to a Texture2D, and then again to another Texture2D, but the end result of both Texture2Ds is coming out the same. Can anybody tell me what I'm doing wrong?

        Texture2D camera1Render;
        Texture2D camera2Render;

        GraphicsDevice.SetRenderTarget(RenderTarget);
        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera1, new Location(0, 0), false);
        camera1Render = RenderTarget;

        GraphicsDevice.Clear(Color.Transparent);
        map.Draw(mapDisplayDevice, Camera2, new Location(0, 0), false);
        camera2Render = RenderTarget;

        SetRenderTarget(null);

    Read the article

  • CPU Usage in Very Large Coherence Clusters

    - by jpurdy
    When sizing Coherence installations, one of the complicating factors is that these installations (by their very nature) tend to be application-specific, with some being large, memory-intensive caches, with others acting as I/O-intensive transaction-processing platforms, and still others performing CPU-intensive calculations across the data grid. Regardless of the primary resource requirements, Coherence sizing calculations are inherently empirical, in that there are so many permutations that a simple spreadsheet approach to sizing is rarely optimal (though it can provide a good starting estimate). So we typically recommend measuring actual resource usage (primarily CPU cycles, network bandwidth and memory) at a given load, and then extrapolating from those measurements. Of course there may be multiple types of load, and these may have varying degrees of correlation -- for example, an increased request rate may drive up the number of objects "pinned" in memory at any point, but the increase may be less than linear if those objects are naturally shared by concurrent requests. But for most reasonably-designed applications, a linear resource model will be reasonably accurate for most levels of scale.

    However, at extreme scale, sizing becomes a bit more complicated, as certain cluster management operations -- while very infrequent -- become increasingly critical. This is because certain operations do not naturally tend to scale out. In a small cluster, sizing is primarily driven by the request rate, required cache size, or other application-driven metrics. In larger clusters (e.g. those with hundreds of cluster members), certain infrastructure tasks become intensive, in particular those related to members joining and leaving the cluster, such as introducing new cluster members to the rest of the cluster, or publishing the location of partitions during rebalancing. These tasks have a strong tendency to require all updates to be routed via a single member for the sake of cluster stability and data integrity. Fortunately that member is dynamically assigned in Coherence, so it is not a single point of failure, but it may still become a single point of bottleneck (until the cluster finishes its reconfiguration, at which point this member will have a similar load to the rest of the members).

    The most common cause of scaling issues in large clusters is disabling multicast (by configuring well-known addresses, aka WKA). This obviously impacts network usage, but it also has a large impact on CPU usage, primarily since the senior member must directly communicate certain messages with every other cluster member, and this communication requires significant CPU time. In particular, the need to notify the rest of the cluster about membership changes and corresponding partition reassignments adds stress to the senior member. Given that portions of the network stack may tend to be single-threaded (both in Coherence and the underlying OS), this may be even more problematic on servers with poor single-threaded performance.

    As a result of this, some extremely large clusters may be configured with a smaller number of partitions than ideal. This results in the size of each partition being increased. When a cache server fails, the other servers will use their fractional backups to recover the state of that server (and take over responsibility for their backed-up portion of that state). The finest granularity of this recovery is a single partition, and the single service thread can not accept new requests during this recovery. Ordinarily, recovery is practically instantaneous (it is roughly equivalent to the time required to iterate over a set of backup backing map entries and move them to the primary backing map in the same JVM). But certain factors can increase this duration drastically (to several seconds): large partitions, sufficiently slow single-threaded CPU performance, many or expensive indexes to rebuild, etc. The solution of course is to mitigate each of those factors, but in many cases this may be challenging.

    Larger clusters also lead to the temptation to place more load on the available hardware resources, spreading CPU resources thin. As an example, while we've long been aware of how garbage collection can cause significant pauses, it usually isn't viewed as a major consumer of CPU (in terms of overall system throughput). Typically, the use of a concurrent collector allows greater responsiveness by minimizing pause times, at the cost of reducing system throughput. However, at a recent engagement, we were forced to turn off the concurrent collector and use a traditional parallel "stop the world" collector to reduce CPU usage to an acceptable level.

    In summary, there are some less obvious factors that may result in excessive CPU consumption in a larger cluster, so it is even more critical to test at full scale, even though allocating sufficient hardware may often be much more difficult for these large clusters.

    Read the article

  • Suggested ways of collecting 1000s of links to mainstream media articles

    - by Matt
    I'm currently running a modified WordPress site that is uniquely designed to simply publish links to other sites, similar to The Drudge Report. Right now I have a few dozen Google Alerts set up and go through each result manually; if it matches a few niche keywords I'm working with, I add a link to the article to my site. I do the manual checking because sometimes Google Alerts finds links to sites that belong to service providers, organizations, or products, but all I want are mainstream news articles. So my question is: is there a more efficient, and ideally automated, way to go about performing such highly qualitative searches and aggregating the resulting links?

    Read the article

  • Hardware from Oracle, Pricing for Education (HOPE) Program: New version now available!

    - by Cinzia Mascanzoni
    With HOPE Version 5, Oracle offers education institutions even more unmatched savings on its award-winning systems products, making it more affordable for educational institutions to create scalable, high-performing, and low-TCO teaching and learning environments. With special discounts for you on selected Sun products from Oracle, the net result is that you can assist your resellers in reducing the impact on their customers' budgets in two ways:

    • Lower the total cost of technology acquisition of systems and hardware for the end user
    • Reduce the environmental impact of the educational institutions served by your resellers, by running and maintaining a lower-cost, more efficient infrastructure

    Start today to take advantage of the new release of this exciting program from Oracle. Check the EMEA VAD Resource Center for a description of the products and discounts offered to you, and to find links to more detailed information about these Sun products.

    Read the article
