Search Results

Search found 11403 results on 457 pages for 'logic named joe'.


  • Building a Distributed Commerce Infrastructure in the Cloud using Azure and Commerce Server

    - by Lewis Benge
    One of the biggest questions I routinely get asked is how scalable Commerce Server is. Of course, the textbook answer is that the product has been around for 10 years and powers some of the largest e-commerce websites in the world, so it scales horizontally extremely well. But what if you can't predict the growth in demand on your commerce platform, or need the ability to scale up during busy seasons such as Christmas in a retail environment, but are hesitant to maintain that infrastructure on a year-round basis? The obvious answer is to utilise the many elastic cloud infrastructure providers that are establishing themselves in the ever-growing market; the problem, however, is that Commerce Server is still a product with legacy, tightly coupled dependencies on Windows and IIS components. Commerce Server 2009 codename "R2", however, introduced the concept of an n-tier deployment of Microsoft Commerce Server, meaning you are no longer tied to the core objects API but instead have serializable Commerce Entity objects and business logic, allowing Commerce Server to be built into a WCF-based SOA architecture. Presentation layers no longer need to remain on the same physical machine as the application server, meaning you can build the user experience in multiple technologies and host it in multiple places – leveraging the transport benefits that a WCF service may bring, such as message queuing, security, and multiple endpoints. All of this logic will still need to remain in your internal infrastructure, for two reasons. Firstly, cloud-based computing infrastructure does not support PCI security requirements, and secondly, even though many of the legacy Commerce Server dependencies have been abstracted away in this version of the application, it is still not fully supported to deploy it exclusively into the cloud. If you do wish to benefit from the scalability of the cloud, however, you can still achieve a great Commerce Server and Azure setup by utilising the Azure AppFabric service bus and authentication services, and Windows Azure to host any online presence you may require. The architecture would be something similar to this (diagram in the original article). This setup would allow you to construct your Commerce Services as part of your on-site infrastructure. These services would contain all of the channel's custom business logic and provide the overall interface back into the underlying Commerce Server components. It is recommended that services are constructed around the specific business domains of the application, which, based on your business model, would usually mean separate services for Catalogue, Orders, Search, Profiles, and Marketing. The AppFabric service bus is then used to further abstract and aggregate the services, making them available to the cloud and securing them with AppFabric's authentication services. These services are now available for consumption by any client, using any supported technology – not just .NET. This means you can construct apps for iPhone, integrate with Java-based POS devices, and much more. This aggregation is useful, and forms the basis of the further strategy around diversifying and enhancing the e-commerce experience, but it also provides the foundation for the scalability we want to gain from utilising a cloud-based application platform.
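    As a rough illustration of the service layer described above, a per-domain WCF contract might look something like the following minimal sketch (the Product type and member names here are illustrative, not part of Commerce Server's actual API):

        using System.Collections.Generic;
        using System.Runtime.Serialization;
        using System.ServiceModel;

        // One contract per business domain (Catalogue, Orders, Search, ...),
        // returning serializable entities rather than core Commerce Server objects.
        [ServiceContract]
        public interface ICatalogueService
        {
            [OperationContract]
            Product GetProduct(string productId);

            [OperationContract]
            IList<Product> Search(string query);
        }

        [DataContract]
        public class Product
        {
            [DataMember] public string Id { get; set; }
            [DataMember] public string DisplayName { get; set; }
            [DataMember] public decimal ListPrice { get; set; }
        }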
    The Windows Azure application platform is Microsoft's solution for benefiting from the true economies of scale offered by the elasticity of the cloud. Just before the launch of the Azure platform, Domino's Pizza actually managed to run its whole Super Bowl operation on the scalability of Windows Azure, then simply switched back to its traditional operation the next day with no residual infrastructure costs. The platform can also natively subscribe to services and messages exposed within the AppFabric service bus, making it an ideal solution for building and deploying a presentation layer that needs to support scalable infrastructure – such as a high-demand public-facing e-commerce portal, or the promotional element of a brand. Windows Azure has excellent support for ASP.NET, including its own caching providers, meaning expensive operations such as catalogue queries can persist in memory on the application server, reducing the demand on internal infrastructure and prioritising it for more business-critical operations such as receiving orders and processing payments. Windows Azure also supports other languages, meaning with this approach you could technically build a Commerce Server presentation layer in Java, PHP, or Ruby – or equally in ASP.NET or Silverlight – without having to change any of the underlying business or Commerce Server implementation. This SOA-style architecture is one of the primary differentiators for Commerce Server as a product in the e-commerce market, and now with the introduction of a WCF capability in Commerce Server 2009/2009 R2 the opportunities for extensibility of both the user experience and integration with third parties are drastically increased, all with no effect on the underlying channel logic. So if you are looking at deployment options for your e-commerce application to help support demand in a cost-effective way, I would highly recommend you consider looking at Windows Azure, and if you have any questions in particular about this style of deployment, please feel free to get in touch!

    Read the article

  • jQuery .each() with multiple selectors - skip, then .empty() elements

    - by joe
    I'd like to remove all matching elements, but skip the first instance of each match:

        // Works as expected: removes all but the first instance of .a
        jQuery('.a', '#scope').each(function (i) {
            if (i > 0) jQuery(this).empty();
        });

        // Expected: removal of all but the first instance of .a and .b
        // Result: removal of *all* instances of both .a and .b
        jQuery('.a, .b', '#scope').each(function (i) {
            if (i > 1) jQuery(this).empty();
        });

        <div id="scope">
            <!-- Want to keep the first instance of .a and .b -->
            <div class="a">[bla]</div>
            <div class="b">[bla]</div>
            <!-- Want to remove all the others -->
            <div class="a">[bla]</div>
            <div class="b">[bla]</div>
            <div class="a">[bla]</div>
            <div class="b">[bla]</div>
            ...
        </div>

    Any suggestions? Notes: using jQuery() rather than $() because of a conflict with 'legacy' code; using .empty() because .a contains JS I'd like to disable; stuck with jQuery 1.2.3. Thank you!
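    One possible workaround (an untested sketch against the 1.2.3-era API): run .each() once per selector so the index restarts for each class, rather than once over the combined selection:

        // Iterate the selectors individually; i restarts at 0 for each class,
        // so "keep the first, empty the rest" works per selector.
        jQuery.each(['.a', '.b'], function (s, selector) {
            jQuery(selector, '#scope').each(function (i) {
                if (i > 0) jQuery(this).empty();
            });
        });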

    Read the article

  • Modal MessageBox for ASP.NET without jQuery

    - by Joe
    I have an assembly with custom ASP.NET server controls that is used in several, mostly in-house, ASP.NET 2.0 applications. The server controls use simple modal popup messageboxes, which are currently implemented using the JavaScript alert and confirm functions. I want to release a new version of this assembly that uses a better solution for messageboxes, including support for Yes/No buttons. The appearance would be something like a simplified version of the Ajax Control Toolkit ModalPopup extender (sample here). My constraints are that this should be as easy as possible to integrate into existing ASP.NET 2.0 applications without introducing new dependencies: ideally, just drop in a new version of the assembly, which contains all the JavaScript it needs as an embedded resource, and possibly a CSS file. Because of this constraint, I am not considering solutions I've seen that use jQuery, or the ASP.NET Ajax Control Toolkit, which appears to require adding elements to pages that use the extenders (ScriptManagers and the like). Any recommendations?
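    For reference, the embedded-resource approach described here is supported out of the box in ASP.NET 2.0 via WebResourceAttribute. A minimal sketch (resource name and control are illustrative):

        // In AssemblyInfo.cs (or any source file): expose the embedded script.
        [assembly: System.Web.UI.WebResource("MyControls.Scripts.MessageBox.js", "text/javascript")]

        // In the server control: emit a <script src=...> tag pointing at the resource.
        public class MessageBoxControl : System.Web.UI.WebControls.WebControl
        {
            protected override void OnPreRender(System.EventArgs e)
            {
                base.OnPreRender(e);
                Page.ClientScript.RegisterClientScriptResource(
                    typeof(MessageBoxControl), "MyControls.Scripts.MessageBox.js");
            }
        }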

    Read the article

  • Problem with openssl_get_privatekey returning false

    - by Joe Corkery
    I am trying to generate a signed URL for Amazon's CloudFront service but am running into problems in that the openssl_get_privatekey function appears to be returning false, and I can't quite figure out why. Here is the code (PHP) that I am using:

        $priv_key = file_get_contents(path_to_my_pem_file);
        $priv_keyid = openssl_get_privatekey($privkey);

    Unfortunately, every time I try this, openssl_get_privatekey fails silently and I run into errors when I try to sign with openssl_sign later on. I've tried printing out the contents of $priv_key after it has been read in, and it appears to be correct. I'm running this on RHEL 5.4 using PHP 5.2.13. I've confirmed that the pem file is readable and I've also run dos2unix on it just in case (didn't work before or after). Any thoughts would be greatly appreciated as I am relatively new to both PHP and openssl.
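    Two things worth ruling out, sketched below: the snippet above reads into $priv_key but passes $privkey (note the missing underscore), and openssl_error_string() will surface whatever OpenSSL is actually complaining about instead of failing silently (the path is illustrative):

        <?php
        $priv_key = file_get_contents('/path/to/key.pem'); // illustrative path
        $priv_keyid = openssl_get_privatekey($priv_key);   // same variable this time
        if ($priv_keyid === false) {
            // Drain the OpenSSL error queue to see the real cause.
            while ($msg = openssl_error_string()) {
                echo $msg, "\n";
            }
        }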

    Read the article

  • How to select the top n from a union of two queries where the resulting order needs to be ranked by individual query?

    - by Jedidja
    Let's say I have a table with usernames:

        Id  | Name
        -----------------
        1   | Bobby
        20  | Bob
        90  | Bob
        100 | Joe-Bob
        630 | Bobberino
        820 | Bob Junior

    I want to return a list of n matches on name for 'Bob' where the resulting set first contains exact matches followed by similar matches. I thought something like this might work:

        SELECT TOP 4 a.*
        FROM (
            SELECT * FROM Usernames WHERE Name = 'Bob'
            UNION
            SELECT * FROM Usernames WHERE Name LIKE '%Bob%'
        ) AS a

    but there are two problems:

    1. It's an inefficient query since the sub-select could return many rows (looking at the execution plan shows a join happening before the top)
    2. (Almost) more importantly, the exact match(es) will not appear first in the results, since the resulting set appears to be ordered by primary key.

    I am looking for a query that will return (for TOP 4):

        Id | Name
        ---------
        20 | Bob
        90 | Bob

    (and then 2 results from the LIKE query, e.g. 1 Bobby and 100 Joe-Bob) Is this possible in a single query?
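    One way to get there in a single query (a sketch against the table above): do the LIKE filter once and let ORDER BY rank exact matches first, avoiding the UNION entirely:

        SELECT TOP 4 Id, Name
        FROM Usernames
        WHERE Name LIKE '%Bob%'
        ORDER BY CASE WHEN Name = 'Bob' THEN 0 ELSE 1 END, Id;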

    Read the article

  • PHP Server did not recognize the value of HTTP Header SOAPAction

    - by Joe
    I am making my first SOAP client and I am stuck with the headers. I am getting a response, but when I look at my request it has a soap:Body but no soap:Header. The web service needs 3 parameters: 1. UserName, 2. Password, 3. errorMessage. This is the code I have set up:

        $SOAPAction = 'http://localhost/DriveAwayPriceCalculation/PriceCalculation'; // Namespace of the WS.
        // $SoapHeaders = array('User123' => $UserName, 'Password123' => $Password, '' => $errorMessage);
        $client = new nusoap_client("https://test.com/CalculationWS.asmx?WSDL", false, $UserName, $Password, $errorMessage);
        $headers = new SoapHeader('http://localhost/DriveAwayPriceCalculation/PriceCalculation', true, $SoapHeaders);

    As I said, I am just starting out in SOAP (and PHP), so if you could help, it would be great. Thanks
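    For comparison, here is how the header might be attached with PHP 5's built-in SOAP extension rather than NuSOAP. This is a sketch only: 'AuthHeader' and the field names are guesses; the WSDL defines the real header element name and namespace:

        <?php
        $client = new SoapClient('https://test.com/CalculationWS.asmx?WSDL');
        $header = new SoapHeader(
            'http://localhost/DriveAwayPriceCalculation/PriceCalculation', // header namespace
            'AuthHeader',                                                  // header element name (guess)
            array('UserName' => $UserName, 'Password' => $Password, 'errorMessage' => $errorMessage)
        );
        $client->__setSoapHeaders(array($header));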

    Read the article

  • ActiveReports Conditional Formatting - Picture Visibility

    - by Joe
    In ActiveReports, how can I change formatting based on values in the report data? Specifically, I want to show or hide pictures based on a value in the data. The report gets bound to a list of objects assigned to its DataSource property. These objects have a Condition property with values "Poor", "Normal", etc. I have some pictures in the report that correspond to the different conditions, and I want to hide all the pictures except for the one corresponding to the value. Should I subscribe to the Format event for the detail section? If so, how do I get to the "current record" data?
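    A sketch of the Format-event approach (C#; the control and field names are assumptions): in ActiveReports, the detail section's Format event fires once per record, and the report's Fields collection reflects the current record at that point:

        private void detail_Format(object sender, System.EventArgs e)
        {
            // Fields["Condition"] holds the current record's value while Format runs.
            string condition = Fields["Condition"].Value as string;
            picPoor.Visible   = (condition == "Poor");
            picNormal.Visible = (condition == "Normal");
        }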

    Read the article

  • T-4 Templates for ASP.NET Web Form Databound Control Friendly Logical Layers

    - by Mohammad Ashraful Alam
    I just released an open source project at CodePlex, which includes a set of T-4 templates that will enable you to build an ASP.NET Web Form data-bound-control-friendly, testable logical layer based on Entity Framework 4.0 with just a few clicks! In this open source project you will get Entity Framework 4.0 based T-4 templates for the following types of logical layers: Data Access Layer: Entity Framework 4.0 provides an excellent ORM data access layer. It also includes support for T-4 templates as a built-in code generation strategy in Visual Studio 2010, where we can customize the default structure of a data access layer based on Entity Framework. The default structure of the data access layer has been enhanced to support mock testing of the Entity Framework 4.0 object model. Business Logic Layer: an ASP.NET Web Form data-bound-control-friendly business logic layer, which will enable you, in a few clicks, to build data-bound web applications on top of ASP.NET Web Forms and Entity Framework 4.0 quickly, with great support for mock testing. Download it to make your web development productive. Enjoy!

    Read the article

  • Unit testing is… well, flawed.

    - by Dewald Galjaard
    Hey, someone had to say it. I clearly recall my first IT job. I was appointed Systems Co-ordinator for a leading South African retailer at store level. Don't get me wrong, there is absolutely nothing wrong with an honest day's labor, and in fact I highly recommend it; however, I'm obliged to refer to the designation cautiously – in reality all I had to do was monitor in-store prices and two UNIX front line controllers. If anything went wrong, I only had to phone it in… Luckily that wasn't all I did. My duties extended to another interesting annual occurrence – stock take. Though a somewhat more curious affair, it was still a tedious process that took weeks of preparation and several nights to complete. I also remember that no matter how elaborate our planning was, the entire exercise would be rendered useless if we couldn't get the basics right – that being the act of counting. Sounds simple, right? Well, with a store which could potentially carry tens of thousands of different items… let's just say I believe that's when I first became a coffee addict. In those days the act of counting stock was a very humble process, nothing like we have today. A staff member would be assigned a bin or shelf filled with items he or she had to sort, then count. Thereafter they had to record their findings on an accompanying piece of paper. Every night I would manage several teams. Each team was divided into two groups – counters and auditors. Both groups had the same task, only the auditors followed shortly on the heels of the counters, recounting stock levels and making sure the original count corresponded to their findings. It was a simple yet hugely responsible orchestration of people, and thankfully there was one fundamental, golden rule I could always abide by to ensure things ran smoothly – no-one was allowed to audit their own work. Nope, not even on nights when I didn't have enough staff available. This meant I too at times had to get up there and get counting, or have the audit stand over until the next evening. The reason for this was obvious: late at night, and with so much to do, we were prone to make some mistakes; then on the recount, without a fresh set of eyes, you were likely to repeat the offence. Now, years later, this rule or guideline still holds true as we develop software (as far removed from counting stock as software development may be). For some reason it is a fundamental guideline we're simply ignorant of. We write our code, we write our tests, and thus commit the same horrendous offence. Yes, the procedure of writing unit tests as practiced in most development houses today is flawed. Most if not all of the tests we write today exercise application logic – our logic. They are based on the way we believe an application or method should/may/will behave or function. As we write our tests, our unit tests mirror our best understanding of the inner workings of our application code. Unfortunately, these tests will therefore also include (or be unaware of) any imperfections and errors on our part. If your logic is flawed as you write your initial code, chances are, without a fresh set of eyes, you will commit the same error the second time around too. Not even experience seems to be a suitable solution. It certainly helps to have deeper insight, but is that really the answer we should be looking for? Is that really failsafe? What about code review? Code review is certainly an answer. You could have one developer coding away and another (or a team) making sure the logic is sound.
    The practice, however, has its obvious drawbacks. Firstly and mainly, it is resource-intensive, and from what I've seen in most development houses, given heavy deadlines, this guideline is seldom adhered to. Hardly ever do we have the resources, money or time readily available. So what other options are out there? A quest to find some solution revealed a project by Microsoft Research called PEX. PEX is a framework which creates several test scenarios for each method or class you write, automatically. Think of it as your own personal auditor. Within a few clicks the framework will auto-generate several unit tests for a given class or method and save them to a single project. PEX helps to audit your work. It lends a fresh set of eyes to any project you're working on, and best of all, it is cost-effective and fast. Check them out at http://research.microsoft.com/en-us/projects/pex/ In upcoming posts we'll dive deeper into how it works and how it can help you. Certainly there are more similar frameworks out there and I would love to hear from you. Please share your experiences and insights.
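    To make that concrete, a Pex parameterized unit test looks roughly like this (a sketch: Calculator is a made-up class, and the attributes come from Microsoft.Pex.Framework):

        using Microsoft.Pex.Framework;
        using Microsoft.VisualStudio.TestTools.UnitTesting;

        [PexClass(typeof(Calculator))]
        [TestClass]
        public partial class CalculatorTest
        {
            // Pex generates concrete inputs for a and b by exploring the code,
            // so the test states a property instead of hand-picked examples.
            [PexMethod]
            public void AddIsCommutative(int a, int b)
            {
                Assert.AreEqual(Calculator.Add(a, b), Calculator.Add(b, a));
            }
        }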

    Read the article

  • Wandering CGAffineTransformMakeRotation

    - by Joe
    Okay, this is about to make me insane – any help would be appreciated. I have two images which are part of a timer application. One is the needle/hand and the other is a little hub which is styled to look like the needle base. I'm using a CGAffineTransformMakeRotation to rotate the needle while the base stays stationary. The problem: there is like a 1-2px 'wander' to the needle's rotation which makes it look like it's moving off center in relation to the base. I have worked the base and needle image over in PS extensively, and both are dead center pixel-wise – seriously. My method to rotate the hand:

        - (IBAction)rotateSteamArrow {
            CGAffineTransform rotate = CGAffineTransformMakeRotation(degreesSteam / 180.0 * 3.14159265);
            degreesSteam = degreesSteam + 1.5;
            if (degreesSteam <= 180) {
                [steamNeedle setTransform:rotate];
            } else {
                [self handleSteamTimer];
                [self toggleButton:(id)timerButton];
                [self switchSound];
            }
        }
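    One hedged guess at the cause: a UIView's transform rotates around its layer's anchorPoint (the view centre by default), so if the artwork's pivot is even a pixel off the image centre, the tip will describe a small circle. Rather than re-editing the PNG, the anchor point can be moved to the true pivot (a sketch; the 0.5/0.9 values are purely illustrative, and it requires QuartzCore):

        #import <QuartzCore/QuartzCore.h>

        // Move the rotation origin to the needle's pivot in the artwork.
        CGRect savedFrame = steamNeedle.frame;
        steamNeedle.layer.anchorPoint = CGPointMake(0.5, 0.9); // illustrative pivot
        steamNeedle.frame = savedFrame; // re-anchoring shifts the view; put it back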

    Read the article

  • What does (Lua) game scripting mean?

    - by Gerenuk
    I've read that Lua is often used for embedded scripting, and in particular for game scripting. I find it hard to picture how it is used exactly. Can you describe why, for which features, and for which audience it is used? This question isn't specifically addressing Lua, but rather any embedded scripting that serves a purpose similar to Lua scripting. Is it used for end-users to make custom adjustments? Is it used by game developers to speed up creation of game logic (levels, AI, ...)? Is it used to script game framework code, since scripting can be faster? Basically I'm wondering how deep between plain configuration and framework logic such scripting usage goes. And how much scripting is done – a few configuration lines or a considerable amount?
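    To make the "scripted game logic" case concrete, here is the flavour of thing typically handed to Lua (illustrative only; play_sound, distance and move_towards stand in for whatever bindings a particular engine exposes):

        -- Designers can tune behaviour without recompiling the engine.
        enemy = {
          health = 100,
          speed  = 2.5,
          on_spawn = function(self)
            play_sound("growl")              -- hypothetical engine binding
          end,
          on_update = function(self, player)
            if distance(self, player) < 10 then
              move_towards(self, player)     -- hypothetical engine binding
            end
          end,
        }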

    Read the article

  • How to return model state from child action handler in ASP.NET MVC

    - by Joe Future
    In my blog engine, I have one controller action that displays blog content, and in that view I call Html.RenderAction(...) to render the "CreateComment" form. When a user posts a comment, the post is handled by the comment controller (not the blog controller). If the comment data is valid, I simply return a Redirect back to the blog page's URL. If the comment data is invalid (e.g. the comment body is empty), I want to return the ViewData with the error information back to the blog controller and through the blog view to the CreateComment action/view, so I can display which fields are bad. I have this working fine via AJAX when JavaScript is enabled, but now I'm working on the case where JavaScript might be disabled. If I return a RedirectToAction or Redirect from the comment controller, the model state information is lost. Any ideas?
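    One common pattern for the no-JavaScript case (a sketch; the model and action names are illustrative): stash the ModelState in TempData before redirecting, then merge it back inside the child action so the form re-renders with the errors:

        // In the comment controller:
        [HttpPost]
        public ActionResult Create(CommentModel model)
        {
            if (!ModelState.IsValid)
            {
                TempData["CommentModelState"] = ModelState;  // survives one redirect
                return Redirect(model.BlogPostUrl);
            }
            // ... save the comment ...
            return Redirect(model.BlogPostUrl);
        }

        // In the child action that renders the form:
        [ChildActionOnly]
        public ActionResult CreateCommentForm()
        {
            var saved = TempData["CommentModelState"] as ModelStateDictionary;
            if (saved != null)
                ModelState.Merge(saved);
            return PartialView();
        }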

    Read the article

  • Entity Framework 4 relationship management in POCO Templates - More lazy than FixupCollection?

    - by Joe Wood
    I've been taking a look at the EF4 POCO templates in beta 2. The FixupCollection looks fine for maintaining model correctness after updating a relationship collection property (i.e. updating product.Orders would set the order.Product reference). But what about support for handling the scenario where some of those Order objects are removed from the context? This is the use-case of maintaining cascading deletes in the in-memory model. The old typed DataSet model used to do this by performing the query through the container to derive the relationship results. Like the DataSet, this would require a reference to the ObjectContext inside the entity class so that it could query the top-level Order collection. Better support for separation of concerns in the ObjectContext would be required. It looks like EF is not suited to this use-case, which DataSets handled out of the box… am I right?

    Read the article

  • Row Number for group by ?

    - by Damien Joe
    I have a table structured like:

        Category | EmpName
        ------------------
        1        | Harry
        1        | John
        1        | Ford
        2        | James
        2        | Mark
        2        | Shane
        3        | Oliver
        3        | Ted

    I want results like:

        Category | EmpName | RowNumber
        ------------------------------
        1        | Harry   | 1
        1        | John    | 2
        1        | Ford    | 3
        2        | James   | 1
        2        | Mark    | 2
        2        | Shane   | 3
        3        | Oliver  | 1
        3        | Ted     | 2

    I am using DB2 and row_number() is not working for different groups of records.
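    The usual fix (a sketch; the table name is a placeholder, and the ORDER BY column should be whatever defines the row order within a group): PARTITION BY restarts ROW_NUMBER() for each category, and DB2 supports it:

        SELECT Category,
               EmpName,
               ROW_NUMBER() OVER (PARTITION BY Category ORDER BY EmpName) AS RowNumber
        FROM   Employees
        ORDER  BY Category, RowNumber;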

    Read the article

  • Using XSD to validate node count

    - by heath
    I don't think this is possible but I thought I'd throw it out there. Given this XML:

        <people count="3">
            <person>Bill</person>
            <person>Joe</person>
            <person>Susan</person>
        </people>

    Is it possible in an XSD to force the @count attribute value to be the correct count of defined elements (in this case, the person element)? The above example would obviously be correct and the below example would not validate:

        <people count="5">
            <person>Bill</person>
            <person>Joe</person>
            <person>Susan</person>
        </people>
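    For what it's worth, this kind of co-occurrence constraint is indeed out of reach for XSD 1.0, but XSD 1.1 assertions can express it. A sketch of the relevant fragment (assumes the xs prefix is bound to the XML Schema namespace and an XSD 1.1 processor such as Saxon-EE or Xerces-J):

        <xs:element name="people">
          <xs:complexType>
            <xs:sequence>
              <xs:element name="person" type="xs:string" maxOccurs="unbounded"/>
            </xs:sequence>
            <xs:attribute name="count" type="xs:integer"/>
            <xs:assert test="@count = count(person)"/>
          </xs:complexType>
        </xs:element>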

    Read the article

  • Guides for PostgreSQL query tuning?

    - by Joe
    I've found a number of resources that talk about tuning the database server, but I haven't found much on tuning individual queries. For instance, in Oracle I might try adding hints to ignore indexes or to use sort-merge vs. correlated joins, but I can't find much on tuning Postgres other than advice to use explicit joins and recommendations for bulk loading tables. Do any such guides exist so I can focus on tuning the most-run and/or underperforming queries, hopefully without adversely affecting the currently well-performing queries? I'd even be happy to find something that compared how certain types of queries performed relative to other databases, so I had a better clue of what sort of things to avoid. Update: I should've mentioned, I took all of the Oracle DBA classes along with their data modeling and SQL tuning classes back in the 8i days… so I know about 'EXPLAIN', but that's more to tell you what's going wrong with the query, not necessarily how to make it better. (E.g., are 'where var=1 or var=2' and 'where var in (1,2)' considered the same when generating an execution plan? What if I'm doing it with 10 permutations? When are multi-column indexes used? Are there ways to get the planner to optimize for fastest start vs. fastest finish? What sort of 'gotchas' might I run into when moving from MySQL, Oracle or some other RDBMS?) I could write any complex query dozens if not hundreds of ways, and I'm hoping not to have to try them all and find which one works best through trial and error. I've already found that 'SELECT count(*)' won't use an index, but 'SELECT count(primary_key)' will… maybe a 'PostgreSQL for experienced SQL users' sort of document that explained the sorts of queries to avoid, and how best to re-write them, or how to get the planner to handle them better. Update 2: I found a Comparison of different SQL Implementations which covers PostgreSQL, DB2, MS-SQL, MySQL, Oracle and Informix, and explains whether and how you can do certain things, plus the gotchas involved; its references section linked to Oracle / SQL Server / DB2 / Mckoi / MySQL Database Equivalents (which is what its title suggests) and to the wikibook SQL Dialects Reference, which covers whatever people contribute (including some DB2, SQLite, MySQL, PostgreSQL, Firebird, Virtuoso, Oracle, MS-SQL, Ingres, and Linter).
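    As a starting point for the per-query comparisons described above, EXPLAIN ANALYZE runs the query and reports the planner's choices alongside actual timings, so two phrasings can be compared directly, e.g. (table and column names are illustrative):

        EXPLAIN ANALYZE SELECT count(*) FROM orders WHERE status = 1 OR status = 2;
        EXPLAIN ANALYZE SELECT count(*) FROM orders WHERE status IN (1, 2);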

    Read the article

  • Investigation: Can different combinations of components effect Dataflow performance?

    - by jamiet
    Introduction

    The Dataflow task is one of the core components (if not the core component) of SQL Server Integration Services (SSIS) and often the most misunderstood. This is not surprising; it's an incredibly complicated beast and we're abstracted away from that complexity via some boxes that go yellow, red or green and that have some lines drawn between them (see the example dataflow screenshot in the original post). In this blog post I intend to look under that facade and get into some of the nuts and bolts of the Dataflow Task by investigating how the decisions we make when building our packages can affect performance. I will do this by comparing the performance of three dataflows that all have the same input and all produce the same output, but which all operate slightly differently by way of having different transformation components. I also want to use this blog post to challenge a commonly held opinion that I see perpetuated over and over again on the SSIS forum: that adding components to a dataflow will be detrimental to overall performance. It's not surprising that people think this – it is intuitive to think that more components means more work – however this is not a view that I share. I have always been of the opinion that there are many factors affecting dataflow duration and the number of components is actually one of the less important ones; having said that, I have never proven that assertion and that is one reason for this investigation. I have actually seen evidence that some people think dataflow duration is simply a function of number of rows and number of components. I'll happily call that one out as a myth even without any investigation!

    The Setup

    I have a 2GB data file which is a list of 4731904 (~4.7 million) customer records with various attributes against them, and it contains 2 columns that I am going to use for categorisation: [YearlyIncome] and [BirthDate]. The data file is an SSIS raw format file, which I chose to use because it is the quickest way of getting data into a dataflow, and given that I am testing the transformations, not the source or destination adapters, I want to minimise external influences as much as possible. In the test I will split the customers according to month of birth (12 of those) and whether or not their yearly income is above or below 50000 (2 of those); in other words I will be splitting them into 24 discrete categories, and in order to do it I shall be using different combinations of SSIS' Conditional Split and Derived Column transformation components. The 24 datapaths that occur will each input to a rowcount component, again because this is the least resource-intensive means of terminating a datapath. The test is being carried out on a Dell XPS Studio laptop with a quad-core (8 logical procs) Intel Core i7 at 1.73GHz and a Samsung SSD hard drive. It's running SQL Server 2008 R2 on Windows 7.

    The Variables

    Here are the three combinations of components that I am going to test:

    1. One Conditional Split - A single Conditional Split component, CSPL Split by Month of Birth and Income Category, that will use expressions on [YearlyIncome] & [BirthDate] to send each row to one of 24 outputs (the expression logic is shown in a screenshot in the original post).

    2. Derived Column & Conditional Split - A Derived Column component, DER Income Category, that adds a new column [IncomeCategory] which will contain one of two possible text values {"LessThan50000","GreaterThan50000"} and uses [YearlyIncome] to determine which value each row should get.
    A Conditional Split component, CSPL Split by Month of Birth and Income Category, then uses that new column in conjunction with [BirthDate] to determine which of the same 24 outputs to send each row to. Put more simply, I am separating the Conditional Split of #1 into a Derived Column and a Conditional Split (the expression logic for DER Income Category and CSPL Split by Month of Birth and Income Category is shown in screenshots in the original post).

    3. Three Conditional Splits - A Conditional Split component, CSPL Split by Income Category, that produces two outputs based on [YearlyIncome], one for each income category. Each of those outputs goes to a further Conditional Split (CSPL Split by Month of Birth 1 & 2) that splits the input into 12 outputs, one for each month of birth (identical logic in each). In this case, then, I am separating the single Conditional Split of #1 into three Conditional Split components (screenshots in the original post).

    Each of these combinations will provide an input to one of the 24 rowcount components, just the same as before. For illustration, the original post shows a screenshot of the dataflow containing three Conditional Split components. As you can see, these dataflows have a fair bit of work to do, and remember that they're doing that work for 4.7 million rows. I will execute each dataflow 10 times and use the average for comparison. I foresee three possible outcomes:

    1. The dataflow containing just one Conditional Split (i.e. #1) will be quicker
    2. There is no significant difference between any of them
    3. One of the two dataflows containing multiple transformation components will be quicker

    Regardless of which of those outcomes comes to pass, we will have learnt something, and that makes this an interesting test to carry out. Note that I will be executing the dataflows using dtexec.exe rather than hitting F5 within BIDS.

    The Results and Analysis

    The table below (a screenshot in the original post) shows all of the executions, 10 for each dataflow, along with the average for each and a standard deviation; all durations are in seconds. It is plain to see from the average that the dataflow containing three Conditional Splits is significantly faster, the other two taking 43% and 52% longer respectively. This seems strange though, right? Why does the dataflow containing the most components outperform the other two by such a big margin? The answer is actually quite logical when you put some thought into it, and I'll explain that below. Before progressing, a side note: the standard deviation for the "Three Conditional Splits" dataflow is orders of magnitude smaller, indicating that performance for this dataflow can be predicted with much greater confidence too.

    The Explanation

    I refer you to the screenshot above that shows how CSPL Split by Month of Birth and Income Category in the first dataflow is set up. Observe that there is a case for each combination of Month of Birth and Income Category – 24 in total. These expressions get evaluated in the order that they appear, and hence, if we assume that Month of Birth and Income Category are uniformly distributed in the dataset, we can deduce that the expected number of expression evaluations for each row is (1 + 24) / 2 = 12.5, i.e. the minimum plus the maximum, divided by 2. Now take a look at the screenshots for the second dataflow.
    We are doing one expression evaluation in DER Income Category and we have the same 24 cases in CSPL Split by Month of Birth and Income Category as we had before; only the expression differs slightly. In this case, then, we have 1 + 12.5 = 13.5 expected evaluations for each row – that would account for the slightly longer average execution time for this dataflow. Now onto the third dataflow, the quick one. CSPL Split by Income Category does a maximum of 2 expression evaluations, thus the expected number of evaluations per row is 1.5. CSPL Split by Month of Birth 1 & CSPL Split by Month of Birth 2 both have less work to do than the previous Conditional Split components because they only have 12 cases to test for, thus the expected number of expression evaluations is 6.5. There are two of them, so the total expected number of expression evaluations for this dataflow is 6.5 + 6.5 + 1.5 = 14.5. 14.5 is still more than 12.5 & 13.5 though, so why is the third dataflow so much quicker? Simple: the conditional expressions in the first two dataflows have two boolean predicates to evaluate – one for Income Category and one for Month of Birth – whereas the expressions in the Conditional Splits in the third dataflow only have one predicate, thus they are doing a lot less work. To sum up, the difference in execution times can be attributed to the difference between:

        MONTH(BirthDate) == 1 && YearlyIncome <= 50000

    and

        MONTH(BirthDate) == 1

    In the first two dataflows, YearlyIncome <= 50000 gets evaluated an average of 12.5 times for every row, whereas in the third dataflow it is evaluated once and once only. Multiply those 11.5 extra operations by 4.7 million rows and you get a significant number of extra CPU cycles – that's where our duration difference comes from.

    The Wrap-up

    The obvious point here is that adding new components to a dataflow isn't necessarily going to make it go any slower; moreover, you may be able to achieve significant improvements by splitting logic over multiple components rather than one. Performance tuning is all about reducing the amount of work that needs to be done, and that doesn't necessarily mean using fewer components; indeed, sometimes you may be able to reduce workload in ways that aren't immediately obvious, as I think I have proven here. Of course there are many variables in play here and your mileage will most definitely vary. I encourage you to download the package and see if you get similar results – let me know in the comments. The package contains all three dataflows plus a fourth dataflow that will create the 2GB raw file for you (you will also need the [AdventureWorksDW2008] sample database from which to source the data); simply disable all dataflows except the one you want to test before executing the package, and remember: execute using dtexec, not within BIDS. If you want to explore dataflow performance tuning in more detail then here are some links you might want to check out:

    - Inequality joins, Asynchronous transformations and Lookups
    - Destination Adapter Comparison
    - Don't turn the dataflow into a cursor
    - SSIS Dataflow – Designing for performance (webinar)

    Any comments? Let me know! @Jamiet

    Read the article

  • Cookies not present after using XMLHttpRequest

    - by Joe B
    I'm trying to make a bookmarklet to download videos off of YouTube, but I've come across a little problem. To detect the highest quality video available, I use a sort of brute-force method, in which I make requests using the XMLHttpRequest object until a 404 isn't returned (I can't wait for a 200 OK to be returned, because YouTube redirects to a different server if the video is available, and the cross-domain policy won't allow me to access any of that data). Once a working URL is found, I simply set window.location to the URL and the download should start, right? Wrong. A request is made, but for reasons unknown to me, the cookies are stripped and YouTube returns a 403 access denied. This does not happen if the XML requests aren't made before it, i.e. if I just set window.location to the URL, everything works fine; it's when I do the XMLHttpRequest that the cookies aren't sent. It's hard to explain, so here's the script:

        var formats = ["37", "22", "35", "34", "18", ""];
        var url = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                  "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t'])) + "&fmt=";
        for (var i = 0; i < formats.length; i++) {
            xmlhttp = new XMLHttpRequest();
            xmlhttp.open("HEAD", url + formats[i], false);
            xmlhttp.send(null);
            if (xmlhttp.status != 404) {
                document.location = url + formats[i];
                break;
            }
        }

    That script does not send the cookies after setting document.location and thus does not work. However, simply doing this:

        document.location = "/get_video?video_id=" + yt.getConfig('SWF_ARGS')['video_id'] +
                            "&t=" + (unescape(yt.getConfig('SWF_ARGS')['t']));

    DOES send the cookies along with the request, and does work. The only downside is I can't automatically detect the highest quality; I just have to try every "fmt" parameter manually until I get it right. So my question is: why is the XMLHttpRequest object removing cookies from subsequent requests? This is the first time I've ever done anything in JS, by the way, so please, go easy on me. ;)

    Read the article

  • Access crosstab formula field in another crosstab column?

    - by Damien Joe
    How do I access a crosstab formula field in another column? I have a report like this, with Dues & Total both formula fields:

        Amount   Dues (Done by a Formula)   Total (Done by a Formula)
        ------   ------------------------   -------------------------
        500      20 %                       someAmount

    Formula for Dues:

        WhileReadingRecords;
        numberVar due := {Command.SomeField} / 100;
        due

    Formula for Total:

        WhileReadingRecords;
        numberVar total := {Command.Amount} - due;
        total

    How do I access the due field inside the second formula for each row of records?
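    One approach worth trying (a hedged sketch in Crystal syntax; you may also need to check the evaluation-time directives match): give the variable global scope so the second formula can read the value the first one assigned:

        // Formula for Dues:
        WhileReadingRecords;
        global numberVar due := {Command.SomeField} / 100;
        due

        // Formula for Total:
        WhileReadingRecords;
        global numberVar due;  // re-declare to bring the global into scope
        {Command.Amount} - due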

    Read the article

  • Visualize flowchart diagram with multiple end symbols

    - by platzhirsch
    I am looking for a standardized way to visualize the following hierarchical logic: the state of the thread is represented by the answers to a hierarchical set of questions. You can read this listing like a flowchart: you iterate over the questions, decide, go one step deeper, and so on. Therefore I thought the best way to visualize it was a flowchart. The problem is, in this hierarchical set it is possible to end in more than one state, and that's totally valid. I have never seen a flowchart where you can end in more than one state. Is it still possible and I am just missing the right symbol to represent this logic, or are flowcharts not fitting anyway? What other graphical representation could I use; is there something fitting in UML? A non-deterministic state machine does not seem intuitive enough, transferring it into a deterministic state machine would result in too many states, and so on.

    Read the article

  • SQL SERVER – Monday Morning Puzzle – Query Returns Results Sometimes but Not Always

    - by pinaldave
    Sometimes the amount of email I receive makes it impossible for me to answer every message; nonetheless, I try to answer pretty much every email I receive. However, quite often I receive questions that I have no answer to, either because the email is not complete or because it is outside my domain expertise. Recently I received one email which had only one or two lines, but it indeed attracted my attention. The question was a bit vague, but it made me think. The answer was not straightforward, so I kept writing it down as it came to me; however, after writing the answer I do not feel satisfied. Let me put this question in front of you and see if we all can come up with a comprehensive answer.

    Question: I am a beginner with SQL Server. I have one query; sometimes it returns a result and sometimes it does not. Where should I start looking for a solution, and what kind of information should I send to you so you can help me with solving it? I have no clue, please guide me.

    Well, if you read the question, it is indeed incomplete and does not contain much information at all. I decided to help, and here is the answer I started to compose.

    Answer: As there is not much information in the original question, I am not confident about what will solve your problem. However, here are a few things you can look at to see if they solve it:

    - Check the parameter which is passed to the query. Is the parameter changing across executions?
    - Check the connection string – is there some kind of logic around it?
    - Do you have a non-deterministic component in your query logic? (In other words, is your result based on the current date/time or any other time-based function? See the example below.)
    - Are you facing a timeout while running your query?
    - Is there any error in the error log?
    - What is the business logic in your query?
    - Do you have all the valid permissions to all the objects used in the query? Are permissions changing, or is the query accessing a different object in various executions?
    - (Add your suggestions here)

    Meanwhile, have you ever faced this situation? If yes, do share your experience in the comment area. I will send a copy of my book SQL Server Interview Questions and Answers for the most interesting comment. The winner will be announced by next Monday. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Interview Questions and Answers, SQL Puzzle, SQL Query, SQL Server, SQL Tips and Tricks, T SQL, Technology
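    To illustrate the non-deterministic point in the checklist above: a predicate like the following (table and column names are illustrative) returns different rows as time passes, so a query can legitimately return results sometimes and nothing at other times:

        SELECT OrderId, OrderDate
        FROM   Orders
        WHERE  OrderDate > DATEADD(HOUR, -1, GETDATE());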

    Read the article

  • Where to find changes from iphone OS 3.0 to 3.1.3

    - by Joe
    I've developed an app using the 3.1.3 SDK and I want to set the deployment target to OS 3.0. The app uses iPod functionality which I can't test on the Simulator so my question is: Is there somewhere I can find a list of the changes from 3.0 to 3.1.3 so I can check if anything might be broken on an OS 3.0 device? I've looked on Apple's website obviously but can't find anything. How do people normally test on old software releases?

    Read the article

  • HQL to get elements that possess all items in a set

    - by Tauren
    Currently, I have an HQL query that returns all Members who possess ANY Award from a set of specified Awards:

        from Member m left join m.awards as a where a.name in ("Trophy", "Ribbon")

    What I now need is HQL that will return all Members who possess ALL Awards specified in the set of Awards. So, assuming this data:

        Joe has Trophy, Medal
        Sue has Trophy, Ribbon
        Tom has Trophy, Ribbon, Medal

    The query above would return Joe, Sue, and Tom because all three possess at least one of Trophy or Ribbon. But I need to return only Sue and Tom, because they are the only ones who possess all of the specified awards (Trophy and Ribbon). Here's the class structure (simplified):

        class Member {
            private String name;
            private Set<Award> awards;
        }

        class Award {
            private String name;
        }
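    The usual shape of the ALL-matches query is relational division via group by/having. A sketch (depending on the Hibernate version you may need to group by m.id rather than the entity, and the 2 is the size of the award set, normally bound as a parameter):

        select m
        from Member m
        join m.awards a
        where a.name in ('Trophy', 'Ribbon')
        group by m
        having count(distinct a.name) = 2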

    Read the article
