Search Results

Search found 114 results on 5 pages for 'terry gamble'.

Page 5/5

  • Guessing Excel Data Types

    - by AjarnMark
    Note to Self: HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Jet\4.0\Engines\Excel: TypeGuessRows = 0 means scan everything.

    Note to Others: About 10 years ago I stumbled across this bit of information just when I needed it, and it saved my project. Then a few years later, when it would have been nice but not critical, I could not find it again anywhere. Well, now I have stumbled across it again, and to preserve my future self from nightmares and sudden baldness due to pulling my hair out, I have decided to blog it in the hopes that I can find it again this way.

    Here’s the story… When you query data from an Excel spreadsheet, such as with old-fashioned DTS packages in SQL 2000 (my first reference) or simply with an OLEDB Data Adapter from ASP.NET (recent task), and you are using the Microsoft Jet 4.0 driver (newer ones may deal with this differently), you can get funny results where the query reports that a cell value is null even when you know it contains data. What happens is that Excel doesn’t really have data types. While you can format information in cells to appear like certain data types (e.g. Date, Time, Decimal, Text, etc.), that does not really define the cell as being of a certain type the way we think of it when working with databases. But, presumably to make things more convenient for the user (programmer), when you issue a query against Excel the query processor tries to guess what type of data is contained in each column and returns it in an appropriate manner. This is all well and good IF your data is consistent in every row and matches what the processor guessed. And, for efficiency’s sake, when the query processor is trying to figure out each column’s data type, it does so by analyzing only the first 8 rows of data (the default setting).

    Now here’s the problem. Suppose that your spreadsheet contains information about clothing, and one of the columns is Size. Suppose that in the first 8 rows all of your sizes look like 32, 34, 18, 10, and so on, using numbers, but then somewhere after the 8th row you have some rows with sizes like S, M, L, XL. By examining only the first 8 rows, the query processor inferred that the column contained numerical data, and when it hits the non-numerical data in later rows, those values come back blank. Major bummer, and a real pain to track down if you don’t know that Excel is doing this, because you study the spreadsheet and say, “the data is RIGHT THERE! WHY doesn’t the query see it?!?!” And the hair-pulling begins.

    So, what’s a developer to do? One option is to go to the registry setting noted above and change the DWORD value of TypeGuessRows from the default of 8 to 0 (zero). Setting this value to zero forces Jet to scan every row in the spreadsheet before determining what type of data the column contains. In the example above, it would then have treated the column as a string rather than as numeric, and presto! your query now returns all of the values that you know are in there. Of course, there is a caveat… if you are querying large spreadsheets, making Jet scan every row can be quite a performance hit. You could enter a different number (more than 8) that you believe is a better sampling of rows, but you still have the possibility that every row scanned looks alike while later rows are different, and you might get blanks when there really is data there. That’s the type of gamble I really don’t like to take with my data. Anyone with a better approach, or with experience with more recent drivers that handle data types better, please chime in!
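    For illustration, a minimal sketch of the ASP.NET-style query described above (the file path, sheet name, and Size column are assumptions, not details from the original project). The IMEX=1 extended property is the companion setting often cited alongside TypeGuessRows: it asks Jet to honor the ImportMixedTypes registry value when a column mixes numbers and text.

        using System;
        using System.Data;
        using System.Data.OleDb;

        class ExcelTypeGuessSketch
        {
            static void Main()
            {
                // Hypothetical workbook; "Excel 8.0" matches the Jet 4.0 driver era.
                var connStr = @"Provider=Microsoft.Jet.OLEDB.4.0;" +
                              @"Data Source=C:\data\clothing.xls;" +
                              @"Extended Properties=""Excel 8.0;HDR=Yes;IMEX=1""";

                using (var conn = new OleDbConnection(connStr))
                {
                    var adapter = new OleDbDataAdapter("SELECT * FROM [Sheet1$]", conn);
                    var sizes = new DataTable();
                    adapter.Fill(sizes);

                    // With TypeGuessRows = 0 (or IMEX=1 on a mixed column), the Size
                    // column comes back as text, so "S", "M", "L" are no longer null.
                    foreach (DataRow row in sizes.Rows)
                        Console.WriteLine(row["Size"]);
                }
            }
        }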

    Read the article

  • Show me your Linq to SQL architectures!

    - by Brad Heller
    I've been using Linq to SQL for a new implementation that I've been working on. I have about 5000 lines of code and am a little ways from a solid demo. I've been pretty satisfied with Linq to SQL so far -- the tools are excellent and pretty painless, and it allows you to get a DAL up and running quickly. That said, there are some major drawbacks that I just keep hitting over and over again, namely how to handle separation of concerns between my DAL and my business layer, and juggling that with different data contexts.

    Here is the architecture I've been using: my repositories do all my data access and they return Linq to SQL objects. Each of my Linq to SQL objects implements an IDetachable interface. A typical implementation looks like this:

        partial class PaymentDetail : IDetachable
        {
            #region IDetachable Members
            public bool IsAttached
            {
                get { return PropertyChanging != null; }
            }

            public void Detach()
            {
                if (IsAttached)
                {
                    PropertyChanged = null;
                    PropertyChanging = null;
                    Transaction.Detach();
                }
            }
            #endregion
        }

    Every time I do a DAL operation in my repository I "detach" when I'm done with the object (and it should theoretically detach any child objects) to remove the DataContext's context. Like I said, this works pretty well, but there are some edge cases that seem to be a big pain in the ass. For instance, my Transaction object has many PaymentDetails. Even when there are no PaymentDetails in that collection, it's still attached to the DataContext's context! Thus, if I try to update (I update by Attach()ing the object and then calling SubmitChanges()) I get that dreaded "An attempt has been made to Attach or Add an entity that is not new, perhaps having been loaded from another DataContext. This is not supported." message.

    Anyway, I'm starting to doubt that this technology was a good gamble. Has anyone got a decent architecture that they're willing to share? I'd really love to use this technology, but I feel like I spend 1/3 of my time just debugging its quirks!
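    For reference, a minimal sketch of the attach-then-update step the question describes, against the LINQ to SQL ITable.Attach API. The OrdersDataContext name and the detail variable are hypothetical, and attaching as modified additionally requires a version/timestamp column or UpdateCheck.Never on the mapped members:

        // 'detail' is a PaymentDetail detached from its original DataContext
        // (change-tracking handlers cleared, as in the IDetachable code above).
        using (var db = new OrdersDataContext())   // hypothetical context name
        {
            // second argument true = attach as modified,
            // so SubmitChanges issues an UPDATE for this entity
            db.PaymentDetails.Attach(detail, true);
            db.SubmitChanges();
        }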

    Read the article

  • How can I tackle 'profoundly found elsewhere' syndrome (inverse of NIH)?

    - by Alistair Knock
    How can I encourage colleagues to embrace small-scale innovation within our team(s), in order to get things done quicker and to encourage skills development? (The term 'profoundly found elsewhere' comes from Wikipedia, although it is scarcely used anywhere else apart from a reference to Procter & Gamble.)

    I've worked in both kinds of environment: where there is strong opposition to software which hasn't been developed in-house (usually because there's a large community of developers), and more recently (with far fewer central developers) where off-the-shelf products are far more favoured for the usual reasons: maintenance, total cost over the product lifecycle, risk management and so on. I think the off-the-shelf argument works in the majority of cases for the majority of users, even though as a developer the product never quite does what I'd like it to do. However, in some cases there are clear gaps where the market isn't able to provide specifically what we need, or at least isn't able to without charging astronomical consultancy rates for a bespoke solution. These can be small web applications which provide a short-term solution to a particular need in one specific department, or larger developments that have the potential to serve a wider audience, both across the organisation and in external markets.

    The problem is that while development of these applications would be incredibly cheap in terms of developer hours, and delivered very quickly without the need for glacial consultation, the proposal usually falls flat because of risk:

    'Who'll maintain the project tracker that hasn't had any maintenance for the past 7 years while you're on holiday for 2 weeks?'
    'What if one of our systems changes and the connector breaks?'
    'How can you guarantee it's secure/better/faster/cheaper/holier than Company X's?'

    With one developer behind these little projects, the answers are invariably:

    'Nobody, but...'
    'It will break, just like any other application would...'
    'I, uh...'

    How can I better answer these questions and encourage people to take a little risk, in order to stimulate creativity and fast-paced, short-lifecycle development, instead of spending those 6 months consulting about what tender process we might use?

    Read the article

  • Inequality joins, Asynchronous transformations and Lookups : SSIS

    - by jamiet
    It is pretty much accepted by SQL Server Integration Services (SSIS) developers that synchronous transformations are generally quicker than asynchronous transformations (for a description of synchronous and asynchronous transformations go read Asynchronous and synchronous data flow components). Notice I said “generally” and not “always”; there are circumstances where using asynchronous transformations can be beneficial, and in this blog post I’ll demonstrate such a scenario, one that is pretty common when building data warehouses.

    Imagine I have a [Customer] dimension table that manages information about all of my customers as a slowly-changing dimension. If that is a type 2 slowly changing dimension then you will likely have multiple rows per customer in that table. Furthermore you might also have datetime fields that indicate the effective time period of each member record. Here is such a table that contains data for four dimension members {Terry, Max, Henry, Horace}. Notice that we have multiple records per customer and that the [SCDStartDate] of a record is equivalent to the [SCDEndDate] of the record that preceded it (if there was one). (Note that I am on record as saying I am not a fan of this technique of storing an [SCDEndDate], but for the purposes of clarity I have included it here.)

    Anyway, the idea here is that we will have some incoming data containing [CustomerName] & [EffectiveDate], and we need to use those values to look up [Customer].[CustomerId]. The logic will be: look up a [CustomerId] WHERE [CustomerName]=[CustomerName] AND [SCDStartDate] <= [EffectiveDate] AND [EffectiveDate] <= [SCDEndDate]. The conventional approach to this would be to use a full cached lookup, but that isn’t an option here because we are using inequality conditions. The obvious next step then is to use a non-cached lookup, which enables us to change the SQL statement to use inequality operators. In the resulting dataflow, notice these are all synchronous components. This approach works just fine; however, it has the limitation that it has to issue a SQL statement against your lookup set for every row, thus we can expect the execution time of our dataflow to increase linearly in line with the number of rows in our dataflow; that’s not good.

    OK, that’s the obvious method. Let’s now look at a different way of achieving this, using an asynchronous Merge Join transform coupled with a Conditional Split. I’ve shown it post-execution so that I can include the row counts, which help to illustrate what is going on here. Notice that there are more rows output from our Merge Join component than on the input. That is because we are joining on [CustomerName] and, as we know, we have multiple records per [CustomerName] in our lookup set. Notice also that there are two asynchronous components in here (the Sort and the Merge Join).

    I have embedded a video below that compares the execution times for each of these two methods. The video is just over 8 minutes long. View on Vimeo. For those that can’t be bothered watching the video, I’ll tell you the results here: the dataflow that used the Lookup transform took 36 seconds, whereas the dataflow that used the Merge Join took less than two seconds. Pretty conclusive proof that in some scenarios it may be quicker to use an asynchronous component than a synchronous one. Your mileage may of course vary.

    The scenario outlined here is analogous to performance tuning procedural SQL that uses cursors. It is common to eliminate cursors by converting them to set-based operations, and that is effectively what we have done here. Our non-cached lookup is performing a discrete operation for every single row of data, exactly like a cursor does. By eliminating this cursor-in-disguise we have dramatically sped up our dataflow.

    I hope all of that proves useful. You can download the package that I demonstrated in the video from my SkyDrive at http://cid-550f681dad532637.skydrive.live.com/self.aspx/Public/BlogShare/20100514/20100514%20Lookups%20and%20Merge%20Joins.zip Comments are welcome as always. @Jamiet
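    To make the cursor-in-disguise point concrete, here is a minimal sketch (not from the original package; the table, column, and method names are assumptions standing in for the [Customer] dimension) of what the non-cached Lookup is effectively doing: one parameterized round trip per incoming row.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        static class ScdLookup
        {
            // Called once for every row in the dataflow - exactly the per-row
            // behaviour of the non-cached Lookup transform described above.
            public static object LookupCustomerId(SqlConnection conn, string name, DateTime effective)
            {
                const string sql = @"SELECT CustomerId FROM dbo.Customer
                                     WHERE CustomerName = @name
                                       AND SCDStartDate <= @effective
                                       AND @effective <= SCDEndDate";
                using (var cmd = new SqlCommand(sql, conn))
                {
                    cmd.Parameters.Add("@name", SqlDbType.NVarChar, 100).Value = name;
                    cmd.Parameters.Add("@effective", SqlDbType.DateTime).Value = effective;
                    return cmd.ExecuteScalar();   // one query, one row, one round trip
                }
            }
        }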

    Read the article

  • WebCenter Customer Spotlight: Texas Industries, Inc.

    - by me
    Author: Peter Reiser - Social Business Evangelist, Oracle WebCenter

    Solution Summary
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. TXI is based in Dallas and employs around 2,000 people. The customer had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. TXI implemented a centralized solution leveraging Oracle WebCenter Imaging, a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and Oracle WebCenter Forms Recognition and send the invoices through to Oracle Financials for approvals and processing. TXI significantly lowered resource needs for payables processing, increased productivity by 80%, and reduced invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average.

    Company Overview
    Texas Industries, Inc. (TXI) is a leading supplier of cement, aggregate, and consumer product building materials for residential, commercial, and public works projects. With operating subsidiaries in six states, TXI is the largest producer of cement in Texas and a major producer in California. TXI is a major supplier of stone, sand, gravel, and expanded shale and clay products, and one of the largest producers of bagged cement and concrete products in the Southwest.

    Business Challenges
    TXI had the challenge of decentralized and manual processes for entering 180,000 vendor invoices annually. Invoice entry was a time- and resource-intensive process that entailed significant personnel requirements. Their business objectives were:
    - Increase the efficiency of core business processes, such as invoice processing, to support the organization’s desire to maintain its role as the Southwest’s leader in delivering high-quality, low-cost products to the construction industry
    - Meet the audit and regulatory requirements for achieving Sarbanes-Oxley (SOX) compliance
    - Streamline entry of 180,000 invoices annually to accelerate processing, reduce errors, cut invoice storage and routing costs, and increase visibility into payables liabilities

    Solution Deployed
    TXI replaced a resource-intensive, paper-based, decentralized process for invoice entry with a centralized solution leveraging Oracle WebCenter Imaging 11g. They worked with the Oracle partner Keste LLC to develop a smart routing solution that enables users to capture invoices electronically with Oracle WebCenter Capture and then uses Oracle WebCenter Forms Recognition and the Oracle WebCenter Imaging workflow to send the invoices through to Oracle Financials for approvals and processing.

    Business Results
    - Significantly lowered resource needs for payables processing through centralization and improved efficiency
    - Enabled the company to process invoices faster and pay bills earlier, allowing it to take advantage of additional vendor discounts
    - Tracked to increase productivity by 80% and reduce invoice processing cycle times by 84%—from 20 to 30 days to just 3 to 5 days, on average
    - Achieved a 25% reduction in paper invoice storage costs now that invoices are captured digitally, and enabled a 50% reduction in shipping costs, as the company no longer has to send paper invoices between headquarters and production facilities for approvals

    “Entering and manually processing more than 180,000 vendor invoices annually was time and labor intensive. With Oracle Imaging and Process Management, we have automated and centralized invoice entry and processing at our corporate office, improving productivity by 80% and reducing invoice processing cycle times by 84%—a very important efficiency gain.”
    - Terry Marshall, Vice President of Information Services, Texas Industries, Inc.

    Additional Information
    - TXI Customer Snapshot
    - Oracle WebCenter Content
    - Oracle WebCenter Capture
    - Oracle WebCenter Forms Recognition

    Read the article

  • signalR groups - connecting/disconnecting and sending - am I missing something?

    - by Terry_Brown
    Very new to SignalR, and I have rolled up a very simple app that will take questions for moderation at conferences (felt like a straightforward use case). I have 2 hubs at the moment:
    - Question (for asking questions)
    - Speaker (these should receive questions and allow moderation, but that will come later)
    The solution lives at https://github.com/terrybrown/InterASK

    After watching a video (by David Fowler/Damian Edwards) (http://channel9.msdn.com/Shows/Web+Camps+TV/Damian-Edwards-and-David-Fowler-Demonstrate-SignalR) and another that I can't find the URL for at the moment, I thought I'd go with 'groups' as the concept to keep messages flowing to the right people. I implemented IConnected and IDisconnect as I'd seen in one of the videos, and upon debugging I can see Connect fire (and on reload I can see Disconnect fire), but it seems nothing I do adds a person to a group.

    The SignalR documentation says "Groups are not persisted on the server so applications are responsible for keeping track of what connections are in what groups so things like group count can be achieved", which I guess is telling me that I need some method (static or otherwise?) of tracking who is in a group? Certainly I don't seem able to send to groups currently, though I have no problem distributing to anyone currently connected to the app and implementing the same JS method (2 machines on the same page).

    I suspect I'm just missing something - I read a few of the other questions on here, but none of them mention IConnected/IDisconnect, which tells me these are either new (and nobody is using them) or old (and nobody is using them). I know this could be considered a subjective question, but what I'm looking for is just a simple means of managing the groups so that I can do what I want to: send a question from one hub, and have people connected to a different hub receive it - groups felt the cleanest solution for this? Many thanks folks. Terry
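    For what it's worth, here is a minimal sketch of group membership bookkeeping in a hub. It is written against the later Microsoft.AspNet.SignalR Hub API rather than the IConnected/IDisconnect interfaces the question uses, and the method names (JoinModerators, receiveQuestion) are hypothetical:

        using System.Collections.Concurrent;
        using System.Threading.Tasks;
        using Microsoft.AspNet.SignalR;

        public class QuestionHub : Hub
        {
            // SignalR does not persist group membership server-side,
            // so keep our own record of which connections have joined.
            private static readonly ConcurrentDictionary<string, byte> Moderators =
                new ConcurrentDictionary<string, byte>();

            public Task JoinModerators()
            {
                Moderators.TryAdd(Context.ConnectionId, 0);
                return Groups.Add(Context.ConnectionId, "moderators");
            }

            public Task AskQuestion(string text)
            {
                // Every connection that joined the group receives the question.
                return Clients.Group("moderators").receiveQuestion(text);
            }

            public override Task OnDisconnected(bool stopCalled)
            {
                byte unused;
                Moderators.TryRemove(Context.ConnectionId, out unused);
                return base.OnDisconnected(stopCalled);
            }
        }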

    Read the article

  • Problem Displaying XML in Grid View-newbie

    - by Dean
    I am trying to do something in VisualWebDev 2008 Express that I thought would be simple, but it is not working. I want to display data from an XML file, so I added the XMLDataSource to my page, pointed it to the XML file, and then added the GridView and connected it to the data source. I am getting the following error:

    GridView - GridView1: There was an error rendering the control. The data source for GridView with id 'GridView1' did not have any properties or attributes from which to generate columns. Ensure that your data source has content.

    Could someone please tell me what I might be doing wrong? TIA, Dean

    A snippet from my XML is as follows: 6019 - Renaissance MS - New School Renaissance MS 7155 Hall Road Fairburn, GA 30213 NS-6019200-LA-01 New School Close-out NS-6019200 0.000000000000000e+000 The construction of the new Renaissance MS will be at the intersection of Jones/Hall Road, in the districts 7th & 9F and Land Lots 117, 143 & 146 of Fulton County, GA. The work includes the construction of the 180,500 square foot building that will house 34 standard classrooms, 12 standard science labs, 20 special purpose classrooms, cafeteria and kitchen, gymnasium, media center and administrative offices. The site will also have multi-purpose playfields with track, softball field, tennis courts and basketball/volleyball court. Terry O'Brien Parsons Stevens Wilkinson Stang Newdow Barton Malow -84.62242 33.61497
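    A hedged aside: GridView column auto-generation over an XmlDataSource works from XML attributes, so element-centric XML like the snippet above commonly produces exactly this "no properties or attributes" error. One sketch of a workaround (the file path is an assumption) is to load the XML into a DataSet, which infers columns from elements, and bind that instead:

        using System;
        using System.Data;

        public partial class SchoolList : System.Web.UI.Page
        {
            protected void Page_Load(object sender, EventArgs e)
            {
                // DataSet.ReadXml infers one DataTable per repeating element,
                // with child elements becoming columns the GridView can render.
                var ds = new DataSet();
                ds.ReadXml(Server.MapPath("~/App_Data/Schools.xml"));  // hypothetical path

                GridView1.DataSource = ds.Tables[0];
                GridView1.DataBind();
            }
        }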

    Read the article

  • Should I learn to code?

    - by saltcod
    Hi All, This is more of a philosophical question than a technical one, but I’d like some opinions on it, and I think that there are many others in my position who would benefit.

    My issue is that I don’t really have time to learn how to code. I know, I know… no one has time anymore, but please hear me out. Since learning to use Drupal about 2 years ago I’ve been involved with several projects wherein I’ve become the default quasi-developer, front-end designer, site manager, and system administrator. What I’ve found is that I can produce fairly nice, feature-rich Drupal sites with the wealth of contrib modules out there (Views, CCK, image handling, etc.). BUT! I can’t code. I know enough PHP to insert something into a block, or re-word a string, but that’s about it. I still don’t really even know how arrays work.

    My question: Succinctly, given the time that I have available for all of this stuff – in addition to a full-time job and regular life – am I better off trying to become more expert at the front-end stuff, or should I just learn PHP already?

    Pros
    1. If a project doesn’t use Drupal, I’ll know enough PHP to be able to participate.
    2. Learning PHP would help my Drupal development too.
    3. Learning PHP would make front-end theming easier.
    4. Learning PHP should give me that missing background in programming – and should allow me to learn other languages in the future.

    Cons
    1. At 28, I know I’m not too old to learn anything. But am I too old to become ‘good’?
    2. Am I better off getting better and better at front-end UX work?
    3. Am I better off farming out the PHP work?

    Suggestions from coders welcome! Thanks, Terry

    Read the article

  • JSON is not being recognised

    - by richzilla
    Hi All, I'm having a bit of trouble getting my JSON to be recognised by my web page. I have validated the JSON I'm getting returned from the server, so I know that is correct; however, my JavaScript function is not doing anything with it. My success function is as follows:

        success: function(data) {
            $('input[name=customer_name]').val(data.name);
            $('textarea[name=customer_address]').text(data.address);
            $('input[name=customer_email]').val(data.email);
            $('input[name=customer_tel]').val(data.tel);
            $('input[name=user_id]').val(item.id);
        }

    Yet the fields are not being repopulated with the data that is returned. If it helps, a sample of my JSON data:

        {
            "name": "Terry O'Toole",
            "address": "Terrys House\nTerry Street\nTerrysville\nTerrytown\nTT1 6TT",
            "email": "[email protected]",
            "tel": "05110000000"
        }

    Any help would be appreciated.

    [EDIT] Expanded ajax call:

        $.ajax({
            url: "<?php echo site_url('user/users/ajax'); ?>",
            type: 'POST',
            data: {"userid": item.id},
            success: function(data) {
                $('input[name=customer_name]').val(data.name);
                $('textarea[name=customer_address]').text(data.address);
                $('input[name=customer_email]').val(data.email);
                $('input[name=customer_tel]').val(data.tel);
                $('input[name=user_id]').val(item.id);
            }
        })
    });

    Read the article

  • Use your own domain email and tired of SPAM? SPAMfighter FTW

    - by Dave Campbell
    I wouldn't post this if I hadn't tried it... and I paid for it myself, so don't anybody be thinking I'm reviewing something someone sent me!

    Long ago and far away I got very tired of local ISPs and 2nd phone lines and took the plunge and got hooked up to cable... yeah, I know the 2nd phone line concept may be hard for everyone to understand, but that's how it was in 'the old days'. To avoid having to change email addresses all the time, I decided to buy a domain name, get minimal hosting, and use that for all email into the house. That way if I changed providers, all the email addresses wouldn't have to change. Of course, about a dozen domains later, I have LOTS of POP email addresses and even an Exchange address to my client's server... times have changed.

    What also has changed is the fact that we get SPAM... 'back in the day' when I was a beta tester for the first ISP in Phoenix, someone tried sending an ad to all of us, and what he got in return for his trouble was a bunch of core dumps that locked up his email... if you don't know what a core dump is, ask your grandfather. But in today's world, we're all much more civilized than that, and as with many things, the criminals seem to have far more rights than we do, so we get inundated with email offering all sorts of wild schemes that you'd have to be brain-dead to accept. And yet... if people weren't accepting them, they'd stop sending them. I keep hoping that survival of the smartest would weed out the mental midgets that respond so the junk email stops, but that hasn't happened yet, any more than finding high-quality hearing aids at the checkout line of Safeway because of all the dimwits playing music too loud inside their cars... but that's another whole topic and I digress.

    So what's the solution for all the spam? And I mean *all*... on that old personal email address, I am now getting over 150 spam messages a day! Yes, I know that's why God invented the delete key, but I took it on as a challenge, and it's a matter of principle... why should I switch email addresses, or convert from [email protected] to something else, or have all my email filtered through some service, just because some A-Hole somewhere has a site up trying to phish Ma & Pa Kettle (ask your grandfather about that too) out of their retirement money?

    Well... I got an email from my cousin the other day while I was writing yet another email rule, and there was a banner on the bottom of his email that said he was protected by SPAMfighter. SPAMfighter, huh... so I took a look at their site, and found yet one more of the supposed tools to help us. But... I read that they're a Microsoft Gold Partner... and that doesn't come lightly... so I took a gamble, and here's what I found. I installed it, and had to do a couple things:

    1) SPAMfighter stuffed the SPAMfighter folder into my client's Exchange address... I deleted it, made a new SPAMfighter folder where I wanted it to go, then in the SPAMfighter client's settings for Outlook, I told it to put all spam there.
    2) It didn't seem to be doing anything. There's a ribbon button you can select, "Block", and I did that, wondering if I was 'training' it, but it wasn't picking up duplicates.
    3) I sent email to support, and wrote a post on the forum (note to self: reply to that post). By the time the folks from the home office responded, it was the next day, and first up, SPAMfighter knocked down everything that came through when Outlook opened... two thumbs up! I disabled my 'garbage collection' rule in Outlook, and told Outlook not to use the junk folder, thinking it was interfering.
    4) Day 2 seemed to go about like Day 1... but I hung in there.
    5) Day 3 is now a whole new day... I had left Outlook open and hadn't looked at the PC since sometime late yesterday afternoon, and when I looked this morning, *every bit* of spam was in the SPAMfighter folder!!

    I'm a new paying customer. After watching SPAMfighter work this morning, I've purchased a 1-year license, and I can now sit and watch as emails come in and disappear from my inbox into the SPAMfighter folder. No more continual tweaking of the rules. I've got SPAMfighter set to 'Very Hard' filtering... personally I'd rather pull the few real emails out of the SPAMfighter folder than pull spam out of the real folders. Yes, this is simply another way of using the delete key, but you know what? ... it feels good :)

    Here's a screenshot of the stats after just about 48 hours of being on board. Note that all the ones blocked by me were during Day 1 and 2... I've blocked none today, and everything is blocked. Stay in the 'Light!

    Read the article

  • getting Internet connection sharing working in a slightly more complicated configuration

    - by tirichitirca t
    I have the following configuration:
    - Computer A: Mac OS X 10.8.4, wireless & wired adapters
    - Computer B: Windows 7 (64-bit), wireless & wired adapters, has an internet connection via the wired adapter (ethernet)
    - A D-Link wired/wireless router

    Problem to solve: connect from computer A to the internet through the wired connection of computer B.

    I tried the following. I set up a local network between A and B using the D-Link router. The configuration is this:
    - D-Link router: 192.168.0.1
    - A: wired connection to the D-Link router, static 192.168.0.101 (I could have used the wireless, but I preferred the wired connection)
    - B: wireless connection to the D-Link router, DHCP 192.168.0.102 (but I made sure it always gets the same address)
    - B: wired connection to the internet using some address that begins with 10.x.y.z

    In this configuration A can see B. I enabled ICS on the wired adapter of B. I set up the gateway of A to point to B, and the DNS servers of A to point to the DNS servers specified for the 10.x.y.z address. It doesn't work; A goes only as far as B. It can ping the 10.x.y.z address of B, though.

    I then found this article: http://terrybritton.com/windows-internet-connection-sharing-ics-not-working-with-linux-bridging-is-the-solution-916/. Terry is suggesting that a bridge should be defined on B between the two connections. I tried that, but basically computer B is screwed as soon as I create the bridge; it can't connect to the internet anymore. It is as if the network bridge thinks the traffic to the internet should go from the wired connection to the wireless and not the other way around.

    The other thing that puzzles me is the router itself. In general a router needs an internet address; in a normal configuration it is the router that gets the IP address, and the internet traffic goes through the router. In my case I am not interested in that. So, any suggestions to get this working? I wouldn't shy away from using commercial software, but I would think Windows 7 should allow me to do it. Thanks

    Read the article

  • How to use PredicateBuilder with nested OR conditionals in Linq

    - by tblank
    I've been very happily using PredicateBuilder, but until now have only used it for queries with either concatenated AND statements or concatenated OR statements. Now for the first time I need a pair of OR statements nested along with some AND statements, like this:

        select x from Table1 where a = 1 AND b = 2 AND (z = 1 OR y = 2)

    Using the documentation from Albahari, I've constructed my expression like this:

        Expression<Func<TdIncSearchVw, bool>> predicate = PredicateBuilder.True<TdIncSearchVw>();          // for AND
        Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR

        innerOrPredicate = innerOrPredicate.Or(i => i.IncStatusInd.Equals(incStatus));
        innerOrPredicate = innerOrPredicate.Or(i => i.RqmtStatusInd.Equals(incStatus));

        predicate = predicate.And(i => i.TmTec.Equals(tecTm));
        predicate = predicate.And(i => i.TmsTec.Equals(series));
        predicate = predicate.And(i => i.HistoryInd.Equals(historyInd));
        predicate.And(innerOrPredicate);

        var query = repo.GetEnumerable(predicate);

    This results in SQL that completely ignores the two OR phrases:

        select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2)

    If I try using just the OR phrases, like:

        Expression<Func<TdIncSearchVw, bool>> innerOrPredicate = PredicateBuilder.False<TdIncSearchVw>(); // for OR
        innerOrPredicate = innerOrPredicate.Or(i => i.IncStatusInd.Equals(incStatus));
        innerOrPredicate = innerOrPredicate.Or(i => i.RqmtStatusInd.Equals(incStatus));
        var query = repo.GetEnumerable(innerOrPredicate);

    I get SQL as expected, like:

        select X from TdIncSearchVw where (IncStatusInd = incStatus OR RqmtStatusInd = incStatus)

    If I try using just the AND phrases, like:

        predicate = predicate.And(i => i.TmTec.Equals(tecTm));
        predicate = predicate.And(i => i.TmsTec.Equals(series));
        predicate = predicate.And(i => i.HistoryInd.Equals(historyInd));
        var query = repo.GetEnumerable(predicate);

    I get SQL like:

        select x from TdIncSearchVw where ((this_."TM_TEC" = :p0 and this_."TMS_TEC" = :p1) and this_."HISTORY_IND" = :p2)

    which is exactly the same as the first query. It seems like I'm so close it must be something simple that I'm missing. Can anyone see what I'm doing wrong here? Thanks, Terry
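    For comparison, a sketch of the same composition in the Albahari PredicateBuilder style, reusing the question's type and member names. Note that expression trees are immutable, so every And/Or call returns a new expression that must be captured:

        var inner = PredicateBuilder.False<TdIncSearchVw>();
        inner = inner.Or(i => i.IncStatusInd.Equals(incStatus));
        inner = inner.Or(i => i.RqmtStatusInd.Equals(incStatus));

        var outer = PredicateBuilder.True<TdIncSearchVw>();
        outer = outer.And(i => i.TmTec.Equals(tecTm));
        outer = outer.And(i => i.TmsTec.Equals(series));
        outer = outer.And(i => i.HistoryInd.Equals(historyInd));
        outer = outer.And(inner);   // the result of And() must be reassigned

        var query = repo.GetEnumerable(outer);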

    Read the article

  • Internationalization of static pages with Rails

    - by Gavin
    I feel like I'm missing something really simple and I keep spinning my wheels on this problem. I currently have internationalization working throughout my app. The translations work and the routes work perfectly. At least, most of the site works, with the exception of the routes to my two static pages, my "About" and "FAQ" pages. Every other link throughout the app points to the proper localized route. For example, if I select French as my language, links point to the appropriate "(/:locale)/controller(.:format)". However, despite the changes I make throughout the app, my links for "About" and "FAQ" refuse to point to "../fr/static/about" and always point to "/static/about". To make matters stranger, when I run rake routes I see:

        GET (/:locale)/static/:permalink(.:format) pages#show {:locale=>/en|fr/}

    and when I manually type in "../fr/static/about" the page translates perfectly.

    My routes file:

        devise_for :users
        scope "(:locale)", :locale => /en|fr/ do
          get 'static/:permalink', :controller => 'pages', :action => 'show'
          resources :places, only: [:index, :show, :destroy]
          resources :homes, only: [:index, :show]
          match '/:locale' => 'places#index'
          get '/' => 'places#index', :as => "root"
        end

    My ApplicationController:

        before_filter :set_locale

        def set_locale
          I18n.locale = params[:locale] || I18n.default_locale
        end

        def default_url_options(options={})
          logger.debug "default_url_options is passed options: #{options.inspect}\n"
          { :locale => I18n.locale }
        end

    And my PagesController:

        class PagesController < ApplicationController
          before_filter :validate_page

          PAGES = ['about_us', 'faq']

          def show
            render params[:permalink]
          end

          def validate_page
            redirect_to :status => 404 unless PAGES.include?(params[:permalink])
          end
        end

    I'd be very grateful for any help ... it's just been one of those days.

    Edit: Thanks to Terry for jogging me to include views.

        <div class="container-fluid nav-collapse">
          <ul class="nav">
            <li class="dropdown">
              <a href="#" class="dropdown-toggle" data-toggle="dropdown"><%= t(:'navbar.about') %><b class="caret"></b></a>
              <ul class="dropdown-menu">
                <li><%= link_to t(:'navbar.about_us'), "/static/about_us" %></li>
                <li><%= link_to t(:'navbar.faq'), "/static/faq" %></li>
                <li><%= link_to t(:'navbar.blog'), '#' %></li>
              </ul>
            </li>

    Read the article

  • Visualising a 'Smarties' lid using XAML (WPF/Silverlight, Visual Studio/Blend)

    - by Mr. Disappointment
    Hi folks, First off, to clarify something in the title which could well be ambiguous/misleading, I'd like to inform you of my definition of 'Smarties', as I know products are often available all over, only under a different alias. Smarties are a candy product in the UK: little chocolate drops covered in a crispy shell which are distributed in a card tube. This tube used to have a plastic lid/top with an individual letter on the underside (they've taken a more economical approach as of late), and the lid/top of the old-style tube is the main element of this question.

    Familiarisation Link
    Lid View Link

    Okay, now with the seller-type pitch out of the way (no, I don't work for Nestlé ;)), hopefully the question is becoming rather clear. Essentially, I'd like to recreate one of these lids using XAML, ultimately to be utilised in a Silverlight web application. That is, I'd like to end up with a reusable control, of which the following is true:
    It looks like a Smarties lid.
    The colour can be specified.
    The letter can be specified.
    The control can be rotated to display either side.

    The second two seem trivial, but we must bear in mind that the background colour specified will almost, if not always, be the same as the foreground, leaving a visibility issue where the character content is concerned; as for the rotation, I'm hoping this kind of functionality is reasonably available, and acceptable to implement. So, to put this out there, consider a control named SmartiesLid which derives from ToggleButton (appropriate?) and is further plotted out using a style in a resource dictionary which applies to it, as follows:

        <Style TargetType="local:SmartiesLid">
            <Setter Property="Background" Value="Red"/>
            <Setter Property="Foreground" Value="Red"/>
            <Setter Property="VerticalContentAlignment" Value="Center"/>
            <Setter Property="HorizontalContentAlignment" Value="Center"/>
            <Setter Property="Template">
                <Setter.Value>
                    <ControlTemplate TargetType="local:SmartiesLid">
                        <Grid x:Name="LayoutRoot">
                            <Grid.ColumnDefinitions>
                                <ColumnDefinition Width=".05*"/>
                                <ColumnDefinition/>
                                <ColumnDefinition/>
                                <ColumnDefinition Width=".05*"/>
                            </Grid.ColumnDefinitions>
                            <Grid.RowDefinitions>
                                <RowDefinition Height=".05*"/>
                                <RowDefinition/>
                                <RowDefinition/>
                                <RowDefinition Height=".05*"/>
                                <RowDefinition Height=".1*"/>
                            </Grid.RowDefinitions>
                            <Ellipse Grid.RowSpan="4" Grid.ColumnSpan="4" Fill="{TemplateBinding Background}" Stroke="Transparent"/>
                            <Ellipse Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Fill="{TemplateBinding Background}" Stroke="Transparent">
                                <Ellipse.Effect>
                                    <DropShadowEffect Direction="280" ShadowDepth="6" BlurRadius="6"/>
                                </Ellipse.Effect>
                            </Ellipse>
                            <TextBlock Grid.RowSpan="2" Grid.ColumnSpan="2" Grid.Column="1" Grid.Row="1" Name="LetterTextBlock"
                                       Text="{TemplateBinding Content}" Foreground="{TemplateBinding Foreground}" FontSize="190"
                                       HorizontalAlignment="Center" VerticalAlignment="Center">
                            </TextBlock>
                            <!-- <Path Stretch="Fill" Grid.Row="3" Grid.RowSpan="2" Grid.Column="1" Grid.ColumnSpan="2" Fill="Black" Data="...">
                                 How to draw the lid 'tab'?
                            </Path> -->
                        </Grid>
                        <ControlTemplate.Resources>
                            <TranslateTransform x:Key="IndentTransform" X="10" />
                            <RotateTransform x:Key="RotateTransform" Angle="0" />
                            <Storyboard x:Key="MouseOver"> </Storyboard>
                            <Storyboard x:Key="MouseLeave"> </Storyboard>
                        </ControlTemplate.Resources>
                        <ControlTemplate.Triggers>
                            <Trigger Property="IsMouseOver" Value="true">
                                <Trigger.EnterActions>
                                    <BeginStoryboard Storyboard="{StaticResource MouseOver}"/>
                                </Trigger.EnterActions>
                                <Trigger.ExitActions>
                                    <BeginStoryboard Storyboard="{StaticResource MouseLeave}"/>
                                </Trigger.ExitActions>
                            </Trigger>
                            <Trigger Property="IsPressed" Value="true">
                                <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource IndentTransform}"/>
                            </Trigger>
                            <Trigger Property="IsChecked" Value="true">
                                <Setter TargetName="LayoutRoot" Property="RenderTransform" Value="{StaticResource RotateTransform}"/>
                            </Trigger>
                            <Trigger Property="IsEnabled" Value="False">
                                <Setter Property="Foreground" Value="Gray"/>
                                <Setter Property="Opacity" Value="0.5"/>
                            </Trigger>
                        </ControlTemplate.Triggers>
                    </ControlTemplate>
                </Setter.Value>
            </Setter>
        </Style>

    With this in mind, can anyone give input on the following, in decreasing order of my incompetence in each area:

    1. Designing the overall look and feel of the damn thing (I'm no designer, and while I could hack away at this single control for days and potentially get something relatively useful, it's always a gamble). The particular barrier for me here is 'pathing' the tab of the lid, as you will see in the XAML as a commented-out element. Should Path be used, or would it be more appropriate to transform a rectangle with rounded corners? Any specific suggestions?

    2. Bevelling the individually displayed letter; as detailed above, when the colour of both the foreground and background is the same, the letter will be invisible if no effects are applied, and for a decent level of realism I'd like to be able to apply such effects. So far, use of DropShadow and Balder3DEngine has fulfilled my requirements for graphics in XAML; how achievable is a bevel effect?

    3. Rotating the control on mouse-click, that is, showing the opposing face. Is this going to be possible using a style and XAML only for the design? Or is it that ugliness may rear its head in the form of code-behind to show/hide embedded controls? Should the faces be separate controls that are later somehow combined?

    4. Allowing the control to size dynamically. I'm supposing I will be able to convert a solid, absolute layout to a nice generic one once I actually have the former in place. Obviously this entails sizing the centralised letter and the lid 'tab', but that's it really, other than keeping the aspect ratio equal (since the ellipses grow nicely with the grid). Any suggestions for approaching this would be greatly appreciated, particularly with a dynamically growing font - I've done that before in a web-imaging scenario using code and System.Drawing, and wouldn't like to approach it in even a similar way.

    By the way, the reason I specify both WPF and Silverlight is that, from my current knowledge, anything written targeting either of these will be fairly transferable for similar output by the other, albeit not without alterations in either direction. The resulting application is in fact destined to be written in Silverlight, so I don't fancy inviting anything from WPF which will guarantee my only being able to convert 90% of it. I'll go give this little project a start, maybe in Blend(?), and hopefully I can catch up with some advice shortly. Thanks, Mr. D

    EDIT: Next question - ought this to be broken up into separate questions? :/

    Read the article
