Search Results

Search found 33585 results on 1344 pages for 'sql execution plan'.


  • Web host that handles excess php mailing [closed]

    - by CyberJunkie
    Possible Duplicate: How to find web hosting that meets my requirements? I'm having trouble choosing a host that meets my application's needs, so please let me know what I should look for when choosing a web host. My application is essentially a reminder service: it emails users about events they have scheduled. I'm sending the emails through a PHP script, and I'm not sending one email to multiple recipients – each recipient receives a different message. I plan to use cron jobs every minute and expect the application to send roughly 200 individual emails per hour (for a small user base that may grow). I don't have hosting experience with this type of application. I plan to start on a shared host and move up to a VPS or dedicated server later. Most shared hosts that I looked into allow 50-100 emails per hour, with delays between mailings. What do you recommend?

    Read the article

  • Need to add 30K new pages to a 10K page website - troubles ahead? (SEO)

    - by Jurga
    We have a situation where we plan to add a huge number of new pages to a website. The domain is over 10 years old and has approximately 10,000 indexed pages; the planned addition is approximately 30,000 new pages. Any idea how we should go about it? Must we schedule a gradual release of the data? Have you heard of any industry standards for how many new pages per day / week / month should be added in order to appear natural and not get in trouble with Google? E.g. should we plan to add 5K every two weeks?

    Read the article

  • table in drupal with edit link

    - by user544079
    I have a table created in Drupal with an edit link pointing to the input form. But the problem is, it only displays the last row's values in the $email and $comment variables. Can anyone suggest how to modify the table display so that the edit link points to the corresponding record?

        function _MYMODULE_sql_to_table($sql) {
          $html = "";
          // execute sql
          $resource = db_query($sql);
          // fetch database results in an array
          $results = array();
          while ($row = db_fetch_array($resource)) {
            $results[] = $row;
            // NOTE: these are overwritten on every iteration, so after the
            // loop they hold the values of the LAST row only
            $email = $row['email'];
            $comment = $row['comment'];
            drupal_set_message('Email: ' . $email . ' comment: ' . $comment);
          }
          // ensure results exist
          if (!count($results)) {
            $html .= "Sorry, no results could be found.";
            return $html;
          }
          // create an array to contain all table rows
          $rows = array();
          // get a list of column headers
          $columnNames = array_keys($results[0]);
          // loop through results and create table rows
          foreach ($results as $key => $data) {
            // create row data ($email and $comment still hold the last row's
            // values here; using $data['email'] and $data['comment'] instead
            // would give each row its own edit link)
            $row = array(
              'edit' => l(t('Edit'), "admin/content/test/$email/$comment/ContactUs", $options = array()),
            );
            // loop through column names
            foreach ($columnNames as $c) {
              $row[] = array(
                'data' => $data[$c],
                'class' => strtolower(str_replace(' ', '-', $c)),
              );
            }
            // add row to rows array
            $rows[] = $row;
          }
          // loop through column names and create headers
          // (the original post had 'data' = $c here, which is a parse error;
          // the array key syntax is =>)
          $header = array();
          foreach ($columnNames as $c) {
            $header[] = array(
              'data' => $c,
              'class' => strtolower(str_replace(' ', '-', $c)),
            );
          }
          // generate table html
          $html .= theme('table', $header, $rows);
          return $html;
        }

        // then you can call it in your code...
        function _MYMODULE_some_page_callback() {
          $html = "";
          $sql = "select * from {contactus}";
          $html .= _MYMODULE_sql_to_table($sql);
          return $html;
        }

    Read the article

  • hibernate pagination mechanism

    - by haicnpmk44
    I am trying to use Hibernate pagination for my query (PostgreSQL). I set setFirstResult(0) and setMaxResults(20) for my query. My code looks like this:

        Session session = getSessionFactory().getCurrentSession();
        session.beginTransaction();
        Query query = session.createQuery("select id, customer_name, address from tbl_customers");
        query.setFirstResult(0);
        query.setMaxResults(20);
        List<T> entities = query.list();
        session.getTransaction().commit();

    But when viewing the Hibernate SQL log, I still see the full query:

        Hibernate: select customer0_.id as id9_, customer0_.customer_name as dst2_9_, customer0_.addres as dst3_9_ from tbl_customers customer0_

    Why is there no LIMIT or OFFSET in the query in the Hibernate pagination SQL log? Does anyone know how Hibernate's pagination mechanism works? I guess that Hibernate selects all the data, puts it into a ResultSet, and then pages within the ResultSet. Is that right?
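    For reference, when the configured dialect supports limits (as the PostgreSQL dialect does), setFirstResult/setMaxResults are expected to be pushed into the generated SQL, roughly like the sketch below (a sketch only; the column aliases are illustrative). When the dialect cannot express a limit, Hibernate instead advances through the JDBC ResultSet and skips rows client-side, which would match the behaviour guessed at above:

        -- What setFirstResult(0) / setMaxResults(20) should produce with a
        -- LIMIT-capable dialect (column aliases are illustrative):
        select customer0_.id            as id9_,
               customer0_.customer_name as dst2_9_,
               customer0_.address       as dst3_9_
        from   tbl_customers customer0_
        limit  20 offset 0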

    Read the article

  • How to append an integer to a const char value in iphone?

    - by Warrior
    I have a SQL query statement that is used to display the contents of a table. The statement has a WHERE clause to which a numeric value (1, 2, 3, etc.) must be appended, depending on the previously selected content. I have the numeric value as an int and I want to append it to the SQL statement, which is a const char. How can I combine the two values? Please help me out. My query is: select * from Book where id=1; and I have the id value as an integer. Thanks.
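    A hedged alternative to building the string at all, assuming the query runs against SQLite on the iPhone: keep the statement as a fixed const char with a ? placeholder and bind the integer at execution time (via sqlite3_prepare_v2 followed by sqlite3_bind_int), which sidesteps the int-to-string conversion and any quoting issues:

        -- The statement text never changes; the int value for id is bound
        -- to the ? placeholder when the statement is executed:
        SELECT * FROM Book WHERE id = ?;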

    Read the article

  • PASS Summit Feedback

    - by Rob Farley
    PASS Feedback came in last week. I also saw my dentist for some fillings... At the PASS Summit this year, I delivered a couple of regular sessions and a Lightning Talk. People told me they enjoyed them, but when the rankings came out, they showed that I didn't score particularly well. Brent Ozar was keen to discuss it with me.

    Brent: PASS speaker feedback is out. You did two sessions and a Lightning Talk. How did you go?

    Rob: Not so well actually, thanks for asking.

    Brent: Ha! Sorry. Of course you know that's why I wanted to discuss this with you. I was in one of your sessions at SQLBits in the UK a month before PASS, and I thought you rocked. You've got a really good and distinctive delivery style. Then I noticed your talks were ranked in the bottom quarter of the Summit ratings and wanted to discuss it.

    Rob: Yeah, I know. You did ask me if we could do this... I should explain – my presentation style is not the stereotypical IT conference one. I throw in jokes, and try to engage the audience thoroughly. I find many talks amazingly dry, and I guess I try to buck that trend. I also run training courses, and find that I get a lot of feedback from people thanking me for keeping things interesting. That said, I also get feedback criticising me for my style, and that's basically what's happened here. For the rest of this discussion, let's focus on my talk about the Incredible Shrinking Execution Plan, which I considered to be my main talk.

    Brent: I thought that session title was the very best one at the entire Summit, and I had it on my recommended sessions list. In four words, you managed to sum up the topic and your sense of humor. I read that and immediately thought, "People need to be in this session," and then it didn't score well. Tell me about your scores.

    Rob: The questions on the feedback form covered the usefulness of the information, the speaker's presentation skills, their knowledge of the subject, how well the session was described, the amount of time allocated, and the quality of the presentation materials.

    Brent: Presentation materials? But you don't do slides. Did they rate your thong?

    Rob: No-one saw my flip-flops in this talk, Brent. I created a script in Management Studio, and published that afterwards, but I think people will have scored that question based on the lack of slides. I wasn't expecting to do particularly well on that one. That was the only section that didn't have 5/5 as the most popular score.

    Brent: See, that sucks, because cookbook-style scripts are often some of my favorites. Adam Machanic's Service Broker workbench series helped me immensely when I was prepping for the MCM. As an attendee, I'd rather have a commented script than a slide deck. So how did you rank so low?

    Rob: When I look at the scores that you got (based on your blog post), you got very few scores below 3 – people who felt strongly enough about your talk to post a negative score. In my scores, between 5% and 10% were below 3 (except on the question about whether I knew my stuff – I guess I came across as knowledgeable).

    Brent: Wow – so quite a few people really didn't like your talk then?

    Rob: Yeah. Mind you, based on the comments, some people really loved it. I'd like to think that there would be a certain portion of the room who may have rated the talk as one of the best of the conference. Some of my comments included "amazing!", "Best presentation so far!", "Wow, best session yet", "fantastic" and "Outstanding!". I think lots of talks can be "Great", but not so many talks can be "Outstanding" without the word losing its meaning. One wrote "Pretty amazing presentation, considering it was completely extemporaneous."

    Brent: Extemporaneous, eh?

    Rob: Yeah. I guess they don't realise how much preparation goes into coming across as unprepared. In many ways it's much easier to give a written speech than to deliver a presentation without slides as a prompt.

    Brent: That delivery style, the really relaxed, casual, college-professor approach was one of the things I really liked about your presentation at SQLBits. As somebody who presents a lot, I "get" it – I know how hard it is to come off as relaxed and comfortable with your own material. It's like improv done by jazz players and comedians – if you've never tried it, you don't realize how hard it is. People also don't realize how hard it is to make a tough subject fun.

    Rob: Yeah well... There will be people writing comments on this post that say I wasn't trying to make the subject fun, and that I was making it all about me. Sometimes the style works, sometimes it doesn't. Most of the comments mentioned the fact that I tell jokes, some in a nice way, but some not so much (and it wasn't just a PASS thing – that's the mix of feedback I generally get). One comment at PASS was: "great stand up comedian - not what I'm looking for at pass", and there were certainly a few that said "too many jokes". I'm not trying to do stand-up – jokes are my way of engaging with the audience while I demonstrate some of the amazing things that the Query Optimizer can do if you write your queries the right way. Some people didn't think it was technical enough, but I've also had some people tell me that the concepts I'm explaining are deep and profound.

    Brent: To me, that's a hallmark of a great explanation – when someone says, "But of course it has to work that way – how could it work any other way? It seems so simple and logical." Well, sure it does when it's explained correctly, but now pick up any number of thick SQL Server books and try to understand the Redundant Joins concept. I guarantee it'll take more than 45 minutes.

    Rob: Some people in my audiences realise that, but definitely not everyone. There's only so much you can tell someone that something is profound. Generally it's something they either have an epiphany about or not. I like to lull my audience into knowing what's going on, and then do something that surprises them. Gain their trust, build a rapport, and then show them the deeper truth of what just happened.

    Brent: So you've learned your lesson about presentation scores, right? From here on out, you're going to be dry, humorless, and all your presentations will consist of you reading bullet points off the screen.

    Rob: No Brent, I'm not. I'm also not going to suggest that most presentations at PASS are like that. No-one tries to present like that. There's a big space to occupy between "dry and humourless" and me. My difference is to focus on the relationship I have with the crowd, rather than focussing on delivering the perfect session. I want to see people smiling and know they're relaxed. I think most presenters focus on the material, which is completely reasonable and safe. I remember once hearing someone talking about product creation. They talked about mediocrity. They said that one of the worst things that people can ever say about your product is that it's "good". What you want is for 10% of the world to love it enough to want to buy it. If 10% of the world gave me a dollar, I'd have more money than I could ever use (assuming it wasn't the SAME dollar they were giving me, I guess).

    Brent: It's the Raving Fans theory. It's better to have a small number of raving customers than a large number of almost-but-not-really customers who don't care that much about your product or service. I know exactly how you feel – when I got survey feedback from my Quest video presentation when I was dressed up in a Richard Simmons costume, some of the attendees said I was unprofessional and distracting. Some of the attendees couldn't get enough and Photoshopped all kinds of stuff into the screen captures. On the whole, I probably didn't score that well, and I'm fine with that. It sucks to look at the scores though – do those lower scores bother you?

    Rob: Of course they do. It hurts deeply. I open myself up and give presentations in a very personal way. All presenters do that, and we all feel the pain of negative feedback. I hate coming 146th & 162nd out of 185, but have to acknowledge that many sessions did worse still. Plus, once I feel the wounds have healed, I'll be able to remember that there are people in the world who rave about my presentation style, and figure that people will hopefully talk about me. One day maybe those people who don't like my presentation style will stay away and I might be able to score better. You don't pay to hear country music if you prefer western... Lots of people find chili too spicy, but it's still a popular food.

    Brent: But don't you want to appeal to everyone?

    Rob: I do, but I don't want to be lukewarm as in Revelation 3:16. I'd rather disgust and be discussed. Well, maybe not 'disgust', but I don't want to conform. I'm not sure I've ever been one to do that. I try not to offend, but I definitely like to be different.

    Brent: Count me among your raving fans, sir. Where can we see you next?

    Rob: Considering I live in Adelaide in Australia, I'm not about to appear at anyone's local SQL Saturday. I'm still trying to plan which events I'll get to in 2011. I've submitted abstracts for TechEd North America, but won't hold my breath. I'm also considering the SQLBits conferences in the UK in April, PASS in October, and I'm sure I'll do some LiveMeeting presentations for user groups. Online, people can download some of my recent SQLBits presentations at http://bit.ly/RFSarg and http://bit.ly/Simplification, and they can download a 5-minute MP3 of my Lightning Talk at http://www.lobsterpot.com.au/files/Collation.mp3, in which I try to explain the idea behind collation, using thongs as an example.

    Brent: I was in the audience for http://bit.ly/RFSarg. That was a great presentation.

    Rob: Thanks, Brent. Now where's my dollar?
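    As a footnote for readers who haven't seen the Redundant Joins idea Brent mentions: here is a minimal, hypothetical T-SQL sketch of the kind of plan simplification being discussed (the tables and names are invented for illustration and are not taken from Rob's talk). Because the trusted foreign key guarantees that every row in Books matches exactly one row in Authors, and the query uses no column from Authors, the optimizer can remove the join from the execution plan entirely:

        -- Hypothetical schema: the trusted FK plus NOT NULL means every
        -- Books row references exactly one Authors row.
        CREATE TABLE dbo.Authors (
            AuthorID   INT PRIMARY KEY,
            AuthorName NVARCHAR(100) NOT NULL
        );
        CREATE TABLE dbo.Books (
            BookID   INT PRIMARY KEY,
            AuthorID INT NOT NULL REFERENCES dbo.Authors (AuthorID)
        );

        -- Authors appears in the query but contributes no columns, so the
        -- plan can scan dbo.Books alone: the join is redundant.
        SELECT b.BookID
        FROM   dbo.Books AS b
               INNER JOIN dbo.Authors AS a
                       ON a.AuthorID = b.AuthorID;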

    Read the article

  • I spy a Live Framework portal

    - by jamiet
    Those that have followed my blogs for a while may know that I have a slightly banal interest in Windows Live and, more specifically, the Live Services developer platform; if that doesn't sound interesting to you then stop reading now. My interest mainly stems from the Live Mesh technology that was announced a couple of years ago and the data synchronisation platform API that underpins it; that platform is called the Live Framework, or LiveFX for short. At the Professional Developer's Conference (PDC) 2008 Microsoft made LiveFX available to the public as a Tech Preview and I spent some time learning to use it and also built a few test apps on it too. In August 2009 an announcement came that the tech preview was getting shut down: "At the Professional Developer Conference 2008, we gave the developer community access to the technical preview of the Live Framework. The Live Framework is core to our vision of providing you with a consistent programming interface. Now we are working to integrate existing services, controls and the Live Framework into the next release of Windows Live. Your feedback continues to help us build the best possible offerings for Windows Live users, for you and for your customers." Since then news on LiveFX has disappeared, save for a throwaway session at PDC09, and I was hoping that news would appear at this week's MIX conference, but nothing was forthcoming. Instead, today I stumbled upon an unannounced portal for future LiveFX applications on Microsoft's Azure portal at http://live.azure.com. Check it out: I consider this to be very good news. This Azure portal was built after the LiveFX tech preview was decommissioned, so seeing Live Services exist so prominently alongside Microsoft's other cloud efforts like Windows Azure and SQL Azure vindicates my early investment in the platform and gives me hope that we're going to see something get released very very soon. I believe that the potential uses for this platform are extremely compelling and I'm looking forward to trying some out in the near future. I am also expecting LiveFX to have a heavy dependency on the OData protocol that I talked about yesterday in my post OData.org updated - gives clues about future sql azure enhancements, so you can tell where my interest in that stems from. In case you're wondering, the projects that you see listed above (Basic List Sample, JT-proj etc…) are projects that I built on the old Tech Preview platform, so clearly that stuff has not gone for good, which is also good news; not just because it means I'll have access to the code I wrote before, but also because I assume it means that LiveFX won't have changed much since its tech preview incarnation. I know there are other LiveFX buffs out there and hopefully this news reaches some of them. If you are one of them then please put a comment below and let me know your thoughts! @Jamiet

    Read the article

  • SQLSaturday 33 Observations

    - by Geoff N. Hiten
    Along with a lot of my colleagues, I went to SQLSaturday #33 in Charlotte this last weekend. Overall a really good event, especially for a first-time organizer. There is some controversy over certain events where my name got mentioned, so I thought I would clear the air. Before I get to the core controversy, let's get the details out of the way. The Microsoft offices in Charlotte were an excellent venue for this event. I really appreciated the Microsoft employees who helped out by letting us in and out of normally secure areas. This is definitely above and beyond on their part. Thanks to the organizers (especially Greg and Peter) for the great hospitality they showed to the speakers. Now for the specifics. Like most events of this type, there was a raffle at the end for some cool swag. As a speaker I got raffle tickets just like any other attendee. The raffle was clearly promoted as "must be present to win". The problem is that, for various reasons, the raffle kicked off immediately after the last speaker finished in the largest room. That room was across the parking lot from all the other rooms for the event. I happened to have one of the last sessions of the day, and not in the main room. I also ran long, since the audience was very interactive and there were a lot of follow-up questions. (BTW, thanks to everyone who came and stayed for my session. Sorry it cost you the chance to win too.) My name was drawn for a very nice piece of swag (an iPod Touch, if you insist). Since I wasn't there, I didn't win. Several folks mentioned I was still speaking and was "here" (as in at the event), just not "here in the room". Yes, I was mad when I found out about it. I think that was handled poorly. I personally lost out, as did my audience (dunno if anyone specific lost anything, but it is the idea that counts). It was a mistake. Mistakes happen. Nobody acted maliciously. Heck, the guys running the event who made the decision are my friends and remain so. I got over my mad. We talked about this privately and we are all OK with what happened. I am not going to let a gadget get in the way of a couple of good friendships. I think the mistake was mostly due to a lack of unity between the venue buildings. Pam Shaw had a similar challenge in Tampa a few weeks ago, including a speaker who ran long on the last session (not me that time). She had a couple of teenage volunteers acting as gofers/runners. They counted heads in sessions, pointed people to last-minute room and session changes, and generally helped connect the organizers to what was actually happening. Note that this was not Pam's first SQLSaturday event. She knew, but the knowledge had not been institutionalized. We (the SQL community in general and SQLSaturday organizers in particular) now know how essential gofers are to success. I know I spent most of this post focusing on the controversy, but I wanted to clear everything up. I don't want to let a minor mistake, made in good faith, overshadow what was a tremendously good event for the community. As for the iPod Touch, someone in the SQL community is enjoying it, so it is not a total loss. And if losing out on it is the price I pay so we can learn this, then that is what a community leader does. Consider it a gift. Besides, I really wanted a Zune 120 :)

    Read the article

  • Meet SQLBI at PASS Summit 2012 #sqlpass

    - by Marco Russo (SQLBI)
    Next week Alberto Ferrari and I will be in Seattle at PASS Summit 2012. You can meet us at our sessions, at a book signing, and hopefully while watching some other session during the conference. Here are our appointments:

    Thursday, November 08, 2012, 10:15 AM - 11:45 AM – Alberto Ferrari – Room 606-607
    Querying and Optimizing DAX (BIA-321-S)
    Do you want to learn how to write DAX queries and how to optimize them? Don't miss this session!

    Thursday, November 08, 2012, 12:00 PM - 12:30 PM – Bookstore
    Book signing event at the Bookstore corner with Alberto Ferrari, Marco Russo and Chris Webb. Visit the bookstore and have your copy of our Microsoft SQL Server 2012 Analysis Services: The BISM Tabular Model book signed.

    Thursday, November 08, 2012, 1:30 PM - 2:45 PM – Marco Russo – Room 611
    Near Real-Time Analytics with xVelocity (without DirectQuery) (BIA-312)
    What's the latency you can tolerate for your data? Discover what the limit is in Tabular without using DirectQuery and learn how to optimize your data model and your queries for a near real-time analytical system. Not a trivial task, but more affordable than you might think.

    Friday, November 09, 2012, 9:45 AM - 11:00 AM
    Parent-Child Hierarchies in Tabular (BIA-301)
    Multidimensional has more advanced support for hierarchies than Tabular, but in reality you can do almost the same things by using data modeling, DAX functions and BIDS Helper!

    Friday, November 09, 2012, 1:00 PM - 2:15 PM – Marco Russo – Room 612
    Inside DAX Query Plans (BIA-403)
    Discover the query plan for your DAX query and learn how to read it and how to optimize a DAX query using this information.

    If you meet us at the conference, stop us and say hello: it's always nice to get to know our readers!

    Read the article

  • Some thoughts on email hosting for one’s own domain

    - by jamiet
    I have used the same email providers for my own domains for a few years now; however, I am considering moving over to a new provider. In this post I'll share my current thoughts, and hopefully I'll get some feedback that might help me decide what to do next.

    What I use today
    I have three email addresses that I use primarily (I have changed the domains in this blog post as I don't want to give them away to spammers): [email protected] – My personal account that I give out to family and friends and which I use to register on websites; [email protected] – An account that I use to catch email from the numerous mailing lists that I am on; [email protected] – I am a self-employed consultant, so this is an account that I hand out to my clients, my accountant, and other work-related organisations. Those two domains (jtpersonaldomain.com & jtworkdomain.com) are both managed at http://domains.live.com, which is a fantastic service provided by Microsoft that for some perplexing reason they never bother telling anyone about. It offers multiple accounts (I have seven at jtpersonaldomain.com though, as already stated, I only use two of them) which are accessed via Outlook.com (formerly Hotmail.com), along with usage reporting plus a few other odds and sods that I never use. Best of all though, it's totally free. In addition, given that I have both domains hosted using http://domains.live.com, I can link my various accounts together and switch between them at Outlook.com without having to log in and out. N.B. You'll notice that there are two other accounts listed there in addition to the three I already mentioned. One is my mum's account, which helps me provide IT support/spam filtering services to her, and the other is the donation account for AdventureWorks on Azure. I find that linking feature to be very handy indeed. Finally, http://domains.live.com is the epitome of "it just works". I set up jtworkdomain.com at http://domains.live.com over three years ago and I am pretty certain I haven't been back there even once to administer it.

    Proposed changes
    OK, so if I like http://domains.live.com so much, why am I considering changing? Well, I earn my corn in the Microsoft ecosystem and if I'm reading the tea-leaves correctly it's looking increasingly likely that the services I'm going to have to be familiar with in the future are all going to be running on top of and alongside Windows Azure Active Directory and Office 365 respectively. It's clear to me that Microsoft is pushing its customers toward cloud services and, like it or lump it, data integration developers like me may have to come along for the ride. I don't think the day is too far off when we can log into Windows Azure SQL Database (aka SQL Azure), Team Foundation Service, Dynamics etc… using the same credentials that are currently used for Office 365, and over time I would expect those things to get integrated together a lot better – that integration will be based upon a Windows Azure Active Directory identity. This should not come as a surprise; in my opinion Microsoft's whole enterprise play over the past 15 or 20 years can be neatly summarised as "get people onto Windows Server and Active Directory then upsell from there" – in the not-too-distant future the only difference is that they're trying to do it in the cloud. I want to get familiar with these services and hence I am considering moving jtworkdomain.com onto Office 365.
    I'll lose the convenience of easily being able to switch to that account at Outlook.com and moreover I'll have to start paying for it (I think it'll be about fifty quid a year – not a massive amount, but it's quite a bit more than free), but increasingly this is beginning to look like a move I have to make. So that's where my head is at right now. Anyone have any relevant thoughts or experiences to share? Please let me know in the comments below. @Jamiet

    Read the article

  • Unused Indexes Gotcha

    - by DavidWimbush
    I'm currently looking into dropping unused indexes to reduce unnecessary overhead, and I came across a very good point in the excellent SQL Server MVP Deep Dives book that I haven't seen highlighted anywhere else. I was thinking it was simply a case of dropping indexes that didn't show as being used in the DMV sys.dm_db_index_usage_stats (assuming a solid, representative workload had been run since the last service start). But Rob Farley points out that the DMV only shows indexes whose pages have been read or updated. An index that isn't listed in the DMV might still be useful by providing metadata to the Query Optimizer and thus streamlining query plans. For example, if you have a query like this:

        select  au.author_name
                , count(*) as books
        from    books b
                inner join authors au on au.author_id = b.author_id
        group by au.author_name

    If you have a unique index on authors.author_name, the Query Optimizer will realise that each author_id will have a different author_name, so it can produce a plan that just counts the books by author_id and then adds the author name to each row in that small table. If you delete that index, the query will have to join all the books with their authors and then apply the GROUP BY – a much more expensive query. So be cautious about dropping apparently unused unique indexes.
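    For anyone who wants a starting point for the hunt, here is a minimal sketch (assuming SQL Server 2005 or later, and remembering the caveat above: the DMV's contents reset at service restart, and an absent index may still be supplying useful metadata) that lists indexes in the current database with no recorded activity:

        -- Indexes in the current database with no entry in the usage DMV,
        -- i.e. no recorded seeks, scans, lookups or updates since restart.
        SELECT o.name AS table_name,
               i.name AS index_name
        FROM   sys.indexes i
               INNER JOIN sys.objects o
                       ON o.object_id = i.object_id
               LEFT JOIN sys.dm_db_index_usage_stats u
                      ON u.object_id = i.object_id
                     AND u.index_id = i.index_id
                     AND u.database_id = DB_ID()
        WHERE  o.is_ms_shipped = 0
          AND  i.index_id > 0          -- ignore heaps
          AND  u.object_id IS NULL;    -- never touched since last restart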

    Read the article

  • Multidimensional Thinking–24 Hours of Pass: Celebrating Women in Technology

    - by smisner
    It's Day 1 of #24HOP and it's been great to participate in this event with so many women from all over the world in one long training-fest. The SQL community has been abuzz on Twitter with running commentary, which is fun to watch while listening to the current speaker. If you missed the fun today because you're busy with all that work you've got to do – don't despair. All sessions are recorded and will be available soon. Keep an eye on the 24 Hours of Pass page for details. And the fun's not over today. Rather than run 24 hours consecutively, #24HOP is now broken down into two 12-hour blocks over two days, so check out the schedule to see if there's a session that interests you and fits your schedule. I'm pleased to announce that my business colleague Erika Bakse (Blog | Twitter) will be presenting on Day 2 – her debut presentation for a PASS event. (And I'm also pleased to say she's my daughter!)

    Multidimensional Thinking: The Presentation
    My contribution to this lineup of terrific speakers was Multidimensional Thinking. Here's the abstract: "Whether you're developing Analysis Services cubes or creating PowerPivot workbooks, you need to get into a multidimensional frame of mind to produce a model that best enables users to answer their business questions on their own. Many database professionals struggle initially with multidimensional models because the data modeling process is much different than the one they use to produce traditional, third normal form databases. In this session, I'll introduce you to the terminology of multidimensional modeling and step through the process of translating business requirements into a viable model." If you watched the presentation and want a copy of the slides, you can download a copy here. And you're welcome to download the slides even if you didn't watch the presentation, but they'll make more sense if you did!

    Kimball All the Way
    There's only so much I can cover in the time allotted, but I hope that I succeeded in my attempt to build a foundation that prepares you for starting out in business intelligence. One of my favorite resources, which gets into much more detail about all kinds of scenarios (well beyond the basics!), is The Data Warehouse Toolkit (Second Edition) by Ralph Kimball. Anything from Kimball or the Kimball Group is worth reading. Kimball material might take reading and re-reading a few times before it makes sense. From my own experience, I found that I actually had to just build my first data warehouse using dimensional modeling on faith that I was going in the right direction, because it just didn't click with me initially. I've had years of practice since then, and I can say it does get easier with practice. The most important thing, in my opinion, is that you simply must prototype a lot and solicit user feedback, because ultimately the model needs to make sense to the users. They will definitely make sure you get it right!

    Schema Generation
    One question came up after the presentation about whether we use SQL Server Management Studio or Business Intelligence Development Studio (BIDS) to build the tables for the dimensional model. My answer? It really doesn't matter how you create the tables. Use whatever method you're comfortable with. But it just so happens that it IS possible to set up your design in BIDS as part of an Analysis Services project and to have BIDS generate the relational schema for you. I did a Webcast last year called Building a Data Mart with Integration Services that demonstrated how to do this.
    Yes, the subject was Integration Services, but as part of that presentation I showed how to leverage Analysis Services to build the tables, and then I showed how to use Integration Services to load those tables. I blogged about this presentation in September 2010 and included downloads of the project that I used. In the blog post, I explained that I missed a step in the demonstration. Oops. Just as an FYI, there were two more Webcasts to finish the story begun with the data mart – Accelerating Answers with Analysis Services and Delivering Information with Reporting Services. If you want to just cut to the chase and learn how to use Analysis Services to build the tables, you can see the Using the Schema Generation Wizard topic in Books Online.

    Read the article

  • How to use a list of values in Excel as filter in a query

    - by Luca Zavarella
    It often happens that a customer provides us with a list of items for which to extract certain information. Imagine, for example, that our client wishes to have the header information of the sales orders only for certain orders. Most likely he will give us a list of items in a column in Excel or, less probably, a simple text file with the identification codes:

    As long as the given values are a dozen at best, it costs us nothing to copy and paste those values into SSMS and place them in a WHERE clause, using the IN operator and making sure to include the quotes in the case of alphanumeric elements (the sample database is AdventureWorks2008R2):

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            'SO43667'
            ,'SO43709'
            ,'SO43726'
            ,'SO43746'
            ,'SO43782'
            ,'SO43796')

    Clearly, the need to add commas and quotes becomes a hassle when dealing with hundreds of items (which of course has happened to us!). It would be convenient to do a simple copy and paste, leaving the items as they are pasted, and have the query work fine. We can get this convenience via a User Defined Function that returns the items in a table. We simply provide the function with an input string parameter containing the pasted items. Here is the T-SQL code, where comments are there to clarify what was written:

        CREATE FUNCTION [dbo].[SplitCRLFList] (@List VARCHAR(MAX))
        RETURNS @ParsedList TABLE
        (
            --< Set the item length to suit your needs
            Item VARCHAR(255)
        )
        AS
        BEGIN
            DECLARE
                --< Set the item length to suit your needs
                @Item VARCHAR(255)
                ,@Pos BIGINT

            --< Trim TABs due to indentation
            SET @List = REPLACE(@List, CHAR(9), '')

            --< Trim leading and trailing spaces, then add a CR/LF at the end of the list
            SET @List = LTRIM(RTRIM(@List)) + CHAR(13) + CHAR(10)

            --< Set the position at the first CR/LF in the list
            SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

            --< If chars other than CR/LFs exist in the list then...
            IF REPLACE(@List, CHAR(13) + CHAR(10), '') <> ''
            BEGIN
                --< Loop until the CR/LFs are over (not found = CHARINDEX returns 0)
                WHILE @Pos > 0
                BEGIN
                    --< Get the leading list chars from the first char to the first CR/LF and trim spaces
                    SET @Item = LTRIM(RTRIM(LEFT(@List, @Pos - 1)))

                    --< If the item calculated this way is not empty...
                    IF @Item <> ''
                    BEGIN
                        --< ...insert it into the @ParsedList table variable
                        INSERT INTO @ParsedList (Item)
                        VALUES (@Item) --(CAST(@Item AS int)) --< Use the appropriate conversion if needed
                    END

                    --< Remove the first item from the list...
                    SET @List = RIGHT(@List, LEN(@List) - @Pos - 1)

                    --< ...and set the position to the next CR/LF
                    SET @Pos = CHARINDEX(CHAR(13) + CHAR(10), @List, 1)

                    --< Repeat this block while the above loop condition holds
                END
            END
            RETURN
        END

    At this point, having created the UDF, our query is trivially transformed into:

        SELECT *
        FROM Sales.SalesOrderHeader AS SOH
        WHERE SOH.SalesOrderNumber IN (
            SELECT Item
            FROM SplitCRLFList('SO43667
        SO43709
        SO43726
        SO43746
        SO43782
        SO43796') AS SCL)

    Convenient, isn't it? You can find the script DBA_SplitCRLFList.sql here. Bye!!

    Read the article

  • Optimize Many-to-Many with SUMMARIZE and Other Techniques

    - by Marco Russo (SQLBI)
    We are still in the early days of DAX and, even though I have been using it for two years, there is still a lot to learn. One of the topics that has historically interested me (and probably many of the readers here) is many-to-many relationships between dimensions in a dimensional data model. When Alberto and I wrote The Many to Many Revolution 2.0, we discovered the SUMMARIZE-based pattern very late in the writing of the whitepaper. It is very important for performance optimization and should always be used. In the last month, Gerhard Brueckl also presented an approach based on cross table filtering behavior that simplifies the syntax involved, even if it's harder to explain how it works internally. I published a short article titled Optimize Many-to-Many Calculation in DAX with SUMMARIZE and Cross Table Filtering on the SQLBI website just to provide a quick reference to the three patterns available. A further study is still required to compare performance between the SUMMARIZE and Cross Table Filtering patterns. Up to now, I haven't observed big differences between them, even if their execution plans might not be identical, and this suggests that depending on other conditions you might favor one over the other.

    Read the article

  • Observable Adapter

    - by Roman Schindlauer
    .NET 4.0 introduced a pair of interfaces, IObservable<T> and IObserver<T>, supporting subscriptions to and notifications for push-based sequences. In combination with Reactive Extensions (Rx), these interfaces provide a convenient and uniform way of describing event sources and sinks in .NET. The StreamInsight CTP refresh in November 2009 included an Observable adapter supporting “reactive” event inputs and outputs. While we continue to believe it enables an important programming model, the Observable adapter was not included in the final (RTM) release of Microsoft StreamInsight 1.0. The release takes a dependency on .NET 3.5 but for timing reasons could not take a dependency on .NET 4.0. Shipping a separate copy of the observable interfaces in StreamInsight – as we did in the CTP refresh – was not a viable option in the RTM release. Within the next months, we will be shipping another preview of the Observable adapter that targets .NET 4.0. We look forward to gathering your feedback on the new adapter design! We plan to include the Observable adapter implementation into the product in a future release of Microsoft StreamInsight.

    Read the article

  • How to Manage Technical Employees

    - by Ajarn Mark Caldwell
    In my current position as Software Engineering Manager I have been through a lot of ups and downs with staffing, ranging from laying off everyone on my team as we went through the great economic downturn in 2007-2008, to numerous rounds of interviewing and hiring contractors and full-time employees, and converting some contractors to employee status. I have not yet blogged much about my experiences, but I plan to do more of that in the next few months. But before I do, let me point you to a great article that somebody else wrote, The Unspoken Truth About Managing Geeks, which really hits the target. If you are a non-technical person who manages technical employees, you definitely have to read that article. And if you are a technical person who has been promoted into management, this article can really help you do your job and communicate up the line of command about your team. When you move into management, with all the new and different demands put on you, it is easy to forget how things work in the tech subculture and to lose touch with your team. This article will help you remember what's going on behind the scenes and perhaps explain why people who used to get along great no longer are, or why things seem to have changed since your promotion. I have to give credit to Andy Leonard (blog | twitter) for helping me find that article. I have been reading his series of ramble-rants on managing tech teams, and the above article is linked in the first rant in the series, entitled Goodwill, Negative and Positive. I have read a handful of his entries in this series and so far I pretty much agree with everything he has said, so of course I would encourage you to read through that series, too.

    Read the article

  • Is your TRY worth catching?

    - by Maria Zakourdaev
    A very useful error-handling construct, TRY/CATCH, is widely used to catch all execution errors that do not close the database connection. The biggest downside is that in the case of multiple errors the TRY/CATCH mechanism will only catch the last error. An example of this can be seen during a standard restore operation. In this example I attempt to perform a restore from a file that no longer exists, and two errors are fired: 3201 and 3013. Inside a TRY/CATCH construct, the ERROR_MESSAGE() function will catch the last message only. To work around this problem you can prepare a temporary table that will receive the statement output, execute the statement inside the xp_cmdshell stored procedure (connecting back to the SQL Server using the command-line utility sqlcmd), and redirect its output into the previously created temp table. After receiving the output, you will need to parse it to understand whether the statement finished successfully or failed. It's quite easy to accomplish as long as you know which statement was executed. In the case of generic executions you can query the output table and search for strings like "Msg%Level%State%" that are usually part of an error message. Furthermore, you don't need TRY/CATCH in the above workaround, since the xp_cmdshell procedure always finishes successfully and you can decide whether to fire the RAISERROR statement or not. Yours, Maria
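    A minimal sketch of that workaround (the server name, trusted-connection option, database name and backup path below are placeholders, and xp_cmdshell must be enabled for this to run):

        -- Capture the full output of the statement, not just the last error.
        CREATE TABLE #cmd_output (line NVARCHAR(4000) NULL);

        INSERT INTO #cmd_output (line)
        EXEC master..xp_cmdshell
            'sqlcmd -E -S MYSERVER -Q "RESTORE DATABASE MyDb FROM DISK = N''C:\backup\missing.bak''"';

        -- Parse the captured output for error-message lines.
        IF EXISTS (SELECT * FROM #cmd_output WHERE line LIKE 'Msg%Level%State%')
            RAISERROR('Restore failed; see #cmd_output for every error raised.', 16, 1);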

    Read the article

  • PASS Summit 2011 &ndash; Part I

    - by Tara Kizer
    What an amazing week I had at PASS Summit 2011 in Seattle, WA! I hadn't attended a PASS conference since September of 2005, when it was in Grapevine, Texas. It has grown so much since then. I am not sure how many people attended back then, but I'd guesstimate about 1500. They announced that at this year's conference there were 4000 attendees. WOW! Here are my favorite aspects of this conference:

    Networking! – Not only did I meet a lot of new people, but I also got to meet people in person that I've known on the Internet for years, like Mladen Prajdic (blog|twitter) and Rob Volk (blog|twitter). I even met someone that I'd recently helped out in the SQLTeam forums.

    Learning – I took a lot of notes during the sessions I attended and plan on blogging very soon about them. It is amazing the amount of things you learn and the things that you unlearn. Yes, I said unlearn. Some of the stuff that I thought I knew was either out-dated or just plain wrong.

    Fun, fun, fun – To say that this conference was fun would be an understatement. I had a blast! I attended the "Welcome Reception and Quizbowl" on Tuesday night, the "Exhibitor Reception" on Wednesday night, and the "Community Appreciation Party" at GameWorks on Thursday night. There were many other after-hours events to attend, but I had to make my kids a priority at night, so I had to get back to my hotel room before 9pm so that I could Skype with them. It was very entertaining reading and posting with #sqlpass on Twitter. Twitter has changed the conference experience for the better.

    I will definitely be able to do my job better due to attending this conference. The return on investment is HUGE!

    Read the article

  • TechEd North America 2012–Day 3 #msTechEd #teched

    - by Marco Russo (SQLBI)
    Yesterday I spent the longest day of this TechEd: we talked with many people at Community Night until 9pm, and I have to say that just a few months after Analysis Services 2012 was released, there are already many people using it. And the adoption of PowerPivot is starting to be quite large. Many new ideas and challenges are coming from several different real-world scenarios. I was tired but really happy. Alberto presented his Many-to-Many Relationships in BISM Tabular session, which was in the same time slot as the BI Power Hour. For this reason very few people attended Alberto's session, so I think many will watch the recording (it should be available within a few days). So what about today? I'll spend some time at the Technical Learning Center area (full schedule here), but the most important event today will be Querying multi-billion rows with many to many relationships in SSAS Tabular (xVelocity) at the Private Cloud, Public Cloud and Data Platform Theater in the Technical Learning Center area (next to the SQL Server 2012 zone). Why should you attend? Mainly because you will see a live demo over a 4-billion-row table with many-to-many relationships involved in complex queries. And for those of you who think this is not reason enough to attend a 15-minute fun session, well, we'll also give away some 8GB USB memory keys to those of you who guess the exact response time of queries before execution. Convinced? Join us at 11:15am and don't be late – the session finishes at 11:30am! After that, we'll run a book signing session at the Bookstore at 12:30pm, and I will be in the Technical Learning Center area from 3:00pm until 5:00pm. See you there!

    Read the article

  • Intel z77 vs h77 for intensive compiling, gaming [closed]

    - by Bilal Akhtar
    I'm in the market for a desktop motherboard (preferably ATX) that works well with the Intel i7-3770 Ivy Bridge processor at 3.4 GHz with the LGA1155 socket. That processor is very fast and should handle all my tasks. My question is about the type of motherboard chipset I should choose to accompany it. I plan to use my rig for compiling and developing Debian packages and other OS components, web development, occasional Android apps, chroots, VMs, FlightGear, other gaming but nothing serious, and heavy multitasking, all on Ubuntu. I do NOT plan to overclock, and I never will, so that's not a cause of concern for me. That said, I'm down to three chipset choices: Intel H77, Intel Z68, and Intel Z77. I'm planning to go for the H77 since I don't need any of the new features in the Z77. I don't plan to use a second GPU and I will never overclock my CPU/GPU. My question is: will H77-based motherboards handle all my tasks well? Intel advertises that chipset for "everyday computing", but other sites say its base functionality is the same as the Z77's. Intel rather advertises the Z77 for "serious multitaskers, hardcore gamers and overclocking enthusiasts". But the problem with all the Z77 motherboards I've seen is that they're way too expensive, and their main feature seems to be overclocking, which won't be useful to me. Will I lose any raw CPU/GPU performance or HDD r/w speed with the H77 compared to a Z77? Will heat, etc. be an issue too? From what I've seen, Z77 motherboards have larger heat sinks than H77 ones. Will that be an issue if I go with an H77 motherboard with no heat sinks for the chipset? The CPU will have a fan in both cases, of course. tl;dr: When it comes to CPU/GPU performance and HDD r/w, is the Intel H77 chipset slower than the Z77? I don't care about overclocking or multiple GPUs, and for the processor I'm set on the Ivy Bridge i7-3770.

    Read the article

  • Error during configuring kerberos5 using macports

    - by ario
    While trying to install libmemcached via MacPorts, I hit the following issue:

        libmemcached @0.40 +universal
        ---> Computing dependencies for libmemcached
        ---> Dependencies to be installed: cyrus-sasl2 kerberos5
        ---> Configuring kerberos5
        Error: org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        Error: Failed to install kerberos5

    It tells me to look in the log for details. Here's the last bit of the log file:

        :info:configure checking for setupterm in -lcurses... no
        :info:configure checking for setupterm in -lncurses... no
        :info:configure checking for tgetent... no
        :info:configure configure: error: Could not find tgetent; are you missing a curses/ncurses library?
        :info:configure configure: error: /bin/sh './configure' failed for appl/telnet
        :info:configure Command failed: cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/work/krb5-1.7.2/src" && ./configure --prefix=/opt/local --disable-dependency-tracking --mandir=/opt/local/share/man
        :info:configure Exit code: 1
        :error:configure org.macports.configure for port kerberos5 returned: configure failure: command execution failed
        :debug:configure Error code: NONE
        :debug:configure Backtrace: configure failure: command execution failed
            while executing
        "$procedure $targetname"
        :info:configure Warning: targets not executed for kerberos5: org.macports.activate org.macports.configure org.macports.build org.macports.destroot org.macports.install
        :error:configure Failed to install kerberos5
        :debug:configure Registry error: kerberos5 not registered as installed & active.
            invoked from within
        "registry_active ${subport}"
            invoked from within
        "$workername eval registry_active \${subport}"
        :notice:configure Please see the log file for port kerberos5 for details:
            /opt/local/var/macports/logs/_opt_local_var_macports_sources_rsync.macports.org_release_ports_net_kerberos5/kerberos5/main.log

    It seems to say it's missing ncurses. It looks like it's there though, since if I run port installed I see these:

        ncurses @5.7_0
        ncurses @5.9_1 (active)
        ncursesw @5.7_0

    Any ideas on how to get around this error?

    Read the article
