Search Results

Search found 1008 results on 41 pages for 'kevin hammett'.


  • PHP - Retrieve Data From mySQL Server

    - by Kevin
    Hello, does anyone know how to retrieve a piece of data and display the result in a PHP file? A similar query to the one I would enter is something like this:

        SELECT `email` FROM `users` WHERE `username` = 'bob'

    Thus, the result would be just the email. Thanks, Kevin
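    One way to do this, as a minimal sketch using PHP's mysqli extension (the connection details are placeholders; the table and column names are taken from the question):

        <?php
        // Hypothetical connection details -- replace with your own.
        $db = new mysqli('localhost', 'db_user', 'db_password', 'my_database');
        if ($db->connect_error) {
            die('Connection failed: ' . $db->connect_error);
        }

        // A prepared statement keeps the username value safely escaped.
        $username = 'bob';
        $stmt = $db->prepare('SELECT email FROM users WHERE username = ?');
        $stmt->bind_param('s', $username);
        $stmt->execute();
        $stmt->bind_result($email);

        if ($stmt->fetch()) {
            echo $email;   // displays just the email
        }

        $stmt->close();
        $db->close();
        ?>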

    Read the article

  • ASP.NET 4 Unleashed in Bookstores!

    - by Stephen Walther
    I’m happy to announce that ASP.NET 4 Unleashed is now in bookstores! The book is over 1,800 pages and it is packed with code samples and tutorials on all the features of ASP.NET 4. Given the size of the book – did I mention that it is over 1,800 pages? – I can safely say that it is the most comprehensive book on ASP.NET.

    This edition of the book has several new chapters written by Kevin Hoffman and Nate Dudek. Kevin and Nate did a fantastic job of covering the new features of ASP.NET 4, including:

        The new ASP.NET Chart Control
        The new ASP.NET QueryExtender Control
        The new ASP.NET routing framework
        jQuery

    You can buy the book from your local bookstore or buy the book from Amazon:

    Read the article

  • SQL SERVER – Solution – Puzzle – Statistics are not Updated but are Created Once

    - by pinaldave
    Earlier I asked a puzzle about why statistics are not updated. Read the complete details over here: Statistics are not Updated but are Created Once. In the question I demonstrated that even though statistics should have been updated after lots of inserts into the table, they are not updated. (Read the details in SQL SERVER – When are Statistics Updated – What triggers Statistics to Update.) In this example I created the following situation:

        Create a table
        Insert 1,000 records
        Check the statistics
        Now insert 10 times more records – 10,000 records
        Check the statistics – they will NOT be updated
        Auto Update Statistics and Auto Create Statistics for the database are TRUE

    Now I requested two things in the example: 1) Why is this happening? 2) How do we fix this issue? I received many answers – here is how I fixed it, which has resolved the issue for me. NOTE: There are multiple answers to this problem and I will do my best to list all of them.

    Solution: Create a nonclustered index on the column City.

    Here is the working example for the same. Let us understand this script; there is added explanation at the end.

        -- Execution Plans Difference
        -- Estimated Execution Plan Vs Actual Execution Plan
        -- Create Sample Database
        CREATE DATABASE SampleDB
        GO
        USE SampleDB
        GO
        -- Create Table
        CREATE TABLE ExecTable (ID INT,
            FirstName VARCHAR(100),
            LastName VARCHAR(100),
            City VARCHAR(100))
        GO
        CREATE NONCLUSTERED INDEX IX_ExecTable1 ON ExecTable (City);
        GO
        -- Insert One Thousand Records
        -- INSERT 1
        INSERT INTO ExecTable (ID, FirstName, LastName, City)
        SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Select Statement
        SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
        GO
        --------------------------------------------------------------
        -- Round 2
        -- Insert One Thousand Records
        -- INSERT 2
        INSERT INTO ExecTable (ID, FirstName, LastName, City)
        SELECT TOP 1000 ROW_NUMBER() OVER (ORDER BY a.name) RowID,
            'Bob',
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%2 = 1 THEN 'Smith' ELSE 'Brown' END,
            CASE WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 1 THEN 'New York'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 5 THEN 'San Marino'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 3 THEN 'Los Angeles'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 7 THEN 'La Cinega'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 13 THEN 'San Diego'
                 WHEN ROW_NUMBER() OVER (ORDER BY a.name)%20 = 17 THEN 'Las Vegas'
                 ELSE 'Houston' END
        FROM sys.all_objects a
        CROSS JOIN sys.all_objects b
        GO
        -- Select Statement
        SELECT FirstName, LastName, City FROM ExecTable WHERE City = 'New York'
        GO
        -- Display statistics of the table
        sp_helpstats N'ExecTable', 'ALL'
        GO
        -- Replace your Statistics over here
        DBCC SHOW_STATISTICS('ExecTable', IX_ExecTable1);
        GO
        -- Clean up Database
        DROP TABLE ExecTable
        GO
    When I created the nonclustered index on the column City, it also created statistics on the same column, with the same name as the index. When we populate more data into the column, the index is updated, invalidating the cached execution plan, and this leads to the statistics being updated on the next execution of the SELECT. This behavior does not happen on a heap, or on a column where the statistics were auto-created. If you explicitly update the index, you can often see the statistics are updated as well. You can see that this is for sure what is happening if you follow John Sansom's explanation.

    John Sansom's suggestion: That was fun! Although the column statistics are invalidated by the time the second select statement is executed, the query is not compiled/recompiled; instead the existing query plan is reused. It is the "next" compiled query against the column statistics that will see that they are out of date and will then in turn trigger the update of statistics. You can see this in action by forcing the second statement to recompile.

        SELECT FirstName, LastName, City
        FROM ExecTable
        WHERE City = 'New York'
        OPTION (RECOMPILE)
        GO

    Kevin Cross also has another suggestion: I agree with John. It is reusing the Execution Plan. Aside from OPTION (RECOMPILE), clearing the Execution Plan Cache before the subsequent tests will also work, i.e., run this before round 2:

        -- Clear execution plan cache before next test
        DBCC FREEPROCCACHE WITH NO_INFOMSGS;

    Nice puzzle! Kevin

    As this was a puzzle, John and Kevin both got the correct answer; there was no requirement for the answer to be a best practice. I know John and he is one of the finest DBAs around – his tremendous knowledge has always impressed me. John and Kevin will both agree that clearing the cache with DBCC FREEPROCCACHE, or recompiling each query every time, is for sure not good advice on a production server. It is a correct answer but not a best practice. By the way, if you have a better solution or a better suggestion, please advise. I am open to changing my answer and publishing further improvements to this solution.

    On a very separate note, I like to have a clustered index on my primary key, which I have not mentioned here as it is out of the scope of this puzzle.

    Reference: Pinal Dave (http://blog.SQLAuthority.com)

    Filed under: Pinal Dave, PostADay, Readers Contribution, Readers Question, SQL, SQL Authority, SQL Index, SQL Puzzle, SQL Query, SQL Scripts, SQL Server, SQL Tips and Tricks, T SQL, Technology Tagged: Statistics
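    As a quick way to verify any of the suggested fixes, STATS_DATE reports when each statistics object on a table was last updated. A minimal check against the ExecTable from the script above:

        -- List every statistics object on ExecTable and its last update time
        SELECT s.name AS stats_name,
               STATS_DATE(s.object_id, s.stats_id) AS last_updated
        FROM sys.stats AS s
        WHERE s.object_id = OBJECT_ID('ExecTable');

    Run it after each INSERT/SELECT round to see exactly when the statistics on City catch up.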

    Read the article

  • Silverlight Cream for February 26, 2011 -- #1052

    - by Dave Campbell
    In this Issue: Mark Monster, Gill Cleeren, Pencho Popadiyn, Kevin Dockx, Joost van Schaik, Jesse Liberty, John Papa, Jeremy Likness, Arik Poznanski(-2-), Page Brooks, Deborah Kurata, Mike Snow, Alfred Astort, Samuel Jack, XAMLNinja, and Shawn Wildermuth.

    Above the Fold:
      Silverlight: "Asynchronous Callbacks with Rx" - Jesse Liberty
      WP7: "Phoney Windows Phone 7 Project Now Available!" - Shawn Wildermuth
      MVVM: "Validating our ViewModel" - Mark Monster

    Shoutouts: Shawn Wildermuth has a video up of his FadingMessage class to show it off: Introducing Phoney's FadingMessage Class

    From SilverlightCream.com:

    Validating our ViewModel
    Mark Monster discusses Validation in his latest post... using INotifyDataErrorInfo and his own implementation of a ViewModel base that supports it and INPC.

    Getting ready for Microsoft Silverlight Exam 70-506 (Part 7)
    Gill Cleeren hits part 7 of his series at SilverlightShow on a great walk through Silverlight and getting ready for the exam. This is the final part and concentrates on deploying apps.

    Windows Phone 7 – Creating Custom Keyboard
    Pencho Popadiyn has a post at SilverlightShow discussing problems with WP7 keyboards in his native Bulgaria, and his solution to the problem... create his own.

    360 Degrees Feedback by Kevin Dockx
    Kevin Dockx produced a white paper for his company about an employee review solution they did in Silverlight. The white paper is available, and SilverlightShow interviewed Kevin to answer questions about the app.

    Extended Windows Phone 7 page for handling rotation, focused element updates and back key press
    Looks like Joost van Schaik has a few posts I've missed... and I'm not going to get to them all today! ... this one is about the base class he uses for WP7 apps... a bunch of utilities he uses... definitely worth a look (and a take).

    Asynchronous Callbacks with Rx
    Jesse Liberty has his 8th post in the Rx series up and this one's on Asynchronous Callbacks... if you haven't seen this before, you should definitely look into it... cool stuff, Jesse!

    Silverlight TV 63: Exploring National Instruments' App Using Data and Business Features
    John Papa has Silverlight TV number 63 up and is talking to Steve Lasker about National Instruments and their LabVIEW product. Great demo and discussion.

    Jounce Part 11: Debugging MEF
    Jeremy Likness's latest (number 11) in his series on his MVVM framework Jounce is out, and he's discussing how to debug MEF, which Jounce handles nicely through the logging it provides... and you can use it externally to Jounce.

    Get Twitter Trends on Windows Phone 7
    Arik Poznanski has a couple of Twitter for WP7 posts up... the first pulls Twitter trends from whatthetrend.com... plus the code to do it.

    Searching Twitter on Windows Phone 7
    In his next post, Arik Poznanski shows how to search Twitter from your WP7 ... again with code.

    Tiled Background Control in Silverlight
    Page Brooks shows how to get a tiled background control in Silverlight ... did you know there was one in the JetPack theme?

    Silverlight Charting: Displaying Data Above the Column
    Deborah Kurata continues her charting posts with this one displaying the column value above the column. I like this... it has a clean look and all the data is available at a glance.

    Silverlight: Tasks on the Win7 Mobile Phone
    Mike Snow has a list of the WP7 tasks available and an example of using them... looks like a pretty good reference!

    10 of 10 - Aesthetics and alignment matter
    Alfred Astort discusses aesthetics and WP7 dev... looks like it's the same as any app development, but if you're not doing it, you should be.

    Simon Squared – We have Multi-player: Days 4, 5 and (ahem!) 6
    Samuel Jack details the completion of his multi-player game for WP7 utilizing Azure, in the hour-by-hour detail he's done the rest... plus a video of the final product!

    Who ate all the pies!!
    XAMLNinja has a very good discussion/link set of charting posts, all leading up to a portrait-only version of charting for WP7 with labels that looks great.

    Phoney Windows Phone 7 Project Now Available!
    Shawn Wildermuth has a collection of classes he always uses with WP7 dev, and he's sharing them with all of us as a "Phoney" Tools project on CodePlex... and now has a NuGet package also.

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Silverlight Cream for April 09, 2010 -- #835

    - by Dave Campbell
    In this Issue: Tim Heuer, smartyP, and Kevin Moore.

    From SilverlightCream.com:

    Using XNA libraries in your Silverlight Windows Phone 7 applications
    Tim Heuer has a post up using XNA on WP7 to hook up sound to a 'normal' Silverlight WP7 app... so there ya go!

    Example Pivot Control for Windows Phone 7
    smartyP acknowledges that he said he was done with the Pivot control for WP7, and yes, he realizes we're most likely going to get one from Microsoft, but just like the rest of us, he just couldn't leave it alone :)

    Bag of Tricks Update (two years in the making)
    I found this via Cool view transitions using ZapScoller by Rudi Grobler, and it points at Kevin Moore's Bag of Tricks update for Silverlight 4 and WPF ... just the fact that Robby Ingebretsen is using it means we should all rush to CodePlex and absorb it :)

    Stay in the 'Light!
    Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream
    Join me @ SilverlightCream | Phoenix Silverlight User Group
    Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Dartisans Ep. 7 - Dart news and special guests

    In this episode of Dartisans, we talk to Kevin Moore and Bob Nystrom about the latest language changes as Dart heads to M1. We'll also chat about Dart's package manager and Kevin's proposed package structure. If we're lucky, we'll get a sneak peek at the Seattle Dart Hackathon (on July 14th) and Bob's Dart talk at OSCON. Learn more at www.dartlang.org

    From: GoogleDevelopers | Time: 42:04

    Read the article

  • Go Big or Go Special

    - by Ajarn Mark Caldwell
    Watching Shark Tank tonight, the first presentation was by Mango Mango Preserves, and it highlighted an interesting contrast in business trends today and how to capitalize on opportunities. <Spoiler Alert> Even though every one of the sharks was raving about the product samples they tried, with two of them going for second and third servings, none of them made a deal to invest in the company. </Spoiler Alert> In fact, one of the sharks, Kevin O’Leary, kept ripping into the owners with statements to the effect that he thinks they are headed over a financial cliff because he felt their costs were way out of line and would be their downfall if they didn’t take action to radically cut costs. He said that he had previously owned a jams and jellies business and knew the cost ratios that you had to have to make it work. I don’t doubt he knows exactly what he’s talking about and is 100% accurate… for doing business his way, which I’ll call “Go Big”. But there’s a whole other way to do business today that would be ideal for these ladies to pursue.

    As I understand it, based on his level of success in various businesses and the fact that he is even in a position to be investing in other companies, Kevin’s approach is to go mass market (Go Big) and make hundreds of millions of dollars in sales (or something along that scale) while squeezing out every ounce of cost that you can to produce an acceptable margin. But there is a very different way of making a very successful business these days, which is all about building a passionate and loyal community of customers that are rooting for your success and even actively trying to help you succeed by promoting your product or company (Go Special). This capitalizes on the power of social media, niche marketing, and The Long Tail. One of the most prolific writers about capitalizing on this trend is Seth Godin, and I hope that the founders of Mango Mango pick up a couple of his books (probably Purple Cow and Tribes would be good starts) or at least read his blog. I think the adoration expressed by all of the sharks for the product is the biggest hint that they have a remarkable product and that they are perfect for this type of business approach.

    Both are completely valid business models, and it may certainly be that the scale at which Kevin O’Leary wants to conduct business where he invests his money is well beyond the long tail, but that doesn’t mean that there is not still a lot of money to be made there. I wish them the best of luck with their endeavors!

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows.

    What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem.

    In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles.

    A simplified example.

    We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well.

    What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion.

    What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables, so how could it do any better?
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all. (We just go up to 45,000 rows, since we know the bigger picture now.)

    Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8,000 rows, and then there’s not much in it up to 100,000 rows. Past the 8,000-row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join.

    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable Heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null.

    The results show a huge variation, between the sublime and the gorblimey:

        Both Table Variables, each with a primary key on the join column: 33 ms
        Both Table Variables, one with a primary key and one a heap: 46 ms
        Both Table Variables, each with a unique constraint: 36 ms
        Both Table Variables, neither with a primary key: 116,383 ms. Yes, nearly two minutes!!
        One Table Variable and one temporary table, both with primary keys: 113 ms
        Both temporary tables, one with a primary key: 56 ms
        Both temporary tables, each with a primary key: 46 ms

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables, but where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you’ve done even better, down to half that time. Here are the text execution plans.

    So where have we got to? Without drilling down into the minutiae of the execution plans we can begin to create a hypothesis. If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either you need to have a primary key on the column you are using to join on, or else you need to use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble – well, slow queries – unless you add the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew

    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it: the following test showed that once the table had 54 sequential rows in it, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems to the performance of joins in Table Variables due to the extreme skew of the distribution. We’ve all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required. They require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as ‘heaps’, unindexed. Beware, however, since, as the number of rows increases, joins on Table Variable heaps can easily become saddled by very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead.

    Table Variables require some awareness by the developer about the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.
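    To make the conclusions concrete, here is a minimal sketch of the join pattern discussed above (the table and column names are illustrative, echoing the Dracula test):

        -- Keyed table variables: the implicit index makes the join behave well.
        DECLARE @CommonWords TABLE (word VARCHAR(40) PRIMARY KEY);
        DECLARE @BookWords   TABLE (word VARCHAR(40) PRIMARY KEY);

        -- ... populate both table variables here ...

        -- Uncommon words: everything in the book with no match in the common list.
        -- If the tables had to remain heaps, the OPTION (RECOMPILE) hint gives
        -- the optimizer a realistic row count at execution time.
        SELECT b.word
        FROM @BookWords AS b
        LEFT OUTER JOIN @CommonWords AS c ON b.word = c.word
        WHERE c.word IS NULL
        OPTION (RECOMPILE);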

    Read the article

  • Google I/O 2010 - Keynote Day 1

    Video footage from the Day 1 keynote at Google I/O 2010.

    Speakers:
      Vic Gundotra, Engineering Vice President, Google
      Sundar Pichai, Vice President, Product Management, Google
      Charles Pritchard, Founder, MugTug
      Jim Lanzone, CEO, Clicker
      Mike Shaver, VP Engineering, Mozilla Corporation
      Håkon Wium Lie, CTO, Opera Software
      Kevin Lynch, CTO, Adobe Systems
      Terry McDonell, Editor, Sports Illustrated Group
      Lars Rasmussen, Manager, Google Wave
      David Glazer, Engineering Director, Google
      Paul Maritz, President & CEO, VMware
      Ben Alex, Senior Staff Engineer, SpringSource Division of VMware
      Bruce Johnson, Engineering Director, Google
      Kevin Gibbs, Software Engineer, Google

    For all I/O 2010 sessions, please go to code.google.com

    From: GoogleDevelopers | Time: 02:05:08

    Read the article

  • Azure Table Storage Creation using Nov 2009 CTP

    - by kaleidoscope
    The new SDK introduces a new class:

    CloudTableClient: this new class enables us to create tables and test for the existence of tables. We need not use this class for querying table storage; it's more of an administrative class for dealing with table storage itself.

    Once we have got the account key and the account name from ConfigurationSetting, we can create an instance of the storage credentials and table client classes:

        StorageCredentialsAccountAndKey creds =
            new StorageCredentialsAccountAndKey(accountName, accountKey);
        CloudTableClient tableStorage = new CloudTableClient(tableBaseUri, creds);
        CustomerContext ctx = new CustomerContext(tableBaseUri, creds);
        // where tableBaseUri is the TableStorageEndpoint obtained from ConfigurationSetting

    Using the table storage class, we can now create a new table (if it doesn't already exist):

        if (tableStorage.CreateTableIfNotExist("Customers"))
        {
            CustomerRow cust = new CustomerRow("AccountsReceivable", "kevin");
            cust.FirstName = "Kevin";
            cust.LastName = "Hoffman";
            ctx.AddObject("Customers", cust);
            ctx.SaveChanges();
        }

    For a complete article on this topic please follow this link: http://dotnetaddict.dotnetdevelopersjournal.com/azure_nov09_tablestorage.htm

    Tinu, O

    Read the article

  • Mac Port error installing gsoap

    - by Kevin
    Hi All, I have installed MacPorts v1.8.1 with no worries. I ran sudo port -v selfupdate with no worries. I ran sudo port install gsoap and get the following error message:

        --- Computing dependencies for gsoap
        --- Fetching gsoap
        --- Attempting to fetch gsoap_2.7.13.tar.gz from http://optusnet.dl.sourceforge.net/gsoap2
        --- Verifying checksum(s) for gsoap
        --- Extracting gsoap
        --- Applying patches to gsoap
        --- Configuring gsoap
        Error: Target org.macports.configure returned: configure failure: shell command " cd "/opt/local/var/macports/build/_opt_local_var_macports_sources_rsync.macports.org_release_ports_devel_gsoap/work/gsoap-2.7" && ./configure --prefix=/opt/local --enable-samples " returned error 77
        Command output:
        checking for a BSD-compatible install... /usr/bin/install -c
        checking whether build environment is sane... yes
        checking for gawk... no
        checking for mawk... no
        checking for nawk... no
        checking for awk... awk
        checking whether make sets $(MAKE)... no
        checking build system type... i386-apple-darwin10.2.0
        checking host system type... i386-apple-darwin10.2.0
        checking whether make sets $(MAKE)... (cached) no
        checking for C++ compiler default output file name...
        configure: error: C++ compiler cannot create executables
        See `config.log' for more details.
        Error: Status 1 encountered during processing.

    Any ideas as to why it is failing? Regards, Kevin

    Read the article

  • Domino nchronos.exe multiple instances causing server to die, and Sametime problems

    - by Kevin
    I've had this problem for a few months now. I thought it started when I installed the Traveller software on the server to add ActiveSync support, but I removed that and the problem still persists. Basically, new instances of "nchronos.exe" keep spawning (and not ending), so over a period of a few days the server eventually gets drowned in nchronos.exe processes, stops responding, and I need to kill Domino. My process count the last time was up at about 330, and when I killed it and restarted Domino my process count went to 160. I'm running Domino 8.5.1 with Fix Pack 2. I don't know if it's relevant, but my Domino server was also acting as a Sametime server. At around the same time that nchronos started playing up, Sametime also stopped working. None of my users can connect to Sametime, and in the Domino log it keeps telling me "stpolicy.exe" has terminated. I've googled for that and tried a few things, but nothing seems to make Sametime work again. Any thoughts?? Cheers, Kevin

    Read the article

  • SharePoint Backup/Restore without stsadm

    - by Kevin
    Due to problems we found with the restore of sites/site collections using stsadm (our tasks generated from workflows were not restored), we've taken a different route for backup/restore. We plan a major customization to our SP site and want to take a backup so we can roll back in case the install fails. In our System Testing (not production) environment, we've backed up the 12 hive, the virtual directories that IIS points to for SharePoint, and the SharePoint databases in SQL (using SQL Server to do the db backups).

    We have custom event handlers and workflows built with Visual Studio, and deploy the dlls to the GAC as version 2 (signed and versioned in Visual Studio). So when we deploy, the GAC will contain 2 versions of the workflows – version 1 and version 2. During the deploy we use SP stsadm features to install/activate the WFs. We also go to each library and add the new, version 2 WFs. This automatically sets the version 1 WFs to "Not Allow" new instances (which is what we want) and the version 2 as active – perfect so far.

    When we've completed the install, we then assume a failure and attempt to restore to the same machines (SharePoint on one server, SQL on another). We start by uninstalling the version 2 WFs from the GAC, reset IIS (to clear the cache of these version 2 WF dlls), restore the 12-hive and virtual directory folders, then restore the SQL dbs. This is all just as manual as you read it – no stsadm here. All seems to work after our restore; it appears the restore was successful – the mods we made to column names, data changes, etc. during the install are all reverted back to the original pre-install state. With one exception: when we run a workflow, it always fails, and the logs in the 12-hive indicate the WF is still trying to use version 2 of the dll (System.IO file not found error).

    We think we've backed up and restored all the moving pieces of SharePoint, but we're missing something here. Does anybody have any ideas why the version 2 WF dlls are still being referenced even though we restored all the folders and dbs of SharePoint? Thanks, Kevin

    Read the article

  • MySQL -- enable connection to remote server via local /tmp/mysql.sock

    - by Kevin
    Hey all, I run a shared hosting provider and we're looking to move to a High Availability (replicated across multiple datacenters) setup for our hosting. We have created a replicated MySQL setup with failover that works wonderfully, and we'd like to move all of our clients' databases to it. The only trouble is that we have many, many customers, all of whom have configured their Wordpress, Drupal, etc. installations to connect to MySQL via a local socket, not to the address of the remote server. I would hate to have to go through manually and change the connection statement in all of our clients' sites.

    What I'd ideally love to see is a program that listens on /tmp/mysql.sock and forwards connections there to the remote server I specify. I've seen SQL Relay, but it seems to require that I hardcode all of the database names and usernames and passwords into its configuration file. This is not going to work for me because our users add new databases dynamically all of the time, and I'd rather not have to write code to update SQL Relay's config file every time. Does anyone have an idea on how to do this? Alternatively, I'd accept ideas on how to handle this at the PHP level (i.e. redirect any attempted calls to mysql_connect() to use that hostname rather than localhost). Thanks, Kevin
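    One possibility, as an untested sketch (the hostname is a placeholder): socat can listen on the socket path the clients already use and forward each connection to the remote server:

        # Listen on the path MySQL clients already use and forward every
        # connection to the remote replicated MySQL server.
        socat UNIX-LISTEN:/tmp/mysql.sock,fork,unlink-early \
              TCP:mysql-master.example.com:3306

    The fork option lets socat serve concurrent connections; authentication still happens against the remote server, so its grants must accept users arriving from the proxy host.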

    Read the article

  • Preventing h/w RAID cards from dropping slow JBOD disks

    - by Kevin
    I'm considering buying a used SAS h/w RAID card for externally attaching HDDs to an HP ProLiant I'm setting up. However, I only require RAID functionality on some of the drives. Theoretically it should be simple to JBOD the other drives, but some of them are inexpensive SATA disks and probably cannot have TLER disabled. I'd like to know, prior to actually ordering a RAID card, whether typically RAID cards would still enforce dropping of disks that do not respond within a few seconds, even if the disk is in a JBOD, and whether there is any way to disable this. Ideally it would be nice to be able to select certain SAS ports that will be pass-through, bypassing the RAID engine entirely and just acting as an HBA for those ports. I know I could buy a separate SAS HBA but that seems like a waste of $ and is also impractical as it's a 1U server so space is extremely limited. My question then is whether the functionality I'm looking for (pass-through on certain ports or at least JBOD drives not getting themselves dropped due to slow response) is typical of proper h/w RAID cards such as PERC 5/E etc. I've browsed through the latter's manual but unfortunately, as with most user manuals, it states the obvious and doesn't state the unobvious. Thanks for any info, Kevin

    Read the article

  • Postfix relay all mail through SES except for one sending domain / address

    - by Kevin
    I'm thinking this is really really super simple, but I can't figure out what I need to do. I don't mess with Postfix much (just let it run and do its thing), so I've got no idea where to even start with this. We have Postfix currently configured to relay all mail out through SES using the config below. We need to modify this so that emails sent from one of our domains (domain.com) DO NOT go through SES. Everything else should continue to flow out through the SES connection. I'm assuming this is like a one-line thing, but my google skills are not helping me at all.

        relayhost = email-smtp.us-east-1.amazonaws.com:25
        smtp_sasl_auth_enable = yes
        smtp_sasl_security_options = noanonymous
        smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
        smtp_use_tls = yes
        smtp_tls_security_level = encrypt
        smtp_tls_note_starttls_offer = yes
        smtp_tls_CAfile = /etc/ssl/certs/ca-certificates.crt
        smtp_destination_concurrency_limit = 450

    Update: I have created a sender_transport file in /etc/postfix. In it is:

        @domain.com smtp:

    I then ran this through postmap and placed

        sender_dependent_default_transport_maps = hash:/etc/postfix/sender_transport

    above the block of config shown earlier, and restarted Postfix, but still all email is going out through SES.

    Log after sending:

        Oct 22 14:38:48 web postfix/smtp[19446]: 4B19D640002: to=<[email protected]>, relay=email-smtp.us-east-1.amazonaws.com[54.243.47.187]:25, delay=1.4, delays=0.01/0/0.92/0.44, dsn=2.0.0, status=sent (250 Ok 00000141e21b181f-ee6f7c4f-f0f5-4b0f-ba69-2db146a4f988-000000)
        Oct 22 14:38:48 web postfix/qmgr[19435]: 4B19D640002: removed

    I don't think this log is what you're looking for, but it's the only thing that is logged when mail goes out, and this is with me running /usr/sbin/postfix -v start manually and not with the init script.

    Read the article

  • How to handle JSON response using SBJSON iPhone?

    - by Jay Mehta
    I am receiving the response below from my web service. Does anyone have an idea how to handle it using SBJSON?

        {
          "match_details" : {
            "score" : 86-1
            "over" : 1.1
            "runrate" : 73.71
            "team_name" : England
            "short_name" : ENG
            "extra_run" : 50
          }
          "players" : {
            "key_0" : {
              "is_out" : 2
              "runs" : 4
              "balls" : 2
              "four" : 1
              "six" : 0
              "batsman_name" : Ajmal Shahzad *
              "wicket_info" : not out
            }
            "key_1" : {
              "is_out" : 1
              "runs" : 12
              "balls" : 6
              "four" : 2
              "six" : 0
              "batsman_name" : Andrew Strauss
              "wicket_info" : c. Kevin b.Kevin
            }
            "key_2" : {
              "is_out" : 2
              "runs" : 20
              "balls" : 7
              "four" : 4
              "six" : 0
              "batsman_name" : Chris Tremlett *
              "wicket_info" : not out
            }
          }
          "fow" : {
            "0" : 40-1
          }
        }

    (Note that, as pasted, this is not valid JSON – the string values are unquoted and there are no commas between pairs.) I have done something like this:
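    For the parsing side, a minimal SBJsonParser sketch (the original snippet is cut off above; this one assumes the feed has first been corrected into valid JSON, with jsonString standing in for the response text):

        // Parse the response string with SBJSON's parser class.
        SBJsonParser *parser = [[SBJsonParser alloc] init];
        NSDictionary *response = [parser objectWithString:jsonString];
        if (response == nil) {
            NSLog(@"JSON parse error: %@", parser.errorTrace);
        } else {
            NSDictionary *match = [response objectForKey:@"match_details"];
            NSLog(@"Team: %@", [match objectForKey:@"team_name"]);
        }
        [parser release];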

    Read the article

  • jquery buttonset()

    - by Kevin
    Hi - I'm using the jquery buttonset() on a set of radio buttons (to tart them up). I'd like to be able to set the selected radio button on another user event. I've been looking into this, and while I can set the selected radio button, I cannot (easily) update the UI to indicate what the selected radio button is. From what I can tell, I need to call this to set the radio button at index n to be checked:

        $('input[name="transactionsRadio"]')[n].checked = true;

    And then do some convoluted jquery selector calls to remove the ui-state-active class from one label and apply it to the new label. Is this really the most optimal way to do this? I had expected an equivalent method to the 'activate' method that is available for the jquery Accordion control. Any more elegant solution would be appreciated! Thanks, Kevin.
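    For what it's worth, a minimal sketch of the tidier route (the #transactions container id is a placeholder): check the underlying radio, then ask the widget to repaint itself with its refresh method rather than juggling ui-state-active classes by hand:

        // Check the radio at index n, then have buttonset re-read the DOM.
        $('input[name="transactionsRadio"]').eq(n).attr('checked', true);
        $('#transactions').buttonset('refresh');

    refresh re-applies the widget's classes from the current checked state, which covers the label styling the manual approach would otherwise have to do.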

    Read the article

  • iPhone SDK mySQL

    - by Kevin
    Hello everybody, I'm starting on a new iPhone project, and this application mostly relies on MySQL. I have a MySQL database running on my computer, and I need this application to send queries to the server to get information. One example is creating and logging into an account. I have successfully done this in my Windows VB.NET application, but I know that Objective-C will be harder. This is basically getting text from the SQL database and displaying it in the iPhone application. If someone could simply help me with that, I'll easily get started. So I hope that you guys help me out here :). Thanks Kevin

    Read the article

  • Is anyone else receiving a QUOTA_EXCEEDED_ERR on their iPad when accessing localStorage?

    - by Kevin
    I have a web application written in JavaScript that runs successfully on the desktop via Safari as well as on the iPhone. We are looking at porting this application to the iPad and we are running into a problem where we are seeing QUOTA_EXCEEDED_ERR when storing a relatively small amount of data within the localStorage on the device. I know what this error means, but I just don't think I'm storing all that much data. Is anyone else doing something similar? And seeing/not seeing this problem? Kevin...
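    One rough way to see where the limit bites is to probe it with a try/catch (a sketch; the key names are arbitrary, and sizes are approximate since JavaScript strings are two bytes per character):

        // Write 1 KB chunks until the quota exception fires.
        var chunk = new Array(1025).join('x');   // 1024 'x' characters
        var i = 0;
        try {
            for (;;) {
                localStorage.setItem('probe' + i, chunk);
                i++;
            }
        } catch (e) {
            // Safari reports this as QUOTA_EXCEEDED_ERR.
            alert('Stored about ' + i + ' KB before: ' + e.name);
        } finally {
            // Clean up the probe keys afterwards.
            for (var j = 0; j <= i; j++) {
                localStorage.removeItem('probe' + j);
            }
        }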

    Read the article

  • URLCache - iPhone SDK

    - by Kevin
    Hello everyone. I need some help in using NSURLCache to download a simple file, which would need to be saved in the Documents folder of the app. I have seen the URLCache example that Apple presented, but I didn't find it useful since it was an image that was being shown, and in addition there was some complex code on the modification date. What I actually need is for the download process to begin when the application opens. Can someone guide me through just the download process of any file? Thanks Kevin
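    If it helps, here is a minimal synchronous sketch (the URL and file name are placeholders) that saves a downloaded file into the Documents folder; for anything large you would switch to an asynchronous NSURLConnection with delegate callbacks so the UI does not block:

        // Fetch the file (synchronously, for brevity).
        NSURL *url = [NSURL URLWithString:@"http://example.com/somefile.pdf"];
        NSData *fileData = [NSData dataWithContentsOfURL:url];

        // Build a path inside the app's Documents folder.
        NSArray *paths = NSSearchPathForDirectoriesInDomains(
            NSDocumentDirectory, NSUserDomainMask, YES);
        NSString *filePath = [[paths objectAtIndex:0]
            stringByAppendingPathComponent:@"somefile.pdf"];

        // Write it out.
        if (fileData != nil && [fileData writeToFile:filePath atomically:YES]) {
            NSLog(@"Saved to %@", filePath);
        }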

    Read the article

  • Converting NSData to NSString returns null – I need your help

    - by kevin
    *cipher.h, cipher.m full code: http://watchitlater.com/blog/2010/02/java-and-iphone-aes-interoperability

    Cipher.m:

        - (NSData *)encrypt:(NSData *)plainText {
            return [self transform:kCCEncrypt data:plainText];
        }

    Step 1:

        Cipher *cipher = [[Cipher alloc] initWithKey:@"1234567890"];
        NSData *input = [@"kevin" dataUsingEncoding:NSUTF8StringEncoding];
        NSData *data = [cipher encrypt:input];

    NSLog prints the data variable as: <4d1c4d7f 1592718c fd588cec 84053e35

    Step 2:

        NSString *changeVal = [[NSString alloc] initWithData:data encoding:NSUTF8StringEncoding];

    NSLog prints changeVal as: null

    Converting the NSData to an NSString returns null, and I want to transfer the value over an NSURLConnection. I need your help.
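    The null is expected rather than a bug: the encrypted bytes are arbitrary binary, not valid UTF-8, so initWithData:encoding: cannot decode them and returns nil. The usual fix is to transport an encoding of the bytes instead -- for example a hex string (a minimal sketch; Base64 would serve equally well):

        // Cipher text is binary, not UTF-8 -- encode it as hex for transport.
        NSData *data = [cipher encrypt:input];
        const unsigned char *bytes = [data bytes];
        NSMutableString *hex = [NSMutableString stringWithCapacity:[data length] * 2];
        for (NSUInteger i = 0; i < [data length]; i++) {
            [hex appendFormat:@"%02x", bytes[i]];
        }
        // hex is now safe to send in an NSURLConnection request.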

    Read the article
