Search Results

Search found 17192 results on 688 pages for 'geeks with blogs'.


  • Neo4J and Azure and VS2012 and Windows 8

    - by Chris Skardon
    Now, I know that this has been written about, but both of the main places (http://www.richard-banks.org/2011/02/running-neo4j-on-azure.html and http://blog.neo4j.org/2011/02/announcing-neo4j-on-windows-azure.html) utilise VS2010, and well, I’m on VS2012 and Windows 8. Not that I think Win 8 had anything to do with it really, anyhews! I’m going to begin from the beginning. This is my first foray into running something on Azure, so it’s been a bit of a learning curve. But luckily the Neo4j guys have got us started, so let’s download the VS2010 solution: http://neo4j.org/get?file=Neo4j.Azure.Server.zip

    The other thing we’ll need is the VS2012 Azure SDK, so let’s get that as well: http://www.windowsazure.com/en-us/develop/downloads/ (I just did the full install). Now, unzip the VS2010 solution and let’s open it in VS2012: <your location>\Neo4j.Azure.Server\Neo4j.Azure.Server.sln

    One-way-upgrade? Yer! Ignore the migration report – we don’t care! Let’s build that sucker… Ahhh, 14 errors: "WindowsAzure does not exist in the namespace ‘Microsoft’". Not a problem, right? We’ve installed the SDK, we just need to update the references. We can ignore the Test projects (they don’t use Azure); we’re interested in the other projects. So what we’ll do is remove the broken references and add the correct ones: expand the References section of each project, hunt out those yellow exclamation marks, and delete them! You’ll need to add the right ones back in (listed below). When you go to the ‘Add Reference’ dialog, make sure you have ‘Assemblies’ and ‘Framework’ selected before you search (and search for ‘microsoft.win’ to narrow it down). The references you need for each project are:

    CollectDiagnosticsData
    - Microsoft.WindowsAzure.Diagnostics
    - Microsoft.WindowsAzure.StorageClient

    Diversify.WindowsAzure.ServiceRuntime
    - Microsoft.WindowsAzure.CloudDrive
    - Microsoft.WindowsAzure.ServiceRuntime
    - Microsoft.WindowsAzure.StorageClient

    Right, so let’s build again… Sweet! No errors.

    Now we need to set up our blobs. I’m assuming you are using the most up-to-date Java you happened to have downloaded :) – in my case that’s JRE7, and that is located in: C:\Program Files (x86)\Java\jre7. So, zip up that folder into whatever you want to call it (I went with jre7.zip) and stick it in a temp folder for now. In that same temp folder I also copied the Neo4j zip I was using: neo4j-community-1.7.2-windows.zip

    OK, now we need to get these into our Blob storage, and this is where a lot of stuff becomes unstuck – I didn’t find any applications that helped me use the blob storage: one would crash (because my internet speed is so slow) and the other just didn’t work. Sure, it looked like it had worked, but when push came to shove it didn’t. So this is how I got my files into Blob (local first):

    1. Run the ‘Storage Emulator’ (just search for that in the Start menu).
    2. That takes a little while to start up, so fire up another instance of Visual Studio in the meantime, and create a new Console Application.
    3. Manage NuGet Packages for that solution and add ‘Windows Azure Storage’.

    Now you’re set up to add the code:

    using System;
    using System.IO;
    using Microsoft.WindowsAzure;
    using Microsoft.WindowsAzure.StorageClient;

    class Program
    {
        public static void Main()
        {
            CloudStorageAccount cloudStorageAccount = CloudStorageAccount.DevelopmentStorageAccount;
            CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
            client.Timeout = TimeSpan.FromMinutes(30);
            CloudBlobContainer container = client.GetContainerReference("neo4j"); // This will create it as well

            UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
            UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");
        }

        private static void UploadBlob(CloudBlobContainer container, string blobName, string filename)
        {
            CloudBlob blob = container.GetBlobReference(blobName);

            using (FileStream fileStream = File.OpenRead(filename))
                blob.UploadFromStream(fileStream);
        }
    }

    This will upload the files to your local storage account (to switch to an Azure one, you’ll need to create a storage account and use those credentials when you make your CloudStorageAccount above – there’s a sketch of that at the end of this post). To test you’ve got them uploaded correctly, go to: http://localhost:10000/devstoreaccount1/neo4j/jre7.zip and you will hopefully download the zip file you just uploaded.

    Now that those files are there, we are ready for some final configuration… Right-click on the Neo4jServerHost role in the Neo4j.Azure.Server cloud project, click on the ‘Settings’ tab, and we’ll need to make some changes. By default, the 1.7.2 edition of Neo4j unzips to: neo4j-community-1.7.2. So we need to update all the ‘neo4j-1.3.M02’ directories to be ‘neo4j-community-1.7.2’, and we also need to update the Java runtime location. I also changed the Endpoints settings to be HTTP (from TCP) and to have a port of 7410 (mainly because that’s straight down on the numpad).

    The last ‘gotcha’ is some hard-coded consts, which had me looking for ages. They are in the ‘ConfigSettings’ class of the ‘Neo4jServerHost’ project, and the ones we’re interested in are:

    - Neo4jFileName
    - JavaZipFileName

    Change those both to what they should be.

    OK, nearly there (I promise)! Run the ‘Compute Emulator’ (same deal with the Start menu). In your system tray you should have an Azure icon; when the compute emulator is up and running, right-click on the icon and select ‘Show Compute Emulator UI’.

    The last steps! Make sure the ‘Neo4j.Azure.Server’ cloud project is set up as the start project and let’s hit F5. Tension mounts, the build takes place (you need to accept the UAC warning) and VS does its stuff. If you look at the Compute Emulator UI you’ll see some log stuff (which you’ll need if this goes awry – but it won’t, don’t worry!). In a bit, the console and a Java window will pop up. Then the console will bog off, leaving just the Java one, and if we switch back to the Compute Emulator UI and scroll up we should be able to see a line telling us the port number we’ve been assigned (in my case 7411). (If you can’t see it, don’t worry: press CTRL+A on the emulator, then CTRL+C, copy all the text and paste it into something like Notepad, then just do a Find for ‘port’ – you’ll soon see it.)

    Go to your favourite browser and head to: http://localhost:YOURPORT/ and you should see the WebAdmin!

    See you on the cloud side hopefully!

    Chris

    PS Other gotchas! I’ve been caught out a couple of times:

    - I had an instance of Neo4j running as a service on my machine; the Azure instance wanted to run the HTTPS version of the server on the same port the service was running on, and so Java would complain that the port was already in use.
    - The first time I converted the project, it didn’t update the version of the Azure library to load in the App.Config of the Neo4jServerHost project, and VS would throw an exception saying it couldn’t find the Azure DLL version 1.0.0.0.
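    For completeness, here’s a minimal sketch (my addition, not part of the original walkthrough) of what the upload code looks like when pointed at a real Azure storage account instead of the emulator. The account name and key are placeholders you’d replace with values from the Azure portal, and the CreateIfNotExist() call is there on the assumption that the container may not exist yet:

    // Hypothetical account name; paste in your own account name and key from the Azure portal.
    string connectionString =
        "DefaultEndpointsProtocol=https;AccountName=myneo4jstorage;AccountKey=<your-account-key>";

    CloudStorageAccount cloudStorageAccount = CloudStorageAccount.Parse(connectionString);
    CloudBlobClient client = cloudStorageAccount.CreateCloudBlobClient();
    client.Timeout = TimeSpan.FromMinutes(30);

    CloudBlobContainer container = client.GetContainerReference("neo4j");
    container.CreateIfNotExist(); // make sure the container is there before uploading

    UploadBlob(container, "jre7.zip", "c:\\temp\\jre7.zip");
    UploadBlob(container, "neo4j-community-1.7.2-windows.zip", "c:\\temp\\neo4j-community-1.7.2-windows.zip");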

    Read the article

  • Using HTML5 Today part 2 – Fixing Semantic tags with a Shiv

    - by Steve Albers
    Semantic elements and the Shiv! This is the second entry in the series of demos from the “Using HTML5 Today” talk. For the definitive discussion on unknown elements and the HTML5 Shiv, check out Mark Pilgrim’s Dive Into HTML5 online book at http://diveintohtml5.info/semantics.html#unknown-elements

    Semantic tags increase the meaning and maintainability of your markup, help make your page more computer-readable, and can even provide opportunities for libraries that are written to automagically enhance content using standard tags like <nav>, <header>, or <footer>.

    Legacy IE issues

    However, new HTML5 tags get mangled in IE browsers prior to version 9. To see this in action, consider this bit of HTML code which includes the new <article> and <header> elements. Viewing this page using the IE9 developer tools (F12), we see that the browser correctly models the hierarchy of tags listed above. But if we switch to IE8 Browser Mode in developer tools, things go bad: Did you know that a closing tag could close itself?? The browser loses the hierarchy & closes all of the new tags. The new tags become unusable and the page structure falls apart. Additionally, block-level elements lose their block status, appearing as inline.

    The Fix (good)

    The block-level issue can be resolved by using CSS styling. Below we set the article, header, and footer tags as block tags:

    article, header, footer {display:block;}

    You can avoid the unknown element issue by creating a version of the element in JavaScript before the actual HTML5 tag appears on the page:

    <script>
    document.createElement("article");
    document.createElement("header");
    document.createElement("footer");
    </script>

    The Fix (better)

    Rather than adding your own JS you can take advantage of a standard JS library such as Remy Sharp’s HTML5 Shiv at http://code.google.com/p/html5shiv/. By default the Modernizr library includes HTML5 Shiv, so you don’t need to include the shiv code separately if you are using Modernizr.

    Read the article

  • Slides and Pictures from PowerShell Saturday Columbus 2012

    - by Brian Jackett
    On March 10th, 2012 the first ever PowerShell Saturday conference took place in Columbus, OH and I couldn’t be happier with the outcome. We had 100 attendees from 10 different states (the biggest surprise to me) come to see 6 speakers present on a variety of PowerShell topics: introduction, WMI, SharePoint, Active Directory, Exchange, 3rd party products and more.

    A big thank you also goes out to a number of people.

    Planning committee
    - Wes Stahler, lead organizer of PowerShell Saturday Columbus, president of Central Ohio PowerShell User Group
    - Ed “Microsoft Scripting Guy” Wilson
    - Teresa “The Scripting Wife” Wilson
    - Ashley McGlone
    - Brian T. Jackett (myself)

    Speakers
    - Ed Wilson
    - Ashley McGlone
    - James Brundage
    - Trevor Sullivon
    - Daniel Cruz

    Volunteer
    - Lisa Gardner, fellow Microsoft PFE, volunteered her time on a Saturday to assist with smooth operation of the day

    Facility Coordination
    - Debbie Carrier, facilities coordinator for the Columbus Microsoft Office, helped us out greatly with the venue

    Slides and Script Samples

    I presented my session on “PowerShell for the SharePoint 2010 Developer”. Below you can download the slides and script samples.

    Photos

    I wasn’t able to take too many pictures (only 3) as I was busy doing my presentation, answering questions, and taking care of random items throughout the day. Pictures on Facebook: click here. Pictures on SkyDrive (higher res): PowerShell Saturday Columbus Mar '12.

    Conclusion

    I’m very happy that this first ever PowerShell Saturday was a success. My fellow PFE and speaker Ashley McGlone also has a short write-up on his blog about the event (click here). I have heard rumors that there are other cities starting to plan their own local events. When I hear more details I’ll spread the word here and on Twitter.

    -Frog Out

    Read the article

  • Visual Studio 2010 and .NET Framework 4 Training Kit

    - by Jim Duffy
    Now that you’ve had time to download and install Visual Studio 2010, it’s time to start learning about all the new features and capabilities. That’s where this post comes in. Microsoft released the Visual Studio 2010 and .NET Framework 4 Training Kit on the same day Visual Studio 2010 became available to download. It contains presentations, hands-on labs, and demos on a variety of features and framework technologies including:

    - C# 4
    - Visual Basic 10
    - F#
    - Parallel Extensions
    - Windows Communication Foundation
    - Windows Workflow
    - Windows Presentation Foundation
    - ASP.NET 4
    - Windows 7
    - Entity Framework
    - ADO.NET Data Services
    - Managed Extensibility Framework
    - Visual Studio Team System

    As you can see, the Developer & Platform Evangelism group has gone the extra mile to make sure you have the resources you need to fully leverage the power of Microsoft’s latest version of Visual Studio. Have a day. :-|

    Read the article

  • How Visual Studio 2010 and Team Foundation Server enable Compliance

    - by Martin Hinshelwood
    One of the things that makes Team Foundation Server (TFS) the most powerful Application Lifecycle Management (ALM) platform is the traceability it provides to those that use it. This traceability is crucial to enable many companies to adhere to many of the Compliance regulations to which they are bound (e.g. CFR 21 Part 11 or Sarbanes–Oxley).

    From something as simple as relating Tasks to Check-ins, or being able to see the top 10 files in your codebase that are causing the most Bugs, to identifying which Bugs and Requirements are in which Release – all that information, and more, is available in TFS. Although all of this traceability is available within TFS, you do need to understand that it is not for free. Well… I say that, but if you are using TFS properly you will have this information with no additional work except for firing up the reporting.

    Using Visual Studio ALM and Team Foundation Server you can relate every line of code changed all the way up to Requirements and back down through Test Cases to the Test Results.

    Figure: The only thing missing is Build

    In order to build the relationship model below we need to examine how each of the relationships gets there. Each member of your team, from programmer to tester and Business Analyst to Business, has their role to play to knit this together.

    Figure: The relationships required to make this work can get a little confusing

    If Build is added to this, to relate Work Items to Builds, and with knowledge of which builds are in which environments, you can easily identify what is contained within a Release.

    Figure: How are things progressing

    Along with the ability to produce the progress and trend reports, the traceability that is built into TFS can be used to fulfil most audit requirements out of the box, and augmented to fulfil the rest. In order to understand the relationships, let’s look at each of the important Artifacts and how they are associated with each other…

    Requirements – The root of all knowledge

    Requirements are the thing that the business cares about delivering. These could be derived as User Stories or Business Requirements Documents (BRDs), but they should be what the Business asks for. Requirements can be related to many of the Artifacts in TFS, so let’s look at the model:

    Figure: If the centre of the world was a requirement

    We can track which releases Requirements were scheduled in, but this can change over time as more details come to light.

    Figure: Who edited the Requirement and when

    There is also the ability to query Work Items based on the history of changes that were made to them. This is particularly important with Requirements. It might not be enough to say what Requirements were completed in a given release, but also to know which Requirements were ever assigned to a particular release.

    Figure: Some magic required, but result still achieved

    As an augmentation to this it is also possible to run a query that shows results from the past, just as if we had a time machine. You can take any Query in the system and add an “asof” clause at the end to query historical data in the operational store for TFS.

    select <fields> from WorkItems [where <condition>] [order by <fields>] [asof <date>]

    Figure: Work Item Query Language (WIQL) format

    In order to achieve this you do need to save the query as a *.wiql file to your local computer and edit it in Notepad, but once imported into TFS you can run it any time you want.
    Figure: Saving Queries locally can be useful

    All of these Audit features are available throughout the Work Item Tracking (WIT) system within TFS.

    Tasks – Where the real work gets done

    Tasks are the work horse of the development team, but they are only as useful as Excel if you do not relate them properly to other Artifacts.

    Figure: The Task Work Item Type has its own relationships

    Requirements should be broken down into Tasks that the development team work from to build what is required by the business. This may be done by a small dedicated group or by everyone that will be working on the software team, but however it happens, all of the Tasks created should be a Child of a Requirement Work Item Type.

    Figure: Tasks are related to the Requirement

    Tasks should be used to track the day-to-day activities of the team working to complete the software, and as such they should be kept simple and short, lest developers think they are more trouble than they are worth.

    Figure: Task Work Item Type has a narrower purpose

    Although the Task Work Item Type describes the work that will be done, the actual development work involves making changes to files that are under Source Control. These changes are bundled together in a single atomic unit called a Changeset, which is committed to TFS in a single operation. During this operation developers can associate Work Items with the Changeset.

    Figure: Tasks are associated with Changesets

    Changesets – Who wrote this crap

    Changesets themselves are just an inventory of the changes that were made to a number of files to complete a Task.

    Figure: Changesets are linked by Tasks and Builds

    Figure: Changesets tell us what happened to the files in Version Control

    Although comments can be changed after the fact, the inventory and Work Item associations are permanent, which allows us to Audit all the way down to the individual change level.

    Figure: On Check-in you can resolve a Task which automatically associates it

    Because of this we can view the history on any file within the system and see how many changes have been made and what Changesets they belong to.

    Figure: Changes are tracked at the File level

    What would be even more powerful would be if we could view these changes superimposed over the top of the lines of code. Some people call this a blame tool because it is commonly used to find out which of the developers introduced a bug, but it can also be used as another method of Auditing changes to the system.

    Figure: Annotate shows the lines

    The Annotate functionality allows us to visualise the relationship between the individual lines of code and the Changesets. In addition to this you can create a Label and apply it to a version of your version control. The problem with Labels is that they can be changed after they have been created with no traceability. This makes them practically useless for any sort of compliance audit. So what do you use?

    Branches – And why we need them

    Branches are a really powerful tool for development and release management, but they are most important for audits.

    Figure: One way to Audit releases

    The R1.0 branch can be created from the Label that the Build creates on the R1 line when a Release build was created. It can be created as soon as the Build has been signed off for release. However it is still possible that someone changed the Label between this time and its creation. Another, better, method can be to explicitly link the Build output to the Build.
    Builds – Let’s tie some more of this together

    Builds are the glue that helps us enable the next level of traceability by tying everything together.

    Figure: The dashed pieces are not out of the box but can be enabled

    When the Build is called and starts, it looks at what it has been asked to build and determines what code it is going to get and build.

    Figure: The folder identifies what changes are included in the build

    The Build sets a Label on the Source with the same name as the Build, but the Build itself also includes the latest Changeset ID that it will be building. At the end of the Build, the Build Agent identifies the new Changesets it is building by looking at the Check-ins that have occurred since the last Build.

    Figure: What changes have been made since the last successful Build

    It will then use that information to identify the Work Items that are associated with all of the Changesets that are associated with the Build, and change the “Integrated In” field of those Work Items.

    Figure: Find all of the Work Items to associate with

    The “Integrated In” field of all of the Work Items identified by the Build Agent as being integrated into the completed Build is updated to reflect the Build number that successfully integrated that change.

    Figure: Now we know which Work Items were completed in a build

    Now we can link a single line of code changed all the way back through the Task that initiated the action to the Requirement that started the whole thing, and back down to the Build that contains the finished Requirement. But how do we know whether that Requirement has been fully tested or even meets the original Requirements?

    Test Cases – How we know we are done

    The only way we can know whether a Requirement has been completed to the required specification is to Test that Requirement. In TFS there is a Work Item type called a Test Case. Test Cases enable two scenarios.

    The first scenario is the ability to track and validate Acceptance Criteria in the form of a Test Case. If you agree with the Business a set of goals that must be met for a Requirement to be accepted by them, it makes it difficult for them to reject a Requirement when it passes all of the tests, and it also provides a level of traceability and validation for audit that a feature has been built and tested to order.

    Figure: You can have many Acceptance Criteria for a single Requirement

    It is crucial for this to work that someone from the Business has to sign off on the Test Case moving from the “Design” to “Ready” states.

    The second is the ability to associate an MS Test test with the Test Case, thereby tracking the automated test. This is useful in the circumstance when you want to track a test and the test results of a Unit Test designed to prove the existence, and then the non-existence, of a Bug.

    Figure: Associating a Test Case with an automated Test

    Although it is possible, it may not make sense to track the execution of every Unit Test in your system; there are many Integration and Regression tests that may be automated that it would make sense to track in this way.

    Bug – Let’s not have regressions

    In order to know whether a Bug in the application has been fixed, and to make sure that it does not reoccur, it needs to be tracked.

    Figure: Bugs are the centre of their own world

    If the fix to a Bug is big enough to require that it is broken down into Tasks, then it is probably a Requirement. You can associate a check-in with a Bug and have it tracked against a Build. You would also have one or more Test Cases to prove the fix for the Bug.

    Figure: Bugs have many associations

    This allows you to track Bugs / Defects in your system effectively and report on them.

    Change Request – I am not a feature

    In the CMMI Process template, Change Requests can also be easily tracked through the system. In some cases it can be very important to track Change Requests separately, as an Auditor may want to know what was changed and who authorised it. Again, and similar to Bugs, if the Change Request is big enough that it would need to be broken down into Tasks, it is in reality a new feature and should be tracked as a Requirement.

    Figure: Make sure your Change Requests only affect Requirements and not rewrite them

    Conclusion

    Visual Studio 2010 and Team Foundation Server together provide an exceptional Application Lifecycle Management platform that can help your team comply with even the harshest of Compliance requirements while still enabling them to be Agile. Most Audits are heavy on required documentation, but most of that information is captured for you as long as you do it right. You don’t even need every team member to understand it all, as each of the Artifacts is relevant to a different type of team member.

    - Business Analysts manage Requirements and Change Requests
    - Programmers manage Tasks and check in against Change Requests and Bugs
    - Testers manage Bugs and Test Cases
    - Build Masters manage Builds

    Although there is some crossover, there are still roles or “hats” that are worn. Do you think this is all achievable? Have I missed anything that you think should be there?

    Read the article

  • Did Microsoft designers get their butts kicked 3 years ago?

    - by John Conwell
    This is something I've been wondering about for about a year now. Microsoft has a history of creating very useful products, with lots of useful features. But useful does not mean usable. A lot of the stuff coming out of Redmond in the past 10 years doesn't really seem to have been well thought out from a user design point of view. Lots of extra steps, lots of popup windows... very little innovative thinking going on about the user experience of these products.

    But about a year ago I started seeing changes in the new products coming out of Microsoft. Windows 7 is a good example of a big change. They really got their asses handed to them on Vista, so they had to make a change. But it looks like this change in philosophy has bled over to other areas. The new Office (2010) lineup has a lot of changes in it to make it way more usable.

    Given that big changes like this take about 3 years to go from start to actually shipping product, I'm curious what happened internally at Microsoft that really drove this change in product design. I think that Microsoft got so focused on just adding new functionality for so long, they forgot about the little things that can really make or break a product. Office 2010 is full of these little things that make it much nicer to use. I just hope it's not too late for them.

    Read the article

  • KERPOOOOW!

    - by Matt Christian
    Recently I discovered the colorful world of comic books. In the past I've read comics a few times but never really got into them. When I wanted to start a collection I decided on either video games or comics, yet stayed away from comics because I am less familiar with them.

    In any case, I stopped by my local comic shop and picked up a few comics and a few trade paperbacks. After reading them and understanding their basic flow I began to enjoy not only the stories but the art styles hiding behind those little white bubbles of text (well, they're USUALLY white). On my first stop at the comic store I ended up with:

    - Nemesis #1 (cover A)
    - Shuddertown #1 (cover A I think)
    - Daredevil: King of Hell's Kitchen Trade Paperback
    - Peter Parker: Spiderman - One Small Break Trade Paperback

    It took me about 3-4 days to read all of that, including re-reading the single issues and glancing over the beginning of Daredevil again. After a week of looking around online I knew a little more about the comics I wanted to pick up and the kind of art style I enjoyed. While Peter Parker: Spiderman was ok, I really enjoyed the detailed, realistic look of Daredevil and Shuddertown.

    Now, a few years back I picked up the game The Darkness for PS3. I knew it was based off a comic but never read the comic. I decided I'd pick up a few issues of it and ended up with:

    - The Darkness #80 (cover A)
    - The Darkness #81 (cover A)
    - The Darkness #82 (cover A)
    - The Darkness #83 (cover A)
    - The Darkness Shadows and Flame #1 (one-shot; cover A)
    - The Darkness Origins: Volume 1 Trade Paperback (contains The Darkness #1-6)
    - New Age boards and bags for storing my comics

    The Darkness is relatively good, though jumping from issue #6 to issue #80 I lost a bit on who the enemy in the current series is. I think out of all of them, issue #83 was my favorite. I'm signed up at the local shop to continue getting Nemesis, The Darkness, and Shuddertown, and I'll probably pick up a few different ones this weekend...

    Read the article

  • Sublime Text 2 review!

    - by Anirudha
    A few months ago I was looking for an editor for simple edits to HTML, CSS and JavaScript. Before that I tried Notepad++, which is quite awesome for getting all the work I wanted done. I put 2 editors on my shortlist: the first is Sublime Text and the second is PhpStorm. Both are cross-platform. I tried both and both work fine. I finally went with Sublime Text 2.

    Here is why Sublime Text 2 is awesome:

    - PhpStorm and Sublime Text 2 are both licensed software, but you can evaluate Sublime Text 2 for an unlimited time, while PhpStorm is available for 30 days only.
    - Sublime Text 2 is very memory efficient and lightweight; this is the first thing people find best about it. My problem with PhpStorm is that it sometimes goes unresponsive, as it did when I tried HTML5 Boilerplate. Sublime Text 2 never hangs, whatever the memory size of the project, compared to PhpStorm.
    - In Sublime Text 2 you can code faster after learning some shortcuts and the basics that apply specifically to it.
    - Sublime Text 2 comes with a distraction-free mode, while PhpStorm has nothing beyond full-screen.
    - Sublime Text 2 supports almost every language. I have seen many people in the community move from their PHP IDE to Sublime Text 2. You can use LESS and CoffeeScript in it.
    - There is a lot of customization for Sublime Text 2 out on GitHub.

    In the past I also tried WebMatrix; the latest version of WebMatrix has nothing as good as Sublime Text 2. Sublime Text 2 is the best fit for my requirements.

    So cheers – people should try Sublime Text 2 once if they are looking for a solid tool for learning new things. Sublime Text 2 can be downloaded from http://www.sublimetext.com/.

    Thanks for reading my post.

    Read the article

  • System Center Service Manager 2010 SP1 Resource Links

    - by Mickey Gousset
    System Center Service Manager is a new product that Microsoft released last year to handle incident/problem/change management. Currently the latest version is System Center Service Manager SP1, and there is a Cumulative Update for SP1 that you should grab as well. A strong ecosystem is starting to spring up around this product, with tools and connectors that fill needs not built into the product. To find the latest list of these items, you need to go to the SCSM 2010 Downloads page. Here you can find a list of the latest tools and add-ons from Microsoft, as well as third-party vendors. The Microsoft Exchange connector and the PowerShell Cmdlets are definitely worth downloading and installing.

    Read the article

  • Using OData to get Mix10 files

    - by Jon Dalberg
    There has been a lot of talk around OData lately (go to odata.org for more information) and I wanted to get all the videos from Mix ‘10: two great tastes that taste great together. Luckily, Mix has exposed the ‘10 sessions via OData at http://api.visitmix.com/OData.svc, now all I have to do is slap together a bit of code to fetch the videos.

    Step 1 (cut a hole in the box)

    Create a new console application and add a new service reference.

    Step 2 (put your junk in the box)

    Write a smidgen of code:

    static void Main(string[] args)
    {
        var mix = new Mix.EventEntities(new Uri("http://api.visitmix.com/OData.svc"));

        var files = from f in mix.Files
                    where f.TypeName == "WMV"
                    select f;

        var web = new WebClient();

        var myVideos = Path.Combine(Environment.GetFolderPath(Environment.SpecialFolder.MyVideos), "Mix10");

        Directory.CreateDirectory(myVideos);

        files.ToList().ForEach(f =>
        {
            var fileName = new Uri(f.Url).Segments.Last();
            Console.WriteLine(f.Url);
            web.DownloadFile(f.Url, Path.Combine(myVideos, fileName));
        });
    }

    Step 3 (have her open the box)

    Compile and run. As you can see, the client reference created for the OData service handles almost everything for me. Yeah, I know there is some batch file to download the files, but it relies on cUrl being on the machine – and I wanted an excuse to work with an OData service. Enjoy!
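    One variation worth noting (my own sketch, not from the original post; the helper name is hypothetical): WebClient.DownloadFile blocks with no feedback per file, so for the larger WMVs you could switch to the event-based async calls on WebClient to get a progress readout while still downloading one file at a time.

    // Downloads a single file while printing progress to the console.
    static void DownloadWithProgress(string url, string destination)
    {
        using (var web = new WebClient())
        using (var done = new ManualResetEvent(false))
        {
            web.DownloadProgressChanged += (s, e) =>
                Console.Write("\r{0} {1}%  ", Path.GetFileName(destination), e.ProgressPercentage);
            web.DownloadFileCompleted += (s, e) => { Console.WriteLine(); done.Set(); };

            web.DownloadFileAsync(new Uri(url), destination);
            done.WaitOne(); // keep the console app alive until the current file finishes
        }
    }

    It assumes System.Net, System.IO and System.Threading are in scope; you would call it in place of web.DownloadFile inside the ForEach.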

    Read the article

  • South Florida .Net Code Camp - February 12th, 2011

    - by Sam Abraham
    Later this week, I will be heading to our annual South Florida .Net Code Camp, an all-day free “Geek Fest” taking place on February 12th, 2011. This year’s code camp will be conveniently taking place at Nova Southeastern University in Ft Lauderdale.

    With more than 700 already registered, this year’s event is bound to exceed last year’s registration and attendance. We are also fortunate to have secured the backing of a large number of our kind sponsors, supporters and volunteers, with our efforts led by our chief organizer, Fladotnet founder and Microsoft MVP, Dave Noderer.

    As a member of the volunteer organizing team, I have gotten good exposure to what it takes to run a code camp and have come to appreciate the tremendous amount of work such a large event takes to put together, handling logistics such as venue, food, speaker registration and scheduling, and website updates; that of course in addition to the essential outreach efforts necessary to secure sponsorships.

    As Dave puts it, Code Camp is a great venue for those who want to gain exposure and experience as technical speakers to try it out, just as much as it is a forum for experienced speakers to share the latest on their topics of interest. So far, 65 speakers are already scheduled to speak, bringing us an array of diverse topics.

    I will be speaking on ASP.Net MVC3 and the Razor view engine, and will present a brief introduction to NuGet. Below is a brief abstract of the session. For more information on code camp and to register, please visit http://www.fladotnet.com/codecamp/Default.aspx

    Hope to see you there!

    Diving into ASP.Net MVC 3 and the Razor View Engine

    The first few minutes of this session will bring those who might not have previously used or learned about MVC up to speed with the necessary rules and conventions for an MVC project. We will then cover the latest additions to ASP.Net MVC 3 and discuss the value it brings with its new Razor View Engine and the various project template improvements made in Visual Studio 2010. We will also explore how to leverage both Razor and ASPX View Engines in one project. Audience participation is strongly encouraged and will be solicited.

    Read the article

  • Gems In The Visual Studio 2010 Training Kit - Introduction to ASP.NET MVC: Learning Labs

    - by Jim Duffy
    Following up on my prior “gems post” is another nugget I found in the Visual Studio 2010 and .NET Framework 4 Training Kit. ASP.NET MVC has established quite a bit of momentum in the ASP.NET development community since it was introduced in early-ish 2009 though I’m sure there are many developers who haven’t had the time or opportunity to find out what it is, not to mention learn how to use it. If you’re one of those “I’ve heard of it but I’m not sure what it really is” developers then I suggest you start your research here. Ok, back to the gem. There are a number of fantastic MVC learning resources out there including the video tutorials on the ASP.NET MVC website. Another learning resource for your journey along the yellow brick road into ASP.NET MVC land are the hands-on learning labs contained in the Visual Studio 2010 and .NET Framework 4 Training Kit. These hands-on exercises walk you through the process of creating the “M”, the “V”s, and the “C”s of ASP.NET MVC and help you gain a solid foothold into the details of creating and understanding ASP.NET MVC applications. Have a day. :-|

    Read the article

  • June 2010 Chicago Architects Group Meeting

    - by Tim Murphy
    The Chicago Architects Group will be holding its next meeting on June 15th. Please come and join us and get involved in our architect community.

    Register

    Presenter: Tim Murphy
    Topic: Document Generation Architectures
    Location: TechNexus, 200 S. Wacker Dr., Suite 1500, Room A/B, Chicago, IL 60606
    Time: 5:30 - Doors open at 5:00
    Sponsored by:

    del.icio.us Tags: Chicago Architects Group, Azure, Scott Seely

    Read the article

  • PowerShell – Recycle All IIS App Pools

    - by Lance Robinson
    With a little help from Shay Levy’s post on Stack Overflow and the MSDN documentation, I added this handy function to my profile to automatically recycle all IIS app pools.

    function Recycle-AppPools {
        param(
            [string] $server = "3bhs001",
            [int] $mode = 1  # ManagedPipelineModes: 0 = integrated, 1 = classic
        )

        $iis = [adsi]"IIS://$server/W3SVC/AppPools"
        $iis.psbase.children | %{
            $pool = [adsi]($_.psbase.path)
            # AppPoolStates: 1 = starting, 2 = started, 3 = stopping, 4 = stopped
            if ($pool.AppPoolState -eq 2 -and $pool.ManagedPipelineMode -eq $mode) {
                $pool.psbase.invoke("recycle")
            }
        }
    }
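    As an aside (my own rough equivalent, not from Lance’s post), the same ADSI calls can be made from C# via System.DirectoryServices, assuming the IIS 6 management compatibility components are installed so the IIS:// provider is available:

    using System;
    using System.DirectoryServices; // add a reference to System.DirectoryServices.dll

    class AppPoolRecycler
    {
        // Rough C# equivalent of the PowerShell function: recycle started app pools
        // in the given managed pipeline mode (0 = integrated, 1 = classic).
        static void RecycleAppPools(string server, int mode)
        {
            using (var appPools = new DirectoryEntry("IIS://" + server + "/W3SVC/AppPools"))
            {
                foreach (DirectoryEntry pool in appPools.Children)
                {
                    int state = Convert.ToInt32(pool.InvokeGet("AppPoolState"));          // 2 = started
                    int pipeline = Convert.ToInt32(pool.InvokeGet("ManagedPipelineMode"));

                    if (state == 2 && pipeline == mode)
                        pool.Invoke("Recycle", null);
                }
            }
        }
    }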

    Read the article

  • Autocomplete in Silverlight with Visual Studio 2010

    - by Sayre Collado
    Last week I kept searching for how to use autocomplete in Silverlight with Visual Studio 2010, but most of the examples I found use a TextBox or ComboBox for the autocomplete. I tried to study those examples and apply them to the single AutoCompleteBox from the toolbox in my Silverlight project, and now this is the result. I will use a database again from my previous post (Silverlight Simple DataBinding in DataGrid) to show how the autocomplete works with a database. This is the output:

    First, this is the setup for my AutoCompleteBox:

    //The tags for the AutoCompleteBox in XAML

    Second, my simple snippets:

    // Event for the autocomplete to send a text string to my function
    private void autoCompleteBox1_KeyUp(object sender, KeyEventArgs e)
    {
        autoCompleteBox1.Populating += (s, args) =>
        {
            args.Cancel = true;
            var c = new Service1Client();
            c.GetListByNameCompleted += new EventHandler<GetListByNameCompletedEventArgs>(c_GetListByNameCompleted);
            c.GetListByNameAsync(autoCompleteBox1.Text);
        };
    }

    // Getting the result from the database
    void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
    {
        autoCompleteBox1.ItemsSource = e.Result;
        autoCompleteBox1.PopulateComplete();
    }

    The snippets above show how to use the AutoCompleteBox with data from the database that is bound to the DataGrid. But what if we want to show the result in the DataGrid while the autocomplete changes its items source? OK, just add one line to c_GetListByNameCompleted:

    void c_GetListByNameCompleted(object sender, GetListByNameCompletedEventArgs e)
    {
        autoCompleteBox1.ItemsSource = e.Result;
        autoCompleteBox1.PopulateComplete();
        dataGrid1.ItemsSource = e.Result;
    }

    Read the article

  • BizTalk 2009 - Custom Functoid Categories

    - by StuartBrierley
    I recently had cause to code a number of custom functoids to aid with some maps that I was writing. Once these were developed and deployed to C:\Program Files\Microsoft BizTalk Server 2009\Developer Tools\Mapper Extensions, a quick refresh allowed them to appear in the toolbox. After dropping these on a map and configuring the appropriate inputs I tested the map to check that they worked as expected. All but one of the functoids worked as expected, but the final functoid appeared not to be firing at all.

    I had already tested the code used in a simple test harness application, so I was confident in the code used, but I still needed to figure out what the problem might be. Debugging the map helped me on the way; for some reason the functoid in question was not shown correctly – the functoid definition was wrong.

    After some investigation I found that the functoid type you assign when coding a custom functoid affects more than just the category it appears in; different functoid types have different capabilities, including what they can link to. For example, a Logical functoid cannot provide content for an output element, it can only say whether the element exists. Map this via a Value Mapping functoid and the value of true or false can be seen in the output element.

    The functoid I was having problems with was one where I had used the XPath functoid type. This had seemed to be a good fit as I was looking up content in a config file using XPath and I wanted it to appear in the Advanced area. From the table below you can see that this functoid type is marked as "Internal Use Only", preventing it from being used for custom functoids. Changing my type to String allowed the functoid to function as expected.

    Category         | Description                                                                                                              | Toolbox Group
    -----------------|--------------------------------------------------------------------------------------------------------------------------|--------------
    Assert           | Internal Use Only                                                                                                          | Advanced
    Conversion       | Converts characters to and from numerics and converts numbers from one base to another.                                   | Conversion
    Count            | Internal Use Only                                                                                                          | Advanced
    Cumulative       | Performs accumulations of the value of a field that occurs multiple times in a source document and outputs a single output. | Cumulative
    DatabaseExtract  | Internal Use Only                                                                                                          | Database
    DatabaseLookup   | Internal Use Only                                                                                                          | Database
    DateTime         | Adds date, time, date and time, or add days to a specified date, in output data.                                          | Date/Time
    ExistenceLooping | Internal Use Only                                                                                                          | Advanced
    Index            | Internal Use Only                                                                                                          | Advanced
    Iteration        | Internal Use Only                                                                                                          | Advanced
    Keymatch         | Internal Use Only                                                                                                          | Advanced
    Logical          | Controls conditional behavior of other functoids to determine whether particular output data is created.                  | Logical
    Looping          | Internal Use Only                                                                                                          | Advanced
    MassCopy         | Internal Use Only                                                                                                          | Advanced
    Math             | Performs specific numeric calculations such as addition, multiplication, and division.                                    | Mathematical
    NilValue         | Internal Use Only                                                                                                          | Advanced
    Scientific       | Performs specific scientific calculations such as logarithmic, exponential, and trigonometric functions.                  | Scientific
    Scripter         | Internal Use Only                                                                                                          | Advanced
    String           | Manipulates data strings by using well-known string functions such as concatenation, length, find, and trim.              | String
    TableExtractor   | Internal Use Only                                                                                                          | Advanced
    TableLooping     | Internal Use Only                                                                                                          | Advanced
    Unknown          | Internal Use Only                                                                                                          | Advanced
    ValueMapping     | Internal Use Only                                                                                                          | Advanced
    XPath            | Internal Use Only                                                                                                          | Advanced

    Links
    http://msdn.microsoft.com/en-us/library/microsoft.biztalk.basefunctoids.functoidcategory(BTS.20).aspx
    http://blog.eliasen.dk/CommentView,guid,d33b686b-b059-4381-a0e7-1c56e808f7f0.aspx
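    To make the fix concrete, here is a skeleton of roughly what the corrected functoid ends up looking like. This is a hedged sketch rather than the actual production code from the post: the class name, resource IDs and functoid ID below are made up for illustration, but the important line is setting Category to FunctoidCategory.String instead of FunctoidCategory.XPath.

    using System.Reflection;
    using Microsoft.BizTalk.BaseFunctoids;

    public class ConfigLookupFunctoid : BaseFunctoid
    {
        public ConfigLookupFunctoid()
        {
            // Custom functoid IDs should be unique and greater than 6000.
            this.ID = 6001;

            // Resource assembly holds the name, tooltip, description and bitmap (resource IDs are hypothetical).
            SetupResourceAssembly("MyFunctoids.FunctoidResources", Assembly.GetExecutingAssembly());
            SetName("IDS_CONFIGLOOKUP_NAME");
            SetTooltip("IDS_CONFIGLOOKUP_TOOLTIP");
            SetDescription("IDS_CONFIGLOOKUP_DESCRIPTION");
            SetBitmap("IDB_CONFIGLOOKUP_BITMAP");

            // The key line: String is usable by custom functoids, XPath is "Internal Use Only".
            this.Category = FunctoidCategory.String;

            SetMinParams(1);
            SetMaxParams(1);
            AddInputConnectionType(ConnectionType.AllExceptRecord);
            OutputConnectionType = ConnectionType.AllExceptRecord;

            // Points the mapper at the method that does the work at runtime.
            SetExternalFunctionName(GetType().Assembly.FullName, typeof(ConfigLookupFunctoid).FullName, "LookupValue");
        }

        // Called by the map at runtime; a real implementation would read the config file here.
        public string LookupValue(string key)
        {
            return key;
        }
    }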

    Read the article

  • PostSharp deployment to build machine - use Setup installation, not NuGet package.

    - by Michael Freidgeim
    PostSharp has well documented the different methods of installation. I chose installing the NuGet packages, because according to "Deploying PostSharp into a Source Repository", NuGet is the easiest way to add PostSharp to a project without installing the product on every machine. However, it didn't work well for me.

    I added the PostSharp NuGet package to one project in the solution. When I wanted to use PostSharp in another project, the Visual Studio tab showed that PostSharp was not enabled for that project. I added the NuGet package to the new project, which installed a new version of the package in a new Packages subfolder. When I wanted to reference PostSharp from a third project, I ended up with yet another version of PostSharp installed. Additionally, multiple versions of Diagnostics were created. It definitely causes confusion and errors.

    We experienced more problems on the build server. According to "Using PostSharp on a Build Server", "If you chose to deploy PostSharp in the source repository, it does not need to be installed specifically on the build server." It didn't work on our build server. I kept getting the errors "The "AddIns" parameter is not supported by the "PostSharp21" task." and "The "DisableSystemBindingPolicies" parameter is not supported by the "PostSharp21" task."

    From my experience the only way to have the latest version of PostSharp working on the build server is to install it using Setup, as described in "Deploying PostSharp with the Setup Program".

    Gael acknowledged the issues with possible version conflicts; see http://support.sharpcrafters.com/discussions/problems/388-the-postsharp21-task-failed-unexpectedly

    Read the article

  • Atlanta Code Camp 2012

    - by SURESH GIRIRAJAN
    It was a really exciting weekend at Atlanta Code Camp 2012. This was my first code camp; I presented on “Windows Kinect for Enterprise”. I walked through a couple of demos of how we can integrate Kinect with a WPF application. In one of the demos I walked through how you can integrate Kinect with Microsoft Lync 2010, and the other was a car console app. You can check the uploaded code here. I appreciate all the folks who attended my session and thanks to all the organizers.

    Windows Kinect for Enterprise - View more PowerPoint from sureshgiriraja

    Read the article

  • Working with packed dates in SSIS

    - by Jim Giercyk
    One of the challenges recently thrown my way was to read an EBCDIC flat file, decode packed dates, and insert the dates into a SQL table. For those unfamiliar with packed data, it is a way to store data at the nibble level (half a byte), and was often used by mainframe programmers to conserve storage space. In the case of my input file, the dates were 2 bytes long and represented the number of days that have passed since 01/01/1950. My first thought was, in the words of Scooby, Hmmmmph? But, I love a good challenge, so I dove in.

    Reading in the flat file was rather simple. The only difference between reading an EBCDIC and an ASCII file is the Code Page option in the connection manager. In my case, I needed to use Code Page 1140 for EBCDIC (I could have also used Code Page 37). Once the code page is set correctly, SSIS can understand what it is reading and it will convert the output to the default code page, 1252. However, packed data is either unreadable or produces non-alphabetic characters, as we can see in the preview window.

    Column 1 is actually the packed date; columns 0 and 2 are the values in the rest of the file. We are only interested in Column 1, which is a 2 byte field representing a packed date. We know that 2 bytes of packed data can be stored in 1 byte of character data, so we are working with 4 packed digits in 2 character bytes. If you are confused, stay tuned… this will make sense in a minute.

    Right-click on your Flat File Source shape and select “Show Advanced Editor”. Here is where the magic begins. By changing the properties of the output columns, we can access the packed digits from each byte. By default, the Output Column data type is DT_STR. Since we want to look at the bytes individually and not the entire string, change the data type to DT_BYTES. Next, and most important, set UseBinaryFormat to TRUE. This will write the HEX VALUES of the output string instead of writing the character values. Now we are getting somewhere!

    Next, you will need to use a Data Conversion shape in your Data Flow to transform the 2 position byte stream to a 4 position Unicode string containing the packed data. You need the string to be 4 bytes long because it will contain the 4 packed digits. Here is what that should look like in the Data Conversion shape.

    Direct the output of your data flow to a test table or file to see the results. In my case, I created a test table.

    Hold on a second! That doesn't look like a date at all. No, of course not. It is a hex number which represents the days which have passed between 01/01/1950 and the date. We have to convert the hex value to a decimal value, and use the DATEADD function to get a date value.
    Luckily, I have created a function to convert Hex to Decimal:

    -- =============================================
    -- Author:       Jim Giercyk
    -- Create date:  March, 2012
    -- Description:  Converts a Hex string to a decimal value
    -- =============================================
    CREATE FUNCTION [dbo].[ftn_HexToDec]
    (
        @hexValue NVARCHAR(6)
    )
    RETURNS DECIMAL
    AS
    BEGIN
        -- Declare the return variable here
        DECLARE @decValue DECIMAL

        IF @hexValue LIKE '0x%' SET @hexValue = SUBSTRING(@hexValue,3,4)

        DECLARE @decTab TABLE
        (
            decPos1 VARCHAR(2),
            decPos2 VARCHAR(2),
            decPos3 VARCHAR(2),
            decPos4 VARCHAR(2)
        )

        DECLARE @pos1 VARCHAR(1) = SUBSTRING(@hexValue,1,1)
        DECLARE @pos2 VARCHAR(1) = SUBSTRING(@hexValue,2,1)
        DECLARE @pos3 VARCHAR(1) = SUBSTRING(@hexValue,3,1)
        DECLARE @pos4 VARCHAR(1) = SUBSTRING(@hexValue,4,1)

        INSERT @decTab VALUES
        (CASE
            WHEN @pos1 = 'A' THEN '10'
            WHEN @pos1 = 'B' THEN '11'
            WHEN @pos1 = 'C' THEN '12'
            WHEN @pos1 = 'D' THEN '13'
            WHEN @pos1 = 'E' THEN '14'
            WHEN @pos1 = 'F' THEN '15'
            ELSE @pos1
         END,
         CASE
            WHEN @pos2 = 'A' THEN '10'
            WHEN @pos2 = 'B' THEN '11'
            WHEN @pos2 = 'C' THEN '12'
            WHEN @pos2 = 'D' THEN '13'
            WHEN @pos2 = 'E' THEN '14'
            WHEN @pos2 = 'F' THEN '15'
            ELSE @pos2
         END,
         CASE
            WHEN @pos3 = 'A' THEN '10'
            WHEN @pos3 = 'B' THEN '11'
            WHEN @pos3 = 'C' THEN '12'
            WHEN @pos3 = 'D' THEN '13'
            WHEN @pos3 = 'E' THEN '14'
            WHEN @pos3 = 'F' THEN '15'
            ELSE @pos3
         END,
         CASE
            WHEN @pos4 = 'A' THEN '10'
            WHEN @pos4 = 'B' THEN '11'
            WHEN @pos4 = 'C' THEN '12'
            WHEN @pos4 = 'D' THEN '13'
            WHEN @pos4 = 'E' THEN '14'
            WHEN @pos4 = 'F' THEN '15'
            ELSE @pos4
         END)

        SET @decValue = (CONVERT(INT,(SELECT decPos4 FROM @decTab)))         +
                        (CONVERT(INT,(SELECT decPos3 FROM @decTab))*16)      +
                        (CONVERT(INT,(SELECT decPos2 FROM @decTab))*(16*16)) +
                        (CONVERT(INT,(SELECT decPos1 FROM @decTab))*(16*16*16))

        RETURN @decValue
    END
    GO

    Making use of the function, I found the decimal conversion, added that number of days to 01/01/1950 and FINALLY arrived at my “unpacked relative date”. Here is the query I used to retrieve the formatted date, and the result set which was returned:

    SELECT [packedDate] AS 'Hex Value',
           dbo.ftn_HexToDec([packedDate]) AS 'Decimal Value',
           CONVERT(DATE,DATEADD(day,dbo.ftn_HexToDec([packedDate]),'01/01/1950'),101) AS 'Relative String Date'
      FROM [dbo].[Output Table]

    This technique can be used any time you need to retrieve the hex value of a character string in SSIS. The date example may be a bit difficult to understand at first, but with SSIS becoming the preferred tool for enterprise level integration for many companies, there is no doubt that developers will encounter these types of requirements with regularity in the future. Please feel free to contact me if you have any questions.
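    As a quick sanity check (my addition, not part of the original post), the same hex-to-date conversion can be reproduced in a few lines of C#, which is handy for spot-verifying rows from the test table:

    // Example packed value; substitute a hex string taken from your own output table.
    string hexValue = "0x4F2A";

    // Strip the 0x prefix and convert the base-16 string to a day count.
    int days = Convert.ToInt32(hexValue.Replace("0x", ""), 16);

    // Add the day count to the 01/01/1950 epoch used by the mainframe file.
    DateTime relativeDate = new DateTime(1950, 1, 1).AddDays(days);

    Console.WriteLine("{0} -> {1} days -> {2:MM/dd/yyyy}", hexValue, days, relativeDate);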

    Read the article

  • A little tidbit on Team Build 2010 and error MSB3147

    - by Enrique Lima
    The problem? Performing a build on a ClickOnce solution would not be successful due to the setup.bin not being located. Ok, now what? Researched from corner to corner, install, re-install, update.  Found some interesting posts to fix the issue, but most of them were focused on Team Foundation Server/Team Build 2008, and some other on 2005.  The other interesting tidbit was the frequent indication to modify the registry to help Team Build find the bootstrapper. Background info:  This was a migration I posted about a few days ago, a 32 bit TFS implementation to a full 64 bit TFS implementation.  Now, the project has binaries and dependencies on X86 (This piece of information became essential to moving from a failed build to a successful build). So, what’s the fix? The trick in this case was to go back into the Build Type and check the properties/configuration.  Upon further investigation, I found the following:  Once you Edit the Build Definition, then select Process, expand 3. Advanced and look for MSBuild Platform, switch from Auto to X86.  Ran the Build, and success!

    Read the article

  • Windows Embedded in Padua

    - by Valter Minute
    On Tuesday, June 8th, at the University of Padua, I will be giving a talk on Windows CE as part of the event “Workshop sulle nuove architetture per sistemi embedded” (workshop on new architectures for embedded systems). Within the same event, other sessions will cover the ARM architecture (in particular the new Cortex A8 cores), advanced applications and embedded Linux. The agenda (packed and interesting) and the registration link can be found here: http://www.arrowitaly.it/training-events/training-events/dettaglio-training/article/workshop-sulle-nuove-architetture-per-sistemi-embedded/?tx_ttnews[backPid]=1336&cHash=ae34e269e1

    Read the article

  • SQL Azure Data Sync

    - by kaleidoscope
    The Microsoft Sync Framework Power Pack for SQL Azure contains a series of components that improve the experience of synchronizing with SQL Azure. This includes runtime components that optimize performance and simplify the process of synchronizing with the cloud.

    SQL Azure Data Sync allows developers and DBAs to:

    - Link existing on-premises data stores to SQL Azure.
    - Create new applications in Windows Azure without abandoning existing on-premises applications.
    - Extend on-premises data to remote offices, retail stores and mobile workers via the cloud.
    - Take Windows Azure and SQL Azure based web applications offline to provide an “Outlook like” cached-mode experience.

    The Microsoft Sync Framework Power Pack for SQL Azure is comprised of the following:

    - SqlAzureSyncProvider
    - Sql Azure Offline Visual Studio Plug-In
    - SQL Azure Data Sync Tool for SQL Server
    - New SQL Azure Events
    - Automated Provisioning

    Geeta

    Read the article

  • AJI Report #17 | Javier Lozano on Cloud Development and ASP.NET

    - by Jeff Julian
    Javier Lozano opens up the conversation with John and Jeff about the importance of web applications in the cloud, and we walk through some options for enterprise developers to consume today. Javier has been an ASP.NET MVP and ASP.NET Insider for years and is a great resource in the Midwest when it comes to ASP.NET. Javier is one of the organizers of the ASP.NET conference, aspConf.

    Listen to the Show

    Site: http://lozanotek.com
    Conference: aspConf
    Twitter: @jglozano

    Read the article

  • ASP.NET vNext Blog Post Series

    - by Soe Tun
    Originally posted on: http://geekswithblogs.net/stun/archive/2014/06/04/asp.net-vnext-blog-post-series.aspx

    ASP.NET vNext Blog Post Series

    ASP.NET vNext was announced at TechEd 2014, and I have been playing around with it a bit. ASP.NET vNext is an exciting and revolutionary change for the Microsoft .NET development platform. ASP.NET vNext is now open source, and available on GitHub at this location: https://github.com/aspnet/Home. I want to start a blog post series on ASP.NET vNext, and share my experience as I learn more about it.

    Keeping it simple

    Each blog post in the series will be short and simple so I can write them in a short amount of time, and keep it focused on one (at most two) topic(s) per post. My goal is to make it easy to absorb the information, as there is a ton of great new stuff to cover. Many other people in the community have blogged about the key new features of ASP.NET vNext. I will link to those blog posts in my next blog post.

    MVC 6 POCO Controller

    Today, I want to start this blog post series with a teaser code snippet for those developers familiar with ASP.NET MVC. The Getting Started with ASP.NET MVC 6 article from the ASP.NET website shows how to write a lightweight POCO (plain-old CLR object) MVC Controller class in the upcoming ASP.NET MVC 6. However, it doesn't show us how to use the IActionResultHelper interface to render a View. This is how I wrote my POCO MVC Controller, based on the https://github.com/aspnet/Home/blob/master/samples/HelloMvc/Controllers/HomeController.cs sample from GitHub.

    Note that this may not be the best way to write it, but this is good enough for now.

    using Microsoft.AspNet.Mvc;
    using Microsoft.AspNet.Mvc.ModelBinding;
    using MvcSample.Web.Models;

    namespace MvcSample.Web
    {
        public class HomeController
        {
            IActionResultHelper html;
            IModelMetadataProvider mmp;

            public HomeController(IActionResultHelper h, IModelMetadataProvider mmp)
            {
                this.html = h;
                this.mmp = mmp;
            }

            public IActionResult Index()
            {
                var viewData = new ViewDataDictionary<User>(mmp)
                {
                    Model = User()
                };

                return html.View("Index", viewData);
            }

            public User User()
            {
                return new User { Name = "My name", Address = "My address" };
            }
        }
    }

    Please feel free to give me feedback as this will greatly help me organize the blog posts in this series, and plan ahead. Thanks for reading!
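    For comparison, a conventional (non-POCO) controller deriving from the MVC Controller base class would look roughly like the sketch below; the base class supplies the View() helper, so it hides the IActionResultHelper and IModelMetadataProvider plumbing that the POCO version has to take as constructor dependencies. This is my own illustration of the contrast, not code taken from the vNext samples:

    using Microsoft.AspNet.Mvc;
    using MvcSample.Web.Models;

    namespace MvcSample.Web
    {
        public class HomeController : Controller
        {
            public IActionResult Index()
            {
                // View(model) builds the ViewDataDictionary and resolves the view for us.
                return View(User());
            }

            public User User()
            {
                return new User { Name = "My name", Address = "My address" };
            }
        }
    }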

    Read the article
