Search Results

Search found 9729 results on 390 pages for 'timid developer'.

  • MSDN Simulcast Event: Take Your Applications Sky-High with Cloud Computing and the Windows Azure Platform

    Join your local MSDN Events team as we take a deep dive into Microsoft Windows Azure. We'll start with a developer-focused overview of this brave new platform and the cloud computing services that can be used to build amazing applications. As the day unfolds, we'll explore data storage, Microsoft SQL Azure, and the basics of deployment with Windows Azure.

    Read the article

  • Oracle OpenWorld Preview: Oracle Social Network Technical Tour

    - by kellsey.ruppel
      Originally posted by Jake Kuramoto on The Apps Lab blog. Yesterday, I told you about the Oracle Social Network Developer Challenge we’ll be hosting at OpenWorld (@oracleopenworld) next week. If you’re attending OpenWorld or JavaOne (@javaoneconf) and want to get hands-on experience with Oracle Social Network and show off your coding chops, this is the event for you. Go ahead and register. I’ll wait. But wait, there’s more. If you’re not sure you’ll have the time for the Challenge, don’t want to embarrass anyone with your awesome skills, have a hectic schedule and can’t commit, or just want to learn more about Oracle Social Network and how to extend it, then the Oracle Social Network Technical Tour is for you. Read Jake's original entry to learn more about The Tour!

    Read the article

  • HTML/CSS vs CMS

    - by Matt
    I am currently a CS student and an aspiring programmer/web developer. I am wondering whether it is worth taking the time to master HTML and CSS to make websites when CMS services and WYSIWYG editors (WordPress, Squarespace) seem to be becoming more and more functional. Does anyone think these publishing services might eventually make the need to design websites from raw code unnecessary? If not, please explain why. If designing a website eventually becomes as simple as using Photoshop, I would much rather invest my time in programming languages.

    Read the article

  • Simple Excel Export with EPPlus

    - by Jesse Taber
    Originally posted on: http://geekswithblogs.net/GruffCode/archive/2013/10/30/simple-excel-export-with-epplus.aspx
    Anyone I’ve ever met who works with an application that sits in front of a lot of data loves it when they can get that data exported to an Excel file to mess around with offline. As both developer and end user of a little website project that I’ve been working on, I found myself wanting to be able to get a bunch of the data that the application was collecting into an Excel file. The great thing about being both an end user and a developer on a project is that you can build the features that you really want! While putting this feature together I came across the fantastic EPPlus library. This library is certainly very well known and popular, but I was so impressed with it that I thought it was worth a quick blog post. This library is extremely powerful; it lets you create and manipulate Excel 2007/2010 spreadsheets in .NET code with a high degree of flexibility. My only gripe with the project is that they are not touting how insanely easy it is to build a basic Excel workbook from a simple data source. If I were running this project, the approach I’m about to demonstrate in this post would be front and center on the landing page, because it shows how easy it really is to get started and serves as a good way to ease yourself into some of the more advanced features. The website in question uses RavenDB, which means that we’re dealing with POCOs to model the data throughout all layers of the application. I love working like this, so when it came time to figure out how to export some of this data to an Excel spreadsheet I wanted to find a way to take an IEnumerable<T> and just have it dumped to Excel, with each item in the collection modeled as a single row in the Excel worksheet. Consider the following class:

        public class Employee
        {
            public int Id { get; set; }
            public string Name { get; set; }
            public decimal HourlyRate { get; set; }
            public DateTime HireDate { get; set; }
        }

    Now let’s say we have a collection of these represented as an IEnumerable<Employee> and we want to be able to output it to an Excel file for offline querying/manipulation. As it turns out, this is dead simple to do with EPPlus. Have a look:

        public void ExportToExcel(IEnumerable<Employee> employees, FileInfo targetFile)
        {
            using (var excelFile = new ExcelPackage(targetFile))
            {
                var worksheet = excelFile.Workbook.Worksheets.Add("Sheet1");
                worksheet.Cells["A1"].LoadFromCollection(Collection: employees, PrintHeaders: true);
                excelFile.Save();
            }
        }

    That’s it. Let’s break down what’s going on here:

    1. Create an ExcelPackage to model the workbook (Excel file). Note that the ‘targetFile’ value here is a FileInfo object representing the location on disk where I want the file to be saved.
    2. Create a worksheet within the workbook.
    3. Get a reference to the top-leftmost cell (addressed as A1) and invoke the ‘LoadFromCollection’ method, passing it our collection of Employee objects. Behind the scenes this reflects over the properties of the type provided and pulls out any public members to become columns in the resulting Excel output. The ‘PrintHeaders’ parameter tells EPPlus to grab the name of each property and put it in the first row.
    4. Save the Excel file.

    All of the heavy lifting here is being done by the ‘LoadFromCollection’ method, and that’s a good thing. Now, this was really easy to do, but it has some limitations. Using this approach you get a very plain, un-styled Excel worksheet: the column widths are all set to the default, and the number format for all cells is ‘General’ (which proves particularly interesting if you have a DateTime property in your data source). I’m a “no frills” guy, so I wasn’t bothered at all by trading off style and formatting for simplicity. That said, EPPlus has tons of samples that you can download illustrating how to apply styles and formatting to cells, and a ton of other advanced features that are way beyond the scope of this post.
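    As a small example of closing that gap, here is a sketch of how the HireDate column from the export above could be given a proper date format (the column index and format string are my own illustration, not from the original post):

        public void FormatHireDateColumn(FileInfo targetFile)
        {
            using (var excelFile = new ExcelPackage(targetFile))
            {
                var worksheet = excelFile.Workbook.Worksheets["Sheet1"];

                // HireDate is the fourth public property on Employee, so
                // LoadFromCollection placed it in column D (index 4).
                worksheet.Column(4).Style.Numberformat.Format = "yyyy-mm-dd";

                // Optionally size all columns to fit their contents.
                worksheet.Cells[worksheet.Dimension.Address].AutoFitColumns();

                excelFile.Save();
            }
        }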

    Read the article

  • How can you learn to design nice looking websites?

    - by Richard
    I am a moderately capable web developer. I can put stuff where I want it to go and add some jQuery if I need to. However, when making my own website (which I am starting to do) I have no idea how to design it. If someone were to sit next to me, point to the screen and say "put this picture there, text there", I could do that quite easily. But a site designed on my own, with my choice of colours and text, will look like a toddler invented it. Does anyone know any websites/books I can look at, or does anyone have any tips on the basics of non-toddler web design?

    Read the article

  • GitHub Integration in Windows Azure Web Site

    - by Shaun
    Microsoft has just announced an update for Windows Azure Web Site (a.k.a. WAWS). There are four major features added in WAWS: free scaling mode, GitHub integration, custom domains and multiple branches. Since I’m working in Node.js and would like to have my code in GitHub and deployed automatically to my Windows Azure Web Site whenever I sync my code, this feature is very good news to me. It’s very simple to establish the GitHub integration in WAWS. First we need a clean WAWS. In its dashboard page click “Set up Git publishing”. Currently WAWS doesn’t support changing the publish setting, so if you have an existing WAWS that is published through TFS or local Git you will have to create a new WAWS and set up Git publishing there. Then in the deployment page we can see that WAWS now supports three Git publishing modes:
    - Push my local files to Windows Azure: in this mode we create a new Git repository on the local machine and commit and publish our code to Windows Azure through Git commands or a GUI.
    - Deploy from my GitHub project: in this mode we have a Git repository on GitHub. Once we push our code to GitHub, Windows Azure will download the code and trigger a new deployment.
    - Deploy from my CodePlex project: similar to the previous one, but our code lives in a CodePlex repository.
    Now let’s go back to GitHub and create a new publish repository. Currently the WAWS GitHub integration only supports public repositories; private repository support will be available in several weeks. We can manage our repositories on the GitHub website, but as a Windows geek I prefer a GUI tool, so I opened GitHub for Windows, logged in with my GitHub account, selected the “github” category and clicked the “add” button to create a new repository on GitHub. You can download GitHub for Windows here. I specified the repository name, description and local path, and left “Keep this code private” unchecked. After a few seconds it created a new repository on GitHub and associated it with that folder on my local machine. We can find this new repository on the GitHub website, and in GitHub for Windows we can also find the local repository by selecting the “local” category. Next, we need to associate this repository with our WAWS. Back in the Windows Azure developer portal, open “Deploy from my GitHub project” in the deployment page and click the “Authorize Windows Azure” link. It brings up a new window on GitHub asking you to allow the Windows Azure application to access your repositories. After we click “Allow”, Windows Azure retrieves all my public GitHub repositories and lets me select the one I want to integrate with this WAWS. I selected the one I had just created in GitHub for Windows. And that’s all. We have completed the GitHub integration configuration. Now let’s have a try. In GitHub for Windows, right-click on the local repository and click “open in explorer”. Then I added a simple HTML file:

        <html>
        <head>
        </head>
        <body>
        <h1>
        I came from GitHub, WOW!
        </h1>
        </body>
        </html>

    Save it, go back to GitHub for Windows, commit this change and publish. This uploads our changes to GitHub, and Windows Azure detects the update and triggers a new deployment. If we go back to the Azure developer portal we can find the new deployment, and our commit message is shown as the deployment description as well. And here is the page deployed to WAWS. Hope this helps, Shaun. All documents and related graphics and code are provided "AS IS" without warranty of any kind.
Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.

    Read the article

  • Devfish Joe Healy in Fort Lauderdale - Cloud Computing and Azure - 03/11/2010 MSDN Tiki Hut

    - by Rainer
    Devfish Joe Healy, Brian Hitney, and Herve Rogero presented excellent sessions at today's MSDN Tiki Hut event about cloud computing and Azure. This was a developer-focused event, starting out with an overview of the structure and the platform, followed by working code samples running on the platform, and all the information developers need to get started on development of cloud applications. Participants had Q&A opportunities after each session and made good use of them. I am sure that a lot of developers will jump on the Azure train. Azure is on top of my dev project list after this great event! The platform offers endless opportunities for development and businesses. The cloud environment in general is safer, scales better, and is far more cost effective compared to running and maintaining your own data center. Posted: Rainer Habermann

    Read the article

  • The OTN Garage Blog Week in Review

    - by Rick Ramsey
    In case you missed the last few blogs on the OTN Garage (because somebody neglected to cross-post them here), here they are:
    - What Day Is It and Why Am I Wearing a Little Furry Skirt? - Oracle VM Templates, Oracle Linux, Wim Coekaerts, and jet lag.
    - A Real Cutting Edge - Oracle Sun blade systems architecture, Blade Clusters, and best practices.
    - Which Version of Solaris Were You Running When ... - Oracle Solaris Legacy Containers and the Voyager 1 Content Cluster.
    - Understanding the Local Boot Option in the Automatic Installer of Oracle Solaris 11 Express - Resources to help you understand this cool option.
    Rick - System Admin and Developer Community of the Oracle Technology Network

    Read the article

  • How to restore your production database without needing additional storage

    - by David Atkinson
    Production databases can get very large. This in itself is to be expected, but when a copy of the database is needed, the database must be restored, requiring additional and costly storage. For example, if you want to give each developer a full copy of your production server, you’ll need n times the storage cost for your n-developer team. The same is true for any test databases that are created during the course of your project lifecycle. If you’ve read my previous blog posts, you’ll be aware that I’ve been focusing on the database continuous integration theme. In my CI setup I create a “production”-equivalent database directly from its source control representation, and use this to test my upgrade scripts. Despite this being a perfectly valid and practical thing to do as part of a CI setup, it’s not the exact equivalent of running the upgrade script on a copy of the actual production database. So why shouldn’t I instead simply restore the most recent production backup as part of my CI process? There are two reasons why this would be impractical.
    1. My CI environment isn’t an exact copy of my production environment. Indeed, it would be in a perfect world, and this is strongly recommended as a good practice if you follow Jez Humble and David Farley’s “Continuous Delivery” teachings, but in practical terms it might not always be possible, especially where storage is concerned. It may just not be possible to restore a huge production database on the environment you’ve been allotted.
    2. It’s not just about the storage requirements; it’s also the time it takes to do the restore. The whole point of continuous integration is that you are alerted as early as possible whether the build (yes, the database upgrade script counts!) is broken. If I have to run an hour-long restore each time I commit a change to source control, I’m just not going to get the feedback quickly enough to react.
    So what’s the solution? Red Gate has a technology, SQL Virtual Restore, that is able to restore a database without using up additional storage. Although this sounds too good to be true, the explanation is quite simple (although I’m sure the technical implementation details under the hood are quite complex!). Instead of restoring the backup in the conventional sense, SQL Virtual Restore effectively mounts the backup using its HyperBac technology. It creates a data file and a log file (.vmdf and .vldf) that become the delta between the .bak file and the virtual database. This means that both read and write operations are permitted on a virtual database, as from SQL Server’s point of view it is no different from a conventional database. Instead of doubling the storage requirements upon a restore, there are no ‘duplicate’ storage requirements, other than the trivially small virtual log and data files. The benefit is magnified the more databases you mount to the same backup file. This technique could be used to provide a large development team a full development instance of a large production database. It is also incredibly easy to set up. Once SQL Virtual Restore is installed, you simply run a conventional RESTORE command to create the virtual database. This is what I have running as part of a nightly “release test” process triggered by my CI tool:

        RESTORE DATABASE WidgetProduction_Virtual
        FROM DISK=N'D:\VirtualDatabase\WidgetProduction.bak'
        WITH
            MOVE N'WidgetProduction' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_WidgetProduction_Virtual.vmdf',
            MOVE N'WidgetProduction_log' TO N'C:\WidgetWF\ProdBackup\WidgetProduction_log_WidgetProduction_Virtual.vldf',
            NORECOVERY, STATS=1, REPLACE
        GO
        RESTORE DATABASE WidgetProduction_Virtual WITH RECOVERY

    Note the only change from what you would do normally is the naming of the .vmdf and .vldf files. SQL Virtual Restore intercepts the restore by monitoring these extensions and applies its magic, ensuring the ‘virtual’ restore happens rather than the conventional storage-heavy restore. My automated release test then applies the upgrade scripts to the virtual production database and runs some validation tests, giving me confidence that were I to run this on production for real, all would go smoothly. For illustration, the original post shows my 8GB production database, its corresponding backup file, and the .vldf and .vmdf files, which represent the only additional storage used by the new database following the virtual restore. The beauty of this product is its simplicity. Once it is installed, the interaction with the backup and virtual database is exactly the same as before, as the clever stuff is being done at a lower level. SQL Virtual Restore can be downloaded as a fully functional 14-day trial. Technorati Tags: SQL Server

    Read the article

  • Can my ikmnet test results say something about the career choice I should make?

    - by Nicke
    I took two tests via ikmnet and scored 70% on SQL and 65% on Java. While not bad, this can be improved. The subskills I need to improve according to the test are interfaces and inheritance, compilation and deployment, flow control, the java.lang package and "Java Program Construction", and these topics seem rather broad to me. Rather than just learning by programming, would you advise me to take a certification, follow a course, or otherwise improve my skills? By the way, I enjoy Python more than Java, so should I market myself more as a Python programmer, or even for the role some companies advertise as systems analyst: a system developer with more technical writing, evaluating systems in cooperation with management rather than programming? Thank you for any comment and/or answer.

    Read the article

  • R2 and Idera SQL Safe (Freeware Edition)

    - by DavidWimbush
    Good news: the Freeware edition of Idera SQL Safe works on R2. You might not care, but I certainly do. Here's why: in September last year I started using Idera SQL Safe (the Freeware Edition) to get backup compression on my SQL 2005 servers. It seemed like a good idea at the time - it was free, and my backups ran much faster and took up much less disk space. I really thought I'd actually scored a free lunch. Until they discontinued the product. I was thinking about what to do when I heard that R2 Standard would include native backup compression, so I've just been keeping my fingers crossed since then. So I installed R2 Developer on my laptop, installed SQL Safe and kicked off a restore with it. No problem. Phew! Now I won't have to do a special, non-compressed backup and restore when we migrate.
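    For the record, a minimal sketch of the native equivalent I'm counting on after the migration (database name and paths are illustrative):

        -- Native backup compression, SQL Server 2008 R2 Standard and higher
        BACKUP DATABASE MyDatabase
        TO DISK = N'D:\Backups\MyDatabase.bak'
        WITH COMPRESSION, STATS = 10;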

    Read the article

  • “Napa” Development Tools for SharePoint 2013 and Office 2013

    - by Sahil Malik
    One of the biggest issues in getting started with SharePoint development is the 2091097 steps you need to go through, and the heavy-duty machine you need to invest in, to create a development environment for a SharePoint and Office developer. This is not unlike the fact that creating and running a production SharePoint farm can be extremely time-consuming. In my latest Code Magazine article, I describe how you can use the “Napa” Development Tools for SharePoint 2013 and Office 2013. These are also described in my latest book, “SharePoint 2013 - Planet of the Apps”, which is now available on Lulu.com. Read full article ....

    Read the article

  • Successful Hosted TFS Event at VISUG by Hassan Fadili at Microsoft Belgium

    - by hassanfadili
    On Tuesday, November 22nd, the VISUG user group hosted an event at Microsoft Belgium about hosted TFS, presented by Hassan Fadili (see http://www.visug.be/Eventdetails/tabid/95/EventId/48/Default.aspx). The event was very interactive, and as many as 60 people took part. The topic was build, release and deploy with TFS 2011 and MS Deploy. A combination of slides and demos was perfect to explain this common mechanism for developers. To learn more about this topic, check the earlier article published by Hassan Fadili for the Software Developer Network community at: http://www.sdn.nl/SDN/Artikelen/tabid/58/view/View/ArticleID/3199/Build-Release-and-Deploy-BRD-using-TFS2010-MS-Web-Deploy-and-WIX3X.aspx. If you have questions, suggestions or thoughts about this topic, feel free to contact me by e-mail: [email protected] and/or via Twitter: @HassanFad

    Read the article

  • Sending Big Files with WCF

    - by Sean Feldman
    I had to look into a project that submits large files to a WCF service. The implementation is based on data chunking. This is a good approach when your client and server are not both based on WCF, but on different technologies. The problem with something like this is that chunking (whether you wish it or not) complicates the overall solution. The alternative would be streaming. In a WCF-to-WCF scenario, this is a piece of cake. When the client is Java, it becomes a bit more challenging (has anyone implemented a Java client streaming data to a WCF service?). What I really liked about the .NET implementation with WCF is that sending header info along with the stream was dead simple, and from the developer's point of view it looked like it was all part of the DTO passed into the service:

        [ServiceContract]
        public interface IFileUpload
        {
            [OperationContract]
            void UploadFile(SendFileMessage message);
        }

    where SendFileMessage is:

        [MessageContract]
        public class SendFileMessage
        {
            [MessageBodyMember(Order = 1)]
            public Stream FileData;

            [MessageHeader(MustUnderstand = true)]
            public FileTransferInfo FileTransferInfo;
        }
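    For completeness, here is a minimal sketch of what the service side of this contract could look like (the service class, the FileName property on FileTransferInfo, and the target path are illustrative assumptions; the binding would also need streamed transfer mode for the body to actually stream):

        public class FileUploadService : IFileUpload
        {
            public void UploadFile(SendFileMessage message)
            {
                // The MessageHeader arrives up front, so the metadata is
                // available before the body stream has been read.
                var info = message.FileTransferInfo;

                // 'FileName' is a hypothetical property on FileTransferInfo,
                // used here purely for illustration.
                var path = System.IO.Path.Combine(@"C:\Uploads", info.FileName);

                using (var target = System.IO.File.Create(path))
                {
                    // Stream the body straight to disk (Stream.CopyTo, .NET 4+).
                    message.FileData.CopyTo(target);
                }
            }
        }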

    Read the article

  • Is OpenGL after C++ job-oriented?

    - by Ani
    First, my regards to all the programmers out there who have put in endless effort to become expert, wise, efficient and the best. Let me describe my situation. I have just graduated from an Electronics and Communication stream, though I have more interest in software development and hence have opted to become a software developer rather than an electronics engineer. I have learned C++ and wish to go deeper. I have started to learn OpenGL. Guide me on the following: Is OpenGL good to learn, and is it job-oriented? Should I learn something else rather than OpenGL after C++?

    Read the article

  • JavaScript Intellisense Improvements with VS 2010

    - by ScottGu
    This is the twentieth in a series of blog posts I’m doing on the upcoming VS 2010 and .NET 4 release.  Today’s blog post covers some of the nice improvements coming to JavaScript intellisense with VS 2010 and the free Visual Web Developer 2010 Express.  You’ll find with VS 2010 that JavaScript Intellisense loads much faster for large script files and with large libraries, and that it now provides statement completion support for more advanced scenarios compared to previous versions of Visual Studio. [In addition to blogging, I am also now using Twitter for quick updates and to share links. Follow me at: twitter.com/scottgu]

    Improved JavaScript Intellisense
    Providing Intellisense for a dynamic language like JavaScript is more involved than doing so for a statically typed language like VB or C#.  Correctly inferring the shape and structure of variables, methods, etc. is pretty much impossible without pseudo-executing the actual code itself, since JavaScript as a language is flexible enough to dynamically modify and morph these things at runtime.  VS 2010’s JavaScript code editor now has the smarts to perform this type of pseudo-code execution as you type, which is how its intellisense completion is kept accurate and complete.  Below is a simple walkthrough that shows off how rich and flexible it is with the final release.

    Scenario 1: Basic Type Inference
    When you declare a variable in JavaScript you do not have to declare its type.  Instead, the type of the variable is based on the value assigned to it.  Because VS 2010 pseudo-executes the code within the editor, it can dynamically infer the type of a variable and provide the appropriate code intellisense based on the value assigned to it. For example, notice below how VS 2010 provides statement completion for a string (because we assigned a string to the “foo” variable). If we later assign a numeric value to “foo”, the statement completion (after this assignment) automatically changes to provide intellisense for a number.

    Scenario 2: Intellisense When Manipulating Browser Objects
    It is pretty common with JavaScript to manipulate the DOM of a page, as well as work against browser objects available on the client.  Previous versions of Visual Studio would provide JavaScript statement completion against the standard browser objects, but didn’t provide much help with more advanced scenarios (like creating dynamic variables and methods).  VS 2010’s pseudo-execution of code within the editor now allows us to provide rich intellisense for a much broader set of scenarios. For example, below we are using the browser’s window object to create a global variable named “bar”.  Notice how we can now get intellisense (with correct type inference for a string) with VS 2010 when we later try and use it. When we assign the “bar” variable as a number (instead of as a string), the VS 2010 intellisense engine correctly infers its type and modifies statement completion appropriately to be that of a number instead.

    Scenario 3: Showing Off
    Because VS 2010 is pseudo-executing code within the editor, it is able to handle a bunch of scenarios (both practical and wacky) that you throw at it, and is still able to provide accurate type inference and intellisense. For example, below we are using a for-loop and the browser’s window object to dynamically create and name multiple dynamic variables (bar1, bar2, bar3…bar9).
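    The screenshots are missing from this excerpt, but the loop being described looks something like this (a sketch, not the post's exact code):

        // dynamically create and name global variables bar1..bar9
        // through the browser's window object
        for (var i = 1; i < 10; i++) {
            window["bar" + i] = "some value";
        }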
    Notice how the editor’s intellisense engine identifies and provides statement completion for them. Because variables added via the browser’s window object are also global variables, they now show up in the global variable intellisense drop-down as well. Better yet, type inference is still fully supported: if we assign a string to a dynamically named variable we will get type inference for a string, and if we assign a number we’ll get type inference for a number. Just for fun (and to show off!) we could adjust our for-loop to assign a string to even-numbered variables (bar2, bar4, bar6, etc.) and a number to odd-numbered variables (bar1, bar3, bar5, etc.). Notice above how we get statement completion for a string for the “bar2” variable, and below how for “bar1” we get statement completion for a number.

    This isn’t just a cool pet trick
    While the above example is a bit contrived, the approach of dynamically creating variables, methods and event handlers on the fly is pretty common with many JavaScript libraries. Many of the more popular libraries use these techniques to keep the size of script library downloads as small as possible. VS 2010’s support for parsing and pseudo-executing libraries that use these techniques ensures that you get better code Intellisense out of the box when programming against them.

    Summary
    Visual Studio 2010 (and the free Visual Web Developer 2010 Express) now provide much richer JavaScript intellisense support.  This support works with pretty much all popular JavaScript libraries.  It should help provide a much better development experience when coding client-side JavaScript and enabling AJAX scenarios within your ASP.NET applications. Hope this helps, Scott

    P.S. You can read my previous blog post on VS 2008’s JavaScript Intellisense to learn more about our previous JavaScript intellisense (and some of the scenarios it supported).  VS 2010 obviously supports all of the scenarios previously enabled with VS 2008.

    Read the article

  • Any Good Cocos2d Pause Menu Library

    - by Mahbubur R Aaman
    Background: From http://code.google.com/p/cocos2d-iphone/issues/detail?id=173: Scenes/Nodes don't support the CocosNodeOpacity protocol. From http://playsnackgames.com/blog/2011/09/cocos2d-tutorial-creating-a-reusable-pause-layer/: Cocos2d offers a simple method to pause and resume itself, but these methods stop the CCDirector (the class that manages most aspects of a Cocos2d app's lifetime) from running actions and lower the fps to 5 to conserve battery life. Related issues:
    http://www.cocos2d-iphone.org/forum/topic/4368
    http://www.cocos2d-iphone.org/forum/topic/151
    http://stackoverflow.com/questions/5852354/cocos2d-engine-pause-resume
    http://stackoverflow.com/questions/11878450/how-to-pause-a-layer-in-cocos2d-2-0
    Question: Is there any good Cocos2d pause menu library that solves these tricky issues? It would save game developers many hours.

    Read the article

  • Windows Phone Resources from //BUILD 2013 Conference by Lee Stott

    - by Nikita Polyakov
    Originally posted on: http://geekswithblogs.net/campuskoder/archive/2013/07/02/153320.aspx
    Lee Stott has a great summary blog post with all of the videos from the //BUILD 2013 conference that happened just last week. It’s handy because filtering to this event and finding Windows Phone sessions on Channel9 is not the best experience, and this is a great snapshot of all of the sessions you can view from the conference on one page. It also shows that Microsoft, although focused on Windows 8.1 at this event, still had a sizable presence of Windows Phone developer topics. Read the full blog post here: http://blogs.msdn.com/b/uk_faculty_connection/archive/2013/07/01/build-2013-windows-phone-resources.aspx

    Read the article

  • Screencasts introducing C++ AMP

    - by Daniel Moth
    It has been almost 2.5 years since I last recorded a screencast, and I had forgotten how time consuming they are to plan/record/edit/produce/publish, but at the same time so much fun to see the end result! So below are links to 4 screencasts to teach you C++ AMP basics from scratch (even if you class yourself as a .NET developer you'll be able to follow).
    Setup code - part 1
    array_view, extent, index - part 2
    parallel_for_each - part 3
    accelerator - part 4
    If you have comments/questions about what is shown in each video, please leave them at each video recording. If you have generic questions about C++ AMP, please ask in the C++ AMP MSDN forum. Comments about this post by Daniel Moth welcome at the original blog.
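    For a flavor of the basics those four parts cover, here is a minimal C++ AMP sketch (an illustrative sketch, not code from the screencasts) touching array_view, extent, index and parallel_for_each:

        #include <amp.h>
        #include <vector>
        using namespace concurrency;

        void square_all(std::vector<int>& v)
        {
            // wrap the host data so the accelerator can see it
            array_view<int, 1> av(static_cast<int>(v.size()), v);

            // run one lambda invocation per element, on the accelerator
            parallel_for_each(av.extent, [=](index<1> idx) restrict(amp)
            {
                av[idx] = av[idx] * av[idx];
            });

            av.synchronize(); // copy results back to the vector
        }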

    Read the article

  • Should we rename overloaded methods?

    - by Mik378
    Assume an interface containing these methods:

        Car find(long id);
        List<Car> find(String model);

    Is it better to rename them like this?

        Car findById(long id);
        List<Car> findByModel(String model);

    Indeed, any developer who uses this API won't need to look at the interface to know the possible arguments of the initial find() methods. So my question is more general: what is the benefit of using overloaded methods in code, since it reduces readability?
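    To make the readability point concrete, compare the call sites (a sketch; repo is a hypothetical instance of the interface):

        Car a = repo.find(42L);           // overloaded: you need to know the
        List<Car> b = repo.find("GT86");  // parameter types to tell these apart

        Car c = repo.findById(42L);             // renamed: the intent is
        List<Car> d = repo.findByModel("GT86"); // explicit at the call site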

    Read the article

  • Why do browsers leak memory?

    - by Dane Balia
    A colleague and I were speaking about browsers (using a browser control object in a project), and it appears as plain as day that all browsers (Firefox, Chrome, IE, Opera) display the same characteristic or side effect from their usage, that being 'leaking memory'. Can someone explain why that is the case? Surely, as with any form of code, there should be proper garbage collection? PS. I've read about some defensive patterns on why this can happen from a developer's perspective. I am aware of an article Crockford wrote on IE; but why is the problem symptomatic of every browser? Thanks
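    For context, the IE-specific pattern from Crockford's article is the DOM/closure circular reference, roughly this shape (an illustrative sketch):

        // The DOM node references the handler, and the handler's closure
        // references the node. Older IE refcounted DOM (COM) objects
        // separately from JavaScript objects, so this cycle was never
        // collected, leaking the node and everything it referenced.
        function attach(node) {
            node.onclick = function () {
                node.style.color = 'red'; // the closure keeps 'node' alive
            };
        }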

    Read the article

  • Big Data – Evolution of Big Data – Day 3 of 21

    - by Pinal Dave
    In yesterday’s blog post we answered what Big Data is. Today we will understand why and how the evolution of Big Data has happened. Though the answer is very simple, I would like to tell it in the form of a history lesson.

    Data in Flat Files
    In earlier days data was stored in flat files, and there was no structure in a flat file. If any data had to be retrieved from the flat file, it was a project in itself. There was no possibility of retrieving the data efficiently, and data integrity was just a term discussed without any modeling or structure around it. Databases residing in flat files had more issues than we would like to discuss in today’s world. It was more like a nightmare when there was any data processing involved in the application. Though applications developed at that time were also not that advanced, the need for data was always there, and there was always a need for proper data management.

    Edgar F. Codd and the 12 Rules
    Edgar Frank Codd was a British computer scientist who, while working for IBM, invented the relational model for database management, the theoretical basis for relational databases. He presented 12 rules for the relational database, and suddenly the chaotic world of the database seemed to find discipline in the rules. The relational database was a promised land for all the unstructured-database users. It brought relationships between data as well as improved performance of data retrieval. The database world immediately saw a major transformation, and every single vendor and database user suddenly started to adopt relational database models.

    Relational Database Management Systems
    Since Edgar F. Codd proposed his 12 rules for the RDBMS, many different vendors started to build applications and tools to support relationships between data. This was indeed a learning curve for many developers who had never before worked with database modeling. However, as time passed, pretty much everybody accepted the relational model and started to evolve products which perform their best within the boundaries of the RDBMS concepts. This was the best era for databases, and it gave the world extreme experts as well as some of the best products. The entity-relationship model also evolved at the same time. In software engineering, an entity-relationship model (ER model) is a data model for describing a database in an abstract way.

    Enormous Data Growth
    Well, everything was going fine with the RDBMS in the database world. As there were no major challenges, the adoption of RDBMS applications and tools was pretty much universal. There was a race at times to make the developer’s life easier with RDBMS management tools. Due to their extreme popularity and ease of use, pretty much all data was stored in RDBMS systems. New-age applications were built and social media took the world by storm. Every organization was feeling pressure to provide the best experience for their users based on the data they had. While this was all going on, data was growing in pretty much every organization and application.

    Data Warehousing
    The enormous data growth now presented a big challenge for the organizations that wanted to build intelligent systems based on the data and provide a near-real-time superior user experience to their customers. Various organizations immediately started building data warehousing solutions where the data was stored and processed. The trend of business intelligence became an everyday need. Data was received from the transaction systems and processed overnight to build intelligent reports from it. Though this was a great solution, it had its own set of challenges. The relational database model and data warehousing concepts are all built with traditional relational database modeling in mind, and they still face many challenges when unstructured data is present.

    Interesting Challenge
    Every organization had the expertise to manage structured data, but the world had already changed to unstructured data. There was intelligence in the videos, photos, SMS, text, social media messages and various other data sources. All of these now needed to be brought to a single platform to build a uniform system which does what businesses need. The way we do business has also changed. There was a time when users only got the features technology supported; now users ask for a feature and the technology is built to support it. The need for real-time intelligence from fast-paced data flows is now becoming a necessity. Large amounts (Volume) of many different kinds (Variety) of high-speed data (Velocity): these are the properties of the data. The traditional database system has limits in resolving the challenges this new kind of data presents. Hence the need for Big Data science. We need innovation in how we handle and manage data. We need creative ways to capture data and present it to users. Big Data is reality!

    Tomorrow
    In tomorrow’s blog post we will discuss the basics of Big Data architecture.

    Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Big Data, PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Tips and Tricks, T SQL

    Read the article

  • References about Game Engine Architecture in AAA Games

    - by sharethis
    In recent weeks I focused on game engine architecture and learned a lot about different approaches, like component-based, data-driven, and so on. I used them in test applications and understand their intention, but none of them looks like the holy grail. So I wonder how major games in the industry ("AAA games") solve different architecture problems. But I noticed that there are barely any references about game engine architecture out there. Do you know any resources on the game engine architecture of major game titles like Battlefield, Call of Duty, Crysis, Skyrim, and so on? It doesn't matter if it is an article by a game developer, a wiki page or an entire book. I read this related popular question: Good resources for learning about game architecture? But it is focused on learning books rather than approaches in the industry. Hopefully the breadth of our community can gather some useful information! Thanks a lot! Edit: This question is focused on, but not restricted to, first-person games.

    Read the article
