Search Results

Search found 35094 results on 1404 pages for 'post build'.


  • Notes from a short presentation on NodeJs

    - by Aligned
    Originally posted on: http://geekswithblogs.net/Aligned/archive/2014/05/30/notes-from-a-short-presentation-on-nodejs.aspx

    I volunteered to give a short 30-minute presentation on NodeJs at a work lunch and learn. With my limited experience, I see Node as a great tool for build-process improvement, scaffolding with Yeoman, and running tests with Karma. I haven't looked into using it as a full server or development stack. I guess I'm too stuck on IIS and Visual Studio :-). Here are my notes. They aren't very well formatted, but I wanted to share them anyway.

    What is it?
    "Node.js is a platform built on Chrome's JavaScript runtime for easily building fast, scalable network applications. Node.js uses an event-driven, non-blocking I/O model that makes it lightweight and efficient, perfect for data-intensive real-time applications that run across distributed devices."

    Why should you be interested?
    - Another popular tool that can help you get the job done
    - You can use the command prompt!
    - Can be run at build or release time to automate tasks

    What are some uses?
    - https://www.npmjs.org/ - NuGet for Node packages
    - http://bower.io/ - NuGet for UI JavaScript libraries (jQuery, Bootstrap, Angular, etc.)
    - http://yeoman.io/ - "Our workflow is comprised of three tools for improving your productivity and satisfaction when building a web app: yo (the scaffolding tool), grunt (the build tool) and bower (for package management)." Yeoman asks which components you want.
      - Alternative: http://joakimbeng.eu01.aws.af.cm/slush-replacing-yeoman-with-gulp/
      - https://www.npmjs.org/package/generator-cg-angular - PhantomJS, LESS
      - Git is needed for bower (http://git-scm.com/): run the installer in Windows before you can use bower, select "Run Git from the Windows Command Prompt" in the installer, and note it requires a reboot. See http://stackoverflow.com/questions/20069297/bower-git-not-in-the-path-error

        npm install -g git
        npm install -g yo
        npm install -g generator-cg-angular
        mkdir myapp
        cd myapp
        yo cg-angular
        npm install -g bower
        npm install -g grunt-cli
        yo bower
        grunt serve
        grunt test
        grunt build

      - There are many generators; generator-angular is another one. I like the NuGet HotTowel-Angular from John Papa myself.
      - Needed IIS Node for Express (prompt from WebMatrix)
    - Karma - bat file to start up Karma (see below)
    - Image compression - https://www.npmjs.org/search?q=optimize+images, https://github.com/heldr/node-smushit - do it from the command line
    - LESS compiling
    - JS and CSS combining and minification at build with Gulp for RequireJS apps
    - Quick lightweight HTTP server - "Express"

    Build pipeline with Grunt or Gulp
    - http://www.johnpapa.net/gulp-and-grunt-at-anglebrackets/ - Gulp is the newer one, improved over Grunt. Supposed to be easier to use, but Grunt is more established.
    - https://github.com/johnpapa/ng-demos/tree/master/grunt-gulp
    - https://github.com/assetgraph/assetgraph-builder - does a lot of the minimizing, combining, image optimization, etc. using Node. Looks interesting.

    Links
    - http://nodejs.org
    - http://nodeschool.io/
    - http://sub.watchmecode.net/getting-started-with-nodejs-installing-and-writing-your-first-code/
    - https://stormpath.com/blog/build-a-killer-node-dot-js-client-for-your-rest-plus-json-api/
    - https://codio.com/
    - http://www.hanselman.com/blog/ItsJustASoftwareIssueEdgejsBringsNodeAndNETTogetherOnThreePlatforms.aspx

    Run unit tests - Karma in msBuild, karma-start.bat:

    @echo off
    cd %~dp0\..
    REM 604800 is to make sure we only update once every 7 days
    call npm install --cache-min 604800 -g grunt-cli
    call npm install --cache-min 604800
    call npm install --cache-min 604800 -g karma-cli
    karma start UnitTests\karma.conf.js
    REM karma start UnitTests\karma.conf.js --single-run
    REM see karma-start.bat and karma.conf.js
    REM jsHint comes from NuGet
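    The karma.conf.js that the bat file points at isn't shown in the notes. A minimal sketch of what such a config might look like (the framework choice and file paths are assumptions, not the author's actual settings):

    // UnitTests/karma.conf.js - hypothetical minimal Karma configuration
    module.exports = function (config) {
      config.set({
        basePath: '../',                 // resolve paths relative to the project root
        frameworks: ['jasmine'],         // assumed test framework (needs karma-jasmine)
        files: [
          'Scripts/app/**/*.js',         // application code (assumed layout)
          'UnitTests/**/*.spec.js'       // test files (assumed naming)
        ],
        browsers: ['PhantomJS'],         // headless browser for CI builds
        singleRun: false                 // keep watching; pass --single-run in CI
      });
    };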

    Read the article

  • Solving Big Problems with Oracle R Enterprise, Part II

    - by dbayard
    Part II – Solving Big Problems with Oracle R Enterprise

    In the first post in this series (see https://blogs.oracle.com/R/entry/solving_big_problems_with_oracle), we showed how you can use R to perform historical rate of return calculations against investment data sourced from a spreadsheet. We demonstrated the calculations against sample data for a small set of accounts. While this worked fine, in the real world the problem is much bigger, because the amount of data is much bigger. So much bigger that our approach in the previous post won't scale to meet the real-world needs.

    From our previous post, here are the challenges we need to conquer:
    - The actual data that needs to be used lives in a database, not in a spreadsheet
    - The actual data is much, much bigger: too big to fit into the normal R memory space and too big to want to move across the network
    - The overall process needs to run fast, much faster than a single processor
    - The actual data needs to be kept secure, another reason not to move it from the database and across the network
    - The process of calculating the IRR needs to be integrated with other database ETL activities, so that IRRs can be calculated as part of the data warehouse refresh processes

    In this post, we will show how we moved from the sample data environment to working with full-scale data. This post is based on actual work we did for a financial services customer during a recent proof-of-concept.

    Getting started with the Database

    At this point, we have some sample data and our IRR function. We were at a similar point in our customer proof-of-concept exercise: we had sample data but we did not have the full customer data yet. So our database was empty. But this was easily rectified by leveraging the transparency features of Oracle R Enterprise (see https://blogs.oracle.com/R/entry/analyzing_big_data_using_the). The following code shows how we took our sample data SimpleMWRRData and easily turned it into a new Oracle database table called IRR_DATA via ore.create(). The code also shows how we can access the database table IRR_DATA as if it were a normal R data.frame named IRR_DATA. If we go to sql*plus, we can also check out our new IRR_DATA table. At this point, we have our sample data loaded in the database as a normal Oracle table called IRR_DATA, so we proceeded to test our R function against database data. As our first test, we retrieved the data for a single account from the IRR_DATA table, pulled it into local R memory, then called our IRR function. This worked. No SQL coding required!

    Going from Crawling to Walking

    Now that we have shown our R code working with database-resident data for a single account, we wanted to experiment with doing this for multiple accounts. In other words, we wanted to implement the split-apply-combine technique we discussed in the first post in this series. Fortunately, Oracle R Enterprise provides a very scalable way to do this with a function called ore.groupApply(). You can read more about ore.groupApply() here: https://blogs.oracle.com/R/entry/analyzing_big_data_using_the1

    Here is an example of how we ask ORE to take our IRR_DATA table in the database, split it by the ACCOUNT column, apply a function that calls our SimpleMWRR() calculation, and then combine the results. (If you are following along at home, be sure to have installed our myIRR package on your database server via "R CMD INSTALL myIRR".)
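    The code itself appeared as screenshots in the original post and did not survive here. A rough sketch of what the ore.create() and ore.groupApply() calls just described might look like follows; the table, column, package, and function names come from the prose, while the exact arguments are assumptions:

    # Assumes an Oracle R Enterprise session is already connected, e.g.:
    # library(ORE); ore.connect(user, sid, host, password, all = TRUE)

    # Push the sample data.frame into Oracle as the table IRR_DATA
    ore.create(SimpleMWRRData, table = "IRR_DATA")

    # IRR_DATA is now visible as an ore.frame that behaves like a data.frame
    head(IRR_DATA)

    # Split IRR_DATA by ACCOUNT inside the database, apply our function to each
    # group in an embedded R engine on the server, and combine the results
    results <- ore.groupApply(
      IRR_DATA,
      INDEX = IRR_DATA$ACCOUNT,
      FUN   = function(dat) { library(myIRR); SimpleMWRR(dat) }
    )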
    The interesting thing about ore.groupApply is that the calculation is not actually performed in the desktop R environment from which I am running it. What actually happens is that ore.groupApply uses the Oracle database to perform the work. The Oracle database is what actually splits the IRR_DATA table by ACCOUNT. The database then takes the data for each account and sends it to an embedded R engine running on the database server to apply our R function, and finally combines all the individual results from the calls to the R function. This is significant because the embedded R engine only needs to deal with the data for a single account at a time. Whether we have 20 accounts or 1 million accounts or more, the R engine that performs the calculation does not care. Given that normal R has a finite amount of memory to hold data, the ore.groupApply approach overcomes the R memory scalability problem, since we only need to fit the data from a single account in R memory (not all of the data for all of the accounts). Additionally, the IRR_DATA does not need to be sent from the database to my desktop R program. Even though I am invoking ore.groupApply from my desktop R program, because the actual SimpleMWRR calculation is run by the embedded R engine on the database server, the IRR_DATA never leaves the database server. This is both a performance benefit, because network transmission of large amounts of data takes time, and a security benefit, because it is harder to protect private data once you start shipping it around your intranet. Another benefit, which we will discuss in a few paragraphs, is the ability to leverage Oracle database parallelism to run these calculations for dozens of accounts at once.

    From Walking to Running

    ore.groupApply is rather nice, but it still has the drawback that I run it from a desktop R instance. This is not ideal for integrating into typical operational processes like nightly data warehouse refreshes or monthly statement generation. But this is not an issue for ORE: Oracle R Enterprise lets us run this from the database using regular SQL, which is easily integrated into standard operations. That is extremely exciting, and it is the way we actually did these calculations in the customer proof.

    Oracle R Enterprise provides a SQL equivalent to ore.groupApply, which it refers to as "rqGroupEval". To use rqGroupEval via SQL, there is a bit of simple setup needed. Basically, the Oracle database needs to know the structure of the input table and the grouping column, which we are able to define using the database's pipelined table function mechanisms. Here is the setup script: At this point, our initial setup of rqGroupEval is done for the IRR_DATA table. The next step is to define our R function to the database. We do that via a call to ORE's rqScriptCreate. Now we can test it. The SQL you use to run rqGroupEval uses the Oracle database pipelined table function syntax. The first argument to irr_dataGroupEval is a cursor defining our input; you can add additional where clauses and subqueries to this cursor as appropriate. The second argument is any additional inputs to the R function. The third argument is the text of a dummy select statement, which the database uses to identify the columns and datatypes to expect the R function to return. The fourth argument is the column of the input table to split/group by.
    The final argument is the name of the R function as you defined it when you called rqScriptCreate().

    The Real-World Results

    In our real customer proof-of-concept, we had more sophisticated calculation requirements than shown in this simplified blog example. For instance, we had to perform the rate of return calculations for 5 separate time periods, so the R code was enhanced to do so. In addition, some accounts needed a time-weighted rate of return to be calculated, so we extended our approach and added an R function to do that. And finally, there were also a few more real-world data irregularities that we needed to account for, so we added logic to our R functions to deal with those exceptions. For the full-scale customer test, we loaded the customer data onto a half-rack Exadata X2-2 Database Machine. As our half-rack had 48 physical cores (and 96 threads if you consider hyperthreading), we wanted to take advantage of that CPU horsepower to speed up our calculations. To do so with ORE is as simple as leveraging the Oracle database parallel query features. Let's look at the SQL used in the customer proof: notice that we use a parallel hint on the cursor that is the input to our rqGroupEval function. That is all we need to do to enable Oracle to use parallel R engines.

    Here are a few screenshots of what this SQL looked like in the Real-Time SQL Monitor when we ran it during the proof of concept. From them, you can notice a few things (numbers 1 through 5 correspond with numbers highlighted on the images; you may need to right-click the images and view them full-screen to see everything):
    1. The SQL completed in 110 seconds (1.8 minutes)
    2. We calculated rates of return for 5 time periods for each of 911k accounts (the number of actual rows returned by the IRRSTAGEGROUPEVAL operation)
    3. We accessed 103m rows of detailed cash flow/market value data (the number of actual rows returned by the IRR_STAGE2 operation)
    4. We ran with 72 degrees of parallelism spread across 4 database servers
    5. Most of our 110 seconds was spent in the "External Procedure call" event

    On average, we performed 8,200 executions of our R function per second (911k accounts / 110s). On average, each execution was passed 110 rows of data (103m detail rows / 911k accounts). On average, we did 41,000 single-time-period rate of return calculations per second (each of the 8,200 executions of our R function did rate of return calculations for 5 time periods). And on average, we processed over 900,000 rows of database data in R per second (103m detail rows / 110s).

    R + Oracle R Enterprise: Best of R + Best of Oracle Database

    This blog post series started by describing a real customer problem: how to perform a lot of calculations on a lot of data in a short period of time. While standard R proved to be a very good fit for writing the necessary calculations, the challenge of working with a lot of data in a short period of time remained. This blog post series showed how Oracle R Enterprise enables R to be used in conjunction with the Oracle Database to overcome the data volume and performance issues (as well as simplifying the operations and security issues). It also showed that we could calculate 5 time periods of rates of return for almost a million individual accounts in less than 2 minutes.
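    The setup script, the rqScriptCreate call, and the final SQL were all screenshots in the original post. Based on the rqGroupEval pattern Oracle documents for ORE, the plumbing described above might look roughly like the sketch below; every object name, the column list, and the parallel degree are assumptions, not the customer's actual code:

    -- Declare the shape of the input: a REF CURSOR over IRR_DATA
    CREATE OR REPLACE PACKAGE irrPkg AS
      TYPE cur IS REF CURSOR RETURN IRR_DATA%ROWTYPE;
    END irrPkg;
    /
    -- Pipelined wrapper that lets SQL invoke the embedded R engine per group
    CREATE OR REPLACE FUNCTION irr_dataGroupEval(
      inp_cur irrPkg.cur,       -- input rows
      par_cur SYS_REFCURSOR,    -- additional parameters for the R function
      out_qry VARCHAR2,         -- dummy select describing the output columns
      grp_col VARCHAR2,         -- column to split/group by
      exp_txt CLOB)             -- name of the registered R script
    RETURN SYS.AnyDataSet PIPELINED
      PARALLEL_ENABLE (PARTITION inp_cur BY HASH (ACCOUNT))
      CLUSTER inp_cur BY (ACCOUNT)
      USING rqGroupEvalImpl;
    /
    -- Register the R function under a name the SQL can reference
    BEGIN
      sys.rqScriptCreate('myIRRscript',
        'function(dat) { library(myIRR); SimpleMWRR(dat) }');
    END;
    /
    -- Run it, with a parallel hint on the input cursor as described above
    SELECT *
      FROM TABLE(irr_dataGroupEval(
             CURSOR(SELECT /*+ PARALLEL(t, 72) */ * FROM IRR_DATA t),
             NULL,
             'SELECT ACCOUNT, 1 AS IRR FROM IRR_DATA',
             'ACCOUNT',
             'myIRRscript'));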
In a future post, we will take the same R function and show how Oracle R Connector for Hadoop can be used in the Hadoop world. In that next post, instead of having our data in an Oracle database, our data will live in Hadoop, and we will show how to use the Oracle R Connector for Hadoop and the other Oracle Big Data Connectors to move data easily between Hadoop, R, and the Oracle Database.

    Read the article

  • Leveraging Code in Ever Bigger Games

    - by ashes999
    Summary: I continually build complex engines and libraries within a single platform and technology so that I can build increasingly bigger and better games. How do I continue this when development crosses into different platforms? If I switch platforms, how do I leverage past code and experience?

    Games are hard to build. Big games are even harder to build. I've decided that to be able to make big games, I need to start building smaller games, and building up an asset base of code, assets (graphics, sounds), tools, and most importantly, game engines, so that I can eventually get there. One game at a time. Let me give an analogy. To build an MMO 3D RPG, I would approach this by building and releasing small games with increasingly more features. This could entail, for example:
    - A simple 2D game
    - A tile-based game
    - A game with RPG elements (items, equipment, monsters, battle)
    - A full-fledged RPG
    - A 3D RPG

    The problem now is that if I have to change platforms or tools, I don't know how to leverage past code-bases (and experience) to start with a mature product. Right now, I'm writing Silverlight (FlatRedBall) games. Let's say I stick with this for ten years, and then suddenly decide to write a PS6 game, which is in a different programming language entirely. Granted, I have ten years of game-development experience (and correspondingly ten years of professional software development experience from my day job) to back me up. But I would still like some way to transplant that 2D RPG engine into the new programming language, or else leverage it somehow. Is this even possible? What are my options?

    Read the article

  • Hi Availability blog posts on TechNet UK

    - by Testas
    Back in April I started a blog series on SQL Server High Availability BI. These are currently hosted on TechNet UK:

    Part I: http://blogs.technet.com/b/uktechnet/archive/2012/05/03/guest-post-seven-worlds-will-collide-high-availability-bi-is-not-such-a-distant-sun.aspx
    Part II: http://blogs.technet.com/b/uktechnet/archive/2012/05/30/guest-post-providing-the-foundation-for-a-high-availability-bi-infrastructure.aspx
    Part III: http://blogs.technet.com/b/uktechnet/archive/2012/07/16/guest-post-part-3-highly-available-bi-me-myself-and-i.aspx

    The final part will be released via TechNet in a couple of weeks.

    Thanks, Chris

    Read the article

  • Redirect anything within /attachments/ directory up one level but also removing anything after /attachments/

    - by Rich Staats
    WordPress creates an attachment page for any image uploaded through the post editor and "attached" to that post. After migrating a site, those attachment pages no longer exist, and we now have around 1000 links pointed at 404s. So I was looking for a way to redirect any URL that has /attachment/ in its string back up one level (which happens to be the post page). So, for instance: http://mysite.com/2012/news/blog-post-title/attachment/image-page/ (which doesn't exist) will go to http://mysite.com/2012/news/blog-post-title/ (which does exist). Along with redirecting up one level, I also need to remove anything after /attachment/ (in this case, "image-page"). Any suggestions? Thanks in advance.
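    One way to do this, assuming an Apache server where mod_rewrite and .htaccess overrides are enabled (typical for WordPress hosts, but an assumption here), is a single rewrite rule that captures everything before /attachment/ and 301-redirects to it, discarding the rest:

    # .htaccess - redirect any URL containing /attachment/ up one level
    RewriteEngine On
    RewriteRule ^(.*/)attachment(/.*)?$ /$1 [R=301,L]

    With this rule, /2012/news/blog-post-title/attachment/image-page/ captures "2012/news/blog-post-title/" into $1 and redirects to /2012/news/blog-post-title/, which covers all of the dead links in one shot.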

    Read the article

  • Daily Blog Archives and Duplicate Content

    - by nemmy
    A few weeks back I realised that my blog software was creating daily post archives, which basically resulted in duplicate content, especially if I only had one post a day. The situation is something like this:

    www.sitename.com/blog/archives/2013/06/01 - daily archive for 1 June 2013
    www.sitename.com/blog/archives/2013/06/my-post-name.html

    So here we have two pages that are basically identical, except the daily archive has some meaningless title like "Daily Archive for 1 June 2013". And I have no control over which content Google decides is the primary content. It's quite possible (and likely) that the daily archive could be the "primary" content and the actual post itself the "duplicate". Once I realised it was doing this, I modified the daily archive template to include <meta name="robots" content="noindex">. Here we are a few weeks later and I still see some daily archives coming up in Google search results. I realise some of those deep pages might not be crawled yet, but I am worried that the original post (which should be the PRIMARY content) has been marked duplicate content by Google. Now that I've noindexed the daily archives, I might end up with no indexed content AND the original articles still flagged as duplicates, and nothing will show up in search at all. Have I screwed myself here, or is there a way out?

    Read the article

  • T-SQL User-Defined Functions: the good, the bad, and the ugly (part 2)

    - by Hugo Kornelis
    In a previous blog post , I demonstrated just how much you can hurt your performance by encapsulating expressions and computations in a user-defined function (UDF). I focused on scalar functions that didn’t include any data access. In this post, I will complete the discussion on scalar UDFs by covering the effect of data access in a scalar UDF. Note that, like the previous post, this all applies to T-SQL user-defined functions only. SQL Server also supports CLR user-defined functions (written in...(read more)
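    To make the warning concrete before the jump: the shape of the pattern the post dissects is a scalar UDF that hides data access, called once per row, versus the set-based equivalent the optimizer can plan as a single join. The table and column names below are invented for illustration, not taken from the article:

    -- A scalar UDF with data access: executed once for every row of the outer query
    CREATE FUNCTION dbo.GetCustomerTotal (@CustomerID int)
    RETURNS money
    AS
    BEGIN
        RETURN (SELECT SUM(TotalDue)
                  FROM dbo.Orders
                 WHERE CustomerID = @CustomerID);
    END;
    GO

    -- Hides one lookup per customer row:
    SELECT CustomerID, dbo.GetCustomerTotal(CustomerID) AS Total
      FROM dbo.Customers;

    -- The set-based equivalent, which the optimizer can handle as one join:
    SELECT c.CustomerID, SUM(o.TotalDue) AS Total
      FROM dbo.Customers AS c
      LEFT JOIN dbo.Orders AS o ON o.CustomerID = c.CustomerID
     GROUP BY c.CustomerID;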

    Read the article

  • Grub menu will not show the first time I try to boot my ubuntu server 12.04 after it is shutdown for a long time

    - by user211477
    I am running into a booting issue after installing Ubuntu Server 12.04 LTS. Following are the symptoms of the problem.

    SYSTEM DESCRIPTION:
    - Dual core AMD Athlon 64
    - 3 disks: two SATA (one of which is an SSD) and one PATA
    - Using LVM for disk partition management; /boot is not under LVM, the rest of the partitions are; / is on the SSD
    - BIOS boot sequence is correct and points to the disk with /boot, and the boot loader is installed on this disk

    SYMPTOMS:
    1. POST messages
    2. Blinking cursor on first line, then moves to second line
    3. Screen flickers, then becomes black
    4. Everything is unresponsive; hard reboot
    5. POST messages will not show up on screen; monitor displays a powersave message
    6. Force shutdown machine again
    7. Shut off power to machine for a few minutes
    8. Restart machine
    9. POST messages show up
    10. Grub menu shows up
    11. Ubuntu Server 12.04 boots normally
    12. From now on, Ubuntu Server boots normally until the machine is shut down for a long time (for example, 30 mins)
    13. Repeat steps 1 through 13 once the machine is started after a long time

    WHAT DID I TRY? I read several posts and have tried: radeon.modeset=0, setting the gfxmode, edd=0, nolacpi, boot-repair. Nothing seems to work. In my search I did see only one post with this same symptom; unfortunately, I am not able to locate that post anymore. The interesting fact is that with this same machine configuration, if I install Ubuntu Desktop 12.04 then everything works fine. Any help will be appreciated.

    Read the article

  • Why does python easy install give me "permission denied" errors?

    - by Golden Sinha
    When I try to install programs in Ubuntu 12.04 it shows errors.

    Program 1:

    home@home-Compaq-610:~/Desktop$ python setup.py install
    running install
    running build
    running build_py
    creating build
    creating build/lib.linux-i686-2.7
    copying Calculator.py -> build/lib.linux-i686-2.7
    running install_lib
    copying build/lib.linux-i686-2.7/Calculator.py -> /usr/local/lib/python2.7/dist-packages
    error: /usr/local/lib/python2.7/dist-packages/Calculator.py: Permission denied

    Program 2:

    home@home-Compaq-610:~/Desktop$ sudo chmod +x Moto.bin
    [sudo] password for home:
    home@home-Compaq-610:~/Desktop$

    It shows like this, but it does not install the program.

    Program 3:

    home@home-Compaq-610:~/Desktop$ python setup.py install
    [ERROR] wxPython2.8 is required.

    How to install wxPython2.8? Please tell. If I try to install a program using easy_install it shows this:

    home@home-Compaq-610:~/Desktop$ easy_install editra
    error: can't create or remove files in install directory

    The following error occurred while trying to add or remove files in the installation directory:

        [Errno 13] Permission denied: '/usr/local/lib/python2.7/dist-packages/test-easy-install-6778.pth'

    The installation directory you specified (via --install-dir, --prefix, or the distutils default setting) was:

        /usr/local/lib/python2.7/dist-packages/

    Perhaps your account does not have write access to this directory? If the installation directory is a system-owned directory, you may need to sign in as the administrator or "root" account. If you do not have administrative access to this machine, you may wish to choose a different installation directory, preferably one that is listed in your PYTHONPATH environment variable. For information on other options, you may wish to consult the documentation at: http://packages.python.org/distribute/easy_install.html Please make the appropriate changes for your system and try again.

    Please help me. Please tell me how to install programs.

    Read the article

  • Predicate delegate in C#

    - by Jalpesh P. Vadgama
    I am writing a few posts on the different types of delegates, and this post is part of that series. In this post I am going to write about the Predicate delegate, which is available from C# 2.0. Following is the list of posts I have written about delegates:
    - Delegates in C#
    - Multicast delegates in C#
    - Func delegate in C#
    - Action delegate in C#

    Predicate delegate in C#: As per MSDN, the predicate delegate is a pointer to a function that returns true or false and takes a generic type as its argument. It has the following signature: Predicate<T>, where T is any generic type, and this delegate always returns a Boolean value. The most common use of a predicate delegate is searching items in an array or list. So let's take a simple example. Following is the code for that. Read More
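    The post's own sample sits behind the "Read More" link. A minimal sketch of the typical Predicate<T> usage it describes, searching a list, looks like this (the sample data is invented):

    using System;
    using System.Collections.Generic;

    class Program
    {
        static void Main()
        {
            var numbers = new List<int> { 1, 7, 42, 99 };

            // Predicate<int>: takes an int, returns true or false
            Predicate<int> isBig = delegate(int n) { return n > 10; };

            // List<T>.Find and FindAll accept a Predicate<T>
            int first = numbers.Find(isBig);           // 42
            List<int> big = numbers.FindAll(isBig);    // 42, 99

            Console.WriteLine(first);
            Console.WriteLine(string.Join(", ", big));
        }
    }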

    Read the article

  • CodePlex Daily Summary for Friday, February 18, 2011

    CodePlex Daily Summary for Friday, February 18, 2011

    Popular Releases

    - Catel - WPF and Silverlight MVVM library: 1.2: Catel history: (+) Added (*) Changed (-) Removed (x) Error / bug (fix). For more information about issues or new feature requests, please visit: http://catel.codeplex.com Version 1.2, release date 2011/02/17. Added/fixed: (+) DataObjectBase now supports Isolated Storage out of the box: Person.Save(myStream) stores a whole object graph in Silverlight (+) DataObjectBase can now be converted to Json via Person.ToJson(); (+)...
    - Game Files Open - Map Editor: Game Files Open - Map Editor v1.0.0.0 Beta: Game Files Open - Map Editor beta v1.0.0.0
    - Image.Viewer: 2011: First version of 2011
    - Silverlight Toolkit: Silverlight for Windows Phone Toolkit - Feb 2011: Silverlight for Windows Phone Toolkit offers developers additional controls for Windows Phone application development, designed to match the rich user experience of the Windows Phone 7. Suggestions? Features? Questions? Ask questions in the Create.msdn.com forum. Add bugs or feature requests to the Issue Tracker. Help us shape the Silverlight Toolkit with your feedback! Please clearly indicate that the work items and issues are for the phone t...
    - VsTortoise - a TortoiseSVN add-in for Microsoft Visual Studio: VsTortoise Build 29 Beta: Note: This release does not work with custom VsTortoise toolbars. These get removed every time you shut down Visual Studio. (#7940) New: Added VsTortoise Solution Explorer integration for Web Project Folder, Web Folder and Web Item. Fix: TortoiseProc was called with invalid parameters when using TSVN 1.4.x or older #7338 (thanks psifive). Fix: Add-in does not work when "TortoiseSVN/bin" is not added to PATH environment variable #7357. Fix: Missing error message when ...
    - Sense/Net CMS - Enterprise Content Management: SenseNet 6.0.3 Community Edition: We are happy to introduce the latest version of Sense/Net with integrated ECM Workflow capabilities! In the past weeks we have been working hard to migrate the product to .Net 4 and include a workflow framework in Sense/Net built upon Windows Workflow Foundation 4. This brand new feature enables developers to define and develop workflows, and supports users when building and setting up complicated business processes involving content creation and response...
    - thinktecture WSCF.blue: WSCF.blue V1 Update (1.0.11): Features: Added a new option that allows properties on data contract types to be marked as virtual. Bug fixes: Fixed a bug caused by certain project properties not being available on Web Service Software Factory projects. Fixed a bug that could result in the WrapperName value of the MessageContractAttribute being incorrect when the Adjust Casing option is used. The menu item code now caters for CommandBar instances that are not available; for example the Web Item CommandBar does not exist ...
    - Document.Editor: 2011.5: What's new for Document.Editor 2011.5: new export to email, new export to image, new document background color, improved tooltips, minor bug fixes, improvements and speed-ups
    - Terminals: Version 2 - RC1: The "Clean Install" will overwrite your log4net configuration (if you have one). If you run in a Portable Environment, you can use the "Clean Install" and target your portable folder. Tested and it works fine. Changes for this release: Re-work on the ToolStrip settings is done, just to avoid the VS.NET clash with auto-generating files for .settings files; renamed it to .settings.config. Packaged both log4net and ToolStripSettings files into the installer. Upgraded the version inform...
    - AllNewsManager.NET: AllNewsManager.NET 1.3: This new version provides several new features, improvements and bug fixes. Some new features: Online Users. Avatars. Copy function (to create a new article from another one). SEO improvements (friendly urls). New admin buttons. And more...
    - Facebook Graph Toolkit: Facebook Graph Toolkit 0.8: Version 0.8 (15 Feb 2011): moved to Beta stage; publish photo feature; "email" field of User object added; new Graph Api objects: Group, Event; new Graph Api connections: likes, groups, events
    - DJME - The jQuery extensions for ASP.NET MVC: DJME2 - The jQuery extensions for ASP.NET MVC beta2: The source code and runtime library for DJME2. For more product info you can go to http://www.dotnetage.com/djme.html What is new? The Grid extension added. The ModelBinder added, which helps you create bindable data Actions. The DnaFor() control factory added, which enables Model bindable extensions. Enhanced the ListBox and ComboBox data binding.
    - Jint - Javascript Interpreter for .NET: Jint - 0.9.0: New CLR interoperability features. Many bugfixes.
    - Build Version Increment Add-In Visual Studio: Build Version Increment v2.4.11046.2045: Fixes and/or improvements. Major: Added complete support for VC projects including .vcxproj & .vcproj. All padding issues fixed. A project's assembly versions are only changed if the project has been modified. Minor: Order of versioning style values is now according to their respective positions in the attributes, i.e. Major, Minor, Build, Revision. Fixed issue with global variable storage with some projects. Fixed issue where if a project item's file does not exist, a ...
    - Coding4Fun Tools: Coding4Fun.Phone.Toolkit v1.1: Coding4Fun.Phone.Toolkit v1.1 release. Bug fixes and minor feature requests added.
    - TV4Home - The all-in-one TV solution!: 0.1.0.0 Preview: This is the beta preview release of the TV4Home software.
    - Finestra Virtual Desktops: 1.2: Fixes a few minor issues with 1.1, including the broken per-desktop backgrounds. Further improves the speed of switching desktops. A few UI performance improvements. Added donations links.
    - NuGet: NuGet 1.1: NuGet is a free, open source, developer-focused package management system for the .NET platform intent on simplifying the process of incorporating third party libraries into a .NET application during development. This release is a Visual Studio 2010 extension and contains the Package Manager Console and the Add Package Dialog. The URL to the package OData feed is: http://go.microsoft.com/fwlink/?LinkID=206669 To see the list of issues fixed in this release, visit our issues list.
    - EnhSim: EnhSim 2.4.0: This release supports WoW patch 4.06 at level 85. To use this release, you must have the Microsoft Visual C++ 2010 Redistributable Package installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=A7B7A05E-6DE6-4D3A-A423-37BF0912DB84 To use the GUI you must have the .NET 4.0 Framework installed. This can be downloaded from http://www.microsoft.com/downloads/en/details.aspx?FamilyID=9cfb2d51-5ff4-4491-b0e5-b386f32c0992 Changes since 2.3.0 - Upd...
    - PDF Rider: PDF Rider 0.5.1: Changes from the previous version: use dynamic layout to better fit text in other languages; includes French and Spanish localizations. Prerequisites: Microsoft Windows operating systems (XP - Vista - 7); Microsoft .NET Framework 3.5 runtime; a PDF rendering software (i.e. Adobe Reader) that can be opened inside Internet Explorer. Installation instructions, choose one of the following methods: 1. Download and run the "pdfRider0.5.1-setup.exe" (recommended) 2. Down...

    New Projects

    - AbstractSpoon: Development Code by AbstractSpoon
    - BetchRenamer: ????????
    - ChromeTabControl: I want to create a WPF tab control. It will have the same behavior as Chrome's.
    - CLASonline: CS 307 Software Engineering - Purdue University. A web based social and collaborative learning system.
    - ElearningProject: ELearning Tutorial
    - EPICS .NET - Experimental Physics and Industrial Control System for .NET: EPICS .NET is the Experimental Physics and Industrial Control System for .NET Framework 4.0 and above. Written in C#, this control toolkit consists of three sub-projects: the EPICS .NET Library; the Virtual Accelerator, which demonstrates full capabilities of the library; and the EPICS Simulator.
    - Exception Manager: Having trouble with unhandled exceptions? Exception Manager will catch these exceptions for you and log them, and then continue running the program. You can choose whether or not to display a dialog box. Only invoked when *not* running from the debugger (Run without Debugging).
    - FileTransferTool: The program is a file transfer client; it monitors one or several local directories and verifies, FTPs and backs up files found to the directory or FTP server you assign. The program is developed in C# + .NET Framework 2.0 (to support previous Windows versions). Hope it can help.
    - httpdSharp: Simple multi-threaded console http server written in C# and .NET 2.0. Simple configuration of wwwroot folder, port and mime-types served. Useful for serving static content when you are in a hurry.
    - Image.Viewer: Basic Ribbon based image viewer for Windows XP, Vista and Windows 7.
    - Imtihan: Imtihan is an online assessment system (OAS).
    - Iphone: Project about I-Photo
    - KunalPishewsccsemb: KunalPishewsccsemb
    - MAT04 Integrationsprojekt - Stadt- und Sehenswürdigkeitenführer Bern: A city and sightseeing guide for smartphones is to be implemented for the city of Bern. It should introduce tourists and visitors to the sights of Bern and make it easier for them to find their way around the old town.
    - MediaBrowser Silverlight: MediaBrowser Silverlight is a small application designed with Silverlight for an educational purpose. This application allows you to consult a series of media (Movies, Albums, Images, Books) and to administer them.
    - MovieCalc: A small tool to calculate the bitrate of a movie with a given audio bitrate and destination size of the movie (divx, xvid)
    - MPC Pattern for Microsoft Silverlight 4.0: If you have struggled with MVVM in Silverlight line of business applications and you want a good framework for building an application, MPC is for you. MPC is a Model, ViewModel, Presenter and Controller pattern enhanced with XAML defined States, Actions, and Async WCF.
    - News Man: Rss feed News reader
    - OpenQuestions: OpenQuestions is the leading open source solution for exam simulators. Main features: all types of questions supported (single choice, multiple choice, open answers, matching, fill the gaps, etc); customisable appearance (look and feel) with themes; multi-lingual support.
    - Ordered images loading: Ordered image loading controls enable you to load images on pages in the order you specify. It is nice for sites with lots of images where you want to control which images should be loaded first. It is developed using ASP.NET AJAX Extensions and jQuery.
    - Over the fence: Share your gardening tips. This is a community site for gardeners to share their experiences. Discuss your successes and failures. Swap tips. Which plants grow well in your soil? Where is the best place to source plants? What are your favourites?
    - Phoenix iBooking: Phoenix iBooking is an appointment management system, for salons, sports centers etc. It was originally written in VB .NET as a salon booking and till system. This project will see the conversion to C# .NET 4 and removal of the till functionality.
    - PointlessBends: Simply move the four points around the white area and waste time! Yes, that's right, it's pointless!
    - PRISMvvM: MVVM guidance and framework built on top of the PRISM framework. Makes it easier for developers to properly utilize PRISM to achieve best practices in creating a Silverlight project with MVVM. Sponsored and written by: http://www.architectinginnovation.com
    - rsvp: Project work at the IT University in Copenhagen, building a survey system.
    - SharePoint 2010 Silverlight Web Part JavaScript Bridge: This is a project template containing a number of base classes and JavaScript which allows SharePoint 2010 Silverlight web parts to communicate with each other inside the browser. It provides Silverlight web parts with the functionality normal web parts get from interfaces.
    - StatlightTeamBuild: StatlightTeamBuild is a build activity plugin for TFS Build 2010. The unit test results generated by StatLight are processed and published to TFS, after which the results are shown in your build summary.
    - TFS to TeamCity Build Notification Plugin: Have you ever wanted to turn VCS polling off? TFS to TeamCity Build Notification Plugin is a tool that will initiate a build request when your source code is checked in. The only configuration includes deploying the notification website and supplying your VCS roots to notify.
    - tipolog: tipolog
    - Tower Defense 3D with C# and XNA: A classical Tower Defense, but in 3D. Developed in C# and using XNA, this game is aimed to be released on both Windows and Xbox 360. This project is part of a course for the 1st year of an IT Master in Besancon, France.
    - Utility4Net: Some base classes such as xml, string, data, security, web, etc. under Microsoft .NET Framework 4.0
    - Windows Azure Starter Kit for Java: This starter kit was designed to work as a simple command line build tool or in the Eclipse integrated development environment (IDE) to help Java developers deploy their applications to the Windows Azure cloud.
    - WSCCSemesterB: Web Scripting Semester B
    - Xaml Physics: Xaml Physics makes it possible to make a physics simulation with only xaml code. It is a wrapper around the Farseer Physics Engine.

    Read the article

  • Why are there two different kinds of linking, i.e. static and dynamic?

    - by davidk01
    I've been bitten for the n-th time now by a library mismatch between a build and a deployment environment. The build environment had libruby.so.2.0 and the deployment environment had libruby.a. One ruby was built with RVM, the other was built with ruby-build. The reason I ran into a problem was that zookeeper was compiled in a build environment that had the shared library, but the deployment environment only had the static library. In all the years I've been writing application code I have never once wished that the binaries I was using were linked against shared objects. What is the reason the dichotomy persists to this day on modern operating systems?

    Read the article

  • You made it through the Interview, Now What?

    - by Jonathan Kehayias
    This is somewhat of a continuation of my previous blog post, "Some thoughts on Interviewing…". Now that you survived the interview process, what do you do? This is a common area of discomfort for people interviewing for any kind of job, not just positions dealing with SQL Server. In this post I'd like to focus on a topic that I like to refer to as "Post Interview Etiquette" and how it might impact your ability to get hired for a position. Whether or not to follow-up...(read more)

    Read the article

  • Package Version Numbers, why are they so important

    - by Chris W Beal
    One of the design goals of IPS has been to allow people to easily move forward to a supported "surface" of components. That is to say, when you # pkg update your system, you get the latest set of components which all work together, based on the packages you already have installed. During development, this has simply meant you update to the latest "build" of the components. (During development, we build everything and publish everything every two weeks.) Now that we've released Solaris 11 using the IPS technologies, things are a bit more complicated. We need to be able to reflect all the types of Solaris release we are doing (for example, Solaris development builds, Solaris update builds and "Support Repository Updates", the replacement for patches) in the version scheme. So simply saying "151" as the build number isn't sufficient to articulate what you are running, or indeed what is available to update to.

    In my previous blog post I talked about creating your own package, and gave an example FMRI of pkg://tools/[email protected],0.5.11-0.0.0. But it's probably more instructive to look at the FMRI of a Solaris package. The package "core-os" contains all the common utilities and daemons you need to use Solaris.

    $ pkg info core-os
              Name: system/core-os
           Summary: Core Solaris
       Description: Operating system core utilities, daemons, and configuration files.
          Category: System/Core
             State: Installed
         Publisher: solaris
           Version: 0.5.11
     Build Release: 5.11
            Branch: 0.175.0.0.0.2.1
    Packaging Date: Wed Oct 19 07:04:57 2011
              Size: 25.14 MB
              FMRI: pkg://solaris/system/core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z

    The FMRI is what we will concentrate on here. In this package, "solaris" is the publisher. You can use the pkg publisher command to see where the solaris publisher gets its bits from:

    $ pkg publisher
    PUBLISHER                   TYPE     STATUS   URI
    solaris                     origin   online   http://pkg.oracle.com/solaris/release/

    So we can see we get solaris packages from pkg.oracle.com. The package name is system/core-os. Names can be arbitrary length, just to allow you to group similar packages together. Now on to the interesting bit: the versions. Everything after the @ is part of the version, and IPS will only upgrade to a "higher" version.

    core-os@0.5.11,5.11-0.175.0.0.0.2.1:20111019T070457Z
    - core-os = package name
    - 0.5.11 = component - in this case we're saying it's a SunOS 5.11 package
    - , = separator
    - 5.11 = built-on version - to indicate what OS version you built the package on
    - - = another separator
    - 0.175.0.0.0.2.1 = branch version
    - : = yet another separator
    - 20111019T070457Z = time stamp of when the package was published

    So from that we can see the branch version seems rather complex. It is necessarily so, to allow us to describe the hierarchy of releases we do. In this example we see the following:
    - 0.175: known as the trunkid, and incremented with each build of a new release of Solaris. During Solaris 11 this should not change.
    - 0: the update release for Solaris. 0 for FCS, 1 for Update 1, etc.
    - 0: the SRU for Solaris. 0 for FCS, 1 for SRU 1, etc.
    - 0: reserved for future use.
    - 2: build number of the SRU.
    - 1: nightly ID - only important for Solaris developers.

    Take a hypothetical example: core-os@0.5.11,5.11-0.175.1.5.0.4.1:<something>. This would be build 4 of SRU 5 of Update 1 of Solaris 11. This is actually documented in MOS article 1378134.1, which you can read if you have a support contract.

    Read the article

  • SQL SERVER Improve Performance by Reducing IO Creating Covered Index

    This blog post is in response to T-SQL Tuesday #004: IO, hosted by Mike Walsh. The subject this month is IO. Here is my quick blog post on how a covered index can improve performance by reducing IO. Let us kick off this post with disclaimers about indexes. Indexing is a very complex subject and [...]
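    The body of the post is truncated here, but the technique it names is standard: a covering (covered) index carries every column a query touches, so the query can be answered from the index alone with no lookups into the base table. A minimal sketch, with invented table and column names:

    -- The query we want to cover:
    --   SELECT OrderID, OrderDate FROM dbo.Orders WHERE CustomerID = @CustomerID;

    -- Key the index on the filter column and INCLUDE the selected columns,
    -- so the index alone satisfies the query (no key/RID lookups, less IO).
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_Covering
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderID, OrderDate);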

    Read the article

  • Cumulative Update #5 is available for SQL Server 2012 RTM

    - by AaronBertrand
    Microsoft has released Cumulative Update #5 for SQL Server 2012 RTM. Note this is *not* a cumulative update for Service Pack 1. So if your build # is >= 11.0.3000, you should not be installing this update.
    - KB Article: KB #2777772
    - Build # 11.0.2395
    - 28 fixes at the time of writing
    - Relevant for builds 11.0.2100 -> 11.0.3329. Do not attempt to install on SQL Server 2012 SP1 (any build >= 11.0.3000) or any previous version of SQL Server....(read more)

    Read the article

  • Notepad++ used as DAX editor

    - by Marco Russo (SQLBI)
    If you use PowerPivot and write DAX formulas, don't miss this post on the PowerPivotPro blog. If you want an external editor for your DAX formulas, you can use Notepad++ for free, and by adding the customization described in the post by Colin Banfield, you will get function auto-complete and tooltips....(read more)

    Read the article

  • Tag link suggestion plugin for wordpress?

    - by Emerson
    Hi, every time I write a post I make sure I add links to words that I have tags for. For example: "The economy of Brazil has improved in the last few years". This ensures that when people re-post my content, a lot of back-links will be created to my tags. This is quite a lot of work to do manually for every post. It would be cool if there was a plugin that would suggest tags to be applied when they match existing words in the text of the post. Is there such a thing?

    Read the article

  • General Availability: Simplified User Experience Design Patterns eBook

    - by ultan o'broin
    Karen Scipi (@karenscipi) writes: The Oracle Applications User Experience team is delighted to announce that our Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook is available for free. Working with publisher McGraw-Hill, we're pleased to make the eBook available in EPUB (for use on Apple iOS devices), MOBI (ideal for Amazon Kindle), and PDF (for anything with Adobe Reader) versions.

    [Image: The Simplified User Experience Design Patterns for the Oracle Applications Cloud Service eBook]

    We're sharing the same user experience design patterns, and their supporting guidance on page types and Oracle ADF components, that Oracle uses to build simplified user interfaces (UIs) for the Oracle Sales Cloud and Oracle Human Capital Management (HCM) Cloud, so that you can build your own simplified UI solutions. Click to register and download your free copy of the eBook. Design patterns offer big wins for applications builders because they are proven, reusable, and based on Oracle technology. They enable developers, partners, and customers to design and build the best user experiences consistently, shortening the application's development cycle, boosting designer and developer productivity, and lowering the overall time and cost of building a great user experience.

    [Image: Developers use the eBook to build their own simplified UIs with Oracle ADF and Oracle JDeveloper]

    Now Oracle partners, customers and the Oracle ADF community can share further in the Oracle Applications User Experience science and design expertise that brought the acclaimed simplified UIs to the Cloud, and they can build their own UIs, simply and productively too!

    Read the article

  • Size doesn't matter

    - by ssoolsma
    Whenever I start a new project I *always* break up my code into different projects, also known as an n-tier solution. The scale of the project doesn't matter, but make sure that each project is responsible for itself (or herself, if you prefer). I make sure that I:

    1. At least thought about how the project should work, on the toilet or in a project team meeting.
    2. Have a solution directory and create my projects within it. I like to name my projects (and their folders) by their namespaces. For instance, when I'm creating a piece of (web)software called ChuckNorris, I always include the software name in my projects.
    3. Start off with designing the DataAccess project. I name it ChuckNorris.DataAccess, which lets me easily identify the project in case the project scales a lot.
    4. Build the classes which represent the database structure. Don't stop working on a class until it's finished for now. Also, don't over-do the methods: build stuff only when it's needed, and don't think "hm, that would be cool to have", because most of the time you end up with unused code, and we don't want that.
    5. Build a unit test project, and make sure you create the folder inside the project that it's testing. So, create the ChuckNorris.DataAccess.UnitTest project inside the folder of the data access project. I would suggest using the nUnit test framework. In case you thought "hm, I'll skip the unit tests": don't! Just build it. It will save you a lot of time later on.
    6. Now, read 5 again. Build that bloody unit test. Don't skip it. (I can't emphasize this enough.)
    7. Now, every class in the data access project is responsible for itself. They don't rely on each other. This is what we use the BusinessLogic project for. Start creating the ChuckNorris.BusinessLogic project (not inside the data access project of course, but within the ChuckNorris folder).
    8. Combine stuff from data access. This usually involves a lot of copying the data access classes and feels silly at first. (We'll get to that later on.)
    9. Now you come to the point of creating a service project. You might not always see why to use it, but see it as a way to expose your business logic to any application (including your own). Sometimes I use it as a so-called "factory": every call goes through this factory, and I make sure that its methods are the only ones that I allow you to invoke.
    10. Build any UI (website, phone app, forms application, Silverlight, WPF or whatever) and reference it to your service project.
    11. Fall in love (cough) with this approach. A sketch of the layering follows below.

    It's possible that this doesn't seem to make much sense and feels very incomplete. Well, that last part is correct. The next post will go into detail on setting up your DataAccess project and using the Entity Framework.
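    To make the layering concrete, here is a minimal sketch of the naming and dependency direction described above. The ChuckNorris name comes from the post; the class and member names are invented for illustration:

    // ChuckNorris.DataAccess: each class is responsible only for itself
    namespace ChuckNorris.DataAccess
    {
        public class Person
        {
            public int Id { get; set; }
            public string Name { get; set; }
        }

        public class PersonRepository
        {
            public Person GetById(int id)
            {
                // database access lives here, and only here
                return new Person { Id = id, Name = "Chuck" };
            }
        }
    }

    // ChuckNorris.BusinessLogic: combines the data-access classes
    namespace ChuckNorris.BusinessLogic
    {
        using ChuckNorris.DataAccess;

        public class PersonService
        {
            private readonly PersonRepository _repository = new PersonRepository();

            public Person LoadPerson(int id)
            {
                return _repository.GetById(id);
            }
        }
    }

    // ChuckNorris.Services: the "factory" that is the only thing a UI may call
    namespace ChuckNorris.Services
    {
        public static class ServiceFactory
        {
            public static ChuckNorris.BusinessLogic.PersonService CreatePersonService()
            {
                return new ChuckNorris.BusinessLogic.PersonService();
            }
        }
    }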

    Read the article

  • Sass interface in HTML6 for upload files.

    - by Anirudha
    Originally posted on: http://geekswithblogs.net/anirugu/archive/2013/11/04/sass-interface-in-html6-for-upload-files.aspx

    [This post is about an experiment & imagination.]

    Since Windows XP (the oldest OS I tried), I have seen a feature for sending a file to a pen drive or making a shortcut on the Desktop. In XP and Win7 (Win8 has this too, it was not removed), just select the file, right click > Send to, and you can send the file to many places. In my menu it shows me Skype because I have installed it. This Skype entry confirms that we can add our own app here to make it easier for the user to send a file to our app.

    Nowadays many people use the cloud or an online site to store files. In the case of HTML5 drag and drop, you need to have the site open on the page that handles file upload. You select the files and drag and drop them; after the drag and drop the file is simply uploaded to the server and the site shows it in a list (if no error happens). But this kind of file upload is seriously not worth it, since I have to open the site every time I do this operation.

    Through this post I want to describe a feature that could make this better. This API is simply called the SASS FILE UPLOAD API.

    Through this API, when you surf a site and come to a file upload page, the page will tell you that it also has SASS FILE API support; enable it for a better experience.

    How does this work?

    This API feature is activated on two bases:
    1. The feature is disabled by default on a site (or you can change that if it's not).
    2. The API allows a specific site to upload files. File uploads may have some rules, for example the minimum or maximum size of files to be uploaded, or which formats the site allows you to upload. In the case of a resume site, you will be allowed to use .doc (according to the code of the site).

    How does the browser recognize that a site has the SASS service? In the HTML source of the site, the code has a meta tag similar to this:

    <meta name="sass-upload-api" path="/upload.json"/>

    Remember that upload.json is a file that defines the values of many settings:

    {
      "cookie_name": "ck_file",
      "maximum_allowed_perday": 24,
      "allowed_file_extensions": "*.png,*.jpg,*.jpeg,*.gif",
      "method": [
        { "get":    "file/get",    "routing": "/file/get/{fileName}" },
        { "post":   "file/post",   "routing": "/file/post/{fileName}" },
        { "delete": "file/delete", "routing": "/file/delete/{fileName}" },
        { "put":    "file/put",    "routing": "/file/put/{fileName}" },
        { "all":    "file/all",    "routing": "/file/all/{fileName}" }
      ]
    }

    cookie_name is simply a cookie which should be stored in the browser and is defined in the json. We define the cookie_name so we can easily share it with the service in the Windows system. This cookie will be accessible to the service, so it's safe security-wise; other cookies will not be shared. The file will be posted, put, or fetched from these locations. The "all" location is simply about showing a whole list of files; it will give a treeview kind of json to show the directories on the server.

    For example, take example.com: if you have activated the API with this site, then you will see a Send to option in your explorer.exe. When you send, you will get a window asking which folder you want to use to send the file. The window will also describe the limits and how much you can upload. None of this requires the site to be opened. When you upload the file, it will be uploaded through the FTP protocol, which is better for performance.

    How does this API make things faster? Suppose you want to ask a question and want to post an image. You just send it and have it ready; when you open stackoverflow.com, Stack Overflow will only ask you which file you want to attach to the question you are asking. A second use is for people who use cloud apps: there is no need for drag and drop anymore. We just upload without drag and drop, and we don't need to open the site either.

    This thing is still at the experiment level. I will update this post when I make some progress on this API.

    Read the article
