Search Results


  • CodePlex Daily Summary for Wednesday, October 03, 2012

    CodePlex Daily Summary for Wednesday, October 03, 2012

    Popular Releases

    - SharePoint Column & View Permission: SharePoint Column and View Permission v1.5: Version 1.5 of this project. If you find any bugs, please let me know at enti@zoznam.sk or post your findings in the Issue Tracker.
    - Z3: Z3 4.1.1 source code: Snapshot corresponding to version 4.1.1.
    - DirectX Tool Kit: October 2012: October 2, 2012. Added ScreenGrab module. Added CreateGeoSphere for drawing a geodesic sphere. Put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace. Renamed project files for better naming consistency. Updated WICTextureLoader for Windows 8 96bpp floating-point formats. Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs. Tweaked SpriteBatch.cpp to work around an ARM NEON compiler codegen bug.
    - Home Access Plus+: v8.1: HAP+ Web v8.1.1003.0000. 79318 Fixed: issue with the Help Desk and updating a ticket as an admin. 79319 Fixed: formatting issue with the booking system admin header. 79321 Moved to using the arrow-with-a-circle symbol on the homepage instead of the > and <. 79541 Added: 480px wide mobile theme to login page. 79541 Added: 480px wide mobile theme to home page. 79541 Added: slide events for homepage. 79553 Fixed: Booking System multiple lesson bug. 79553 Fixed: IE error message. 79684 Fixed: jQuery issue ...
    - System.Net.FtpClient: System.Net.FtpClient 2012.10.02: This is the first release of the new code base. It is not compatible with the old API; I repeat, it is not a drop-in update for projects currently using System.Net.FtpClient. New users should download this release. The old code base (Branch: System.Net.FtpClient_1) will continue to be supported while the new code matures. This release is a complete re-write of System.Net.FtpClient. The API and code are simpler than ever before. There are some new features included as well as an attempt at be...
    - CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): What's new: multi-language support for labels/tooltips for custom buttons and groups; support for a base language other than English (1033); the Connect dialog will not require an organization name for ADFS / IFD connections; automatic creation of missing labels for all provisioned languages; minor connection issues fixed. Notes: before saving the ribbon to the CRM server, the editor will check the Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...
    - YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.10: See the change-log for the list of new features added and bugs fixed.
    - RenameApp: RenameApp 1.0: First release of RenameApp.
    - JsonToStaticTypeGenerator: JsonToStaticTypeGenerator 0.1: This is the first alpha release of JsonToStaticTypeGenerator.
    - XiaoKyun: XiaoKyun V1.00: https://xiaokyun.codeplex.com/
    - CatchThatException: Release 1.12: Wow, a very fast change and a much better and faster writing to the text file.
    - Naked Objects: Naked Objects Release 5.0.0: Corresponds to the packaged version 5.0.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the Official NuGet Package Source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consult the file HowToBuild.txt in the release. Major enhancements: Naked Objects 5.0 is desi...
    - WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet: you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions; controls and control extensions; converters; debugging helpers; imaging; IO helpers; VisualTree helpers; samples. Recent changes NOTE: namespace changes DebugConsol...
    - D3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.
    - SubExtractor: Release 1029: Feature: added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options). Feature: allow switching to the Word Spacing dialog directly from the Spell Check dialog. Fix: added more default word spacings for accented characters. Fix: changed Word Spacing dialog to show all OCR'd characters in the current sub. Fix: removed application focus grab during OCR. Fix: tightened HD subs fuzzy logic to reduce false matches in small characters. Fix: improved Arrow k...
    - MCEBuddy 2.x: MCEBuddy 2.2.18: Recommended download. Changelog for 2.2.18 (32bit and 64bit): 1. Added support for checking if ShowAnalyzer has hung and cancelling it. 2. New version of comskip, 0.81.48. 3. Speeding up comskip. 4. Fixed a build bug in 64bit 2.2.17. 5. Added a new comskip.ini with better commercial detection for international channels and less aggressive behavior; the old one has been retained as comskip_old.ini. 6. Added support for Audio Offset on the Conversion Task page in the GUI (this overrides the profile's AudioDelay when specified).
    - Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: built against KeePass 2.20.
    - Windows 8 Toolkit - Charts and More: Beta 1.0: The first compiled version of my library.
    - PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 open source code, including a sample Web application project.
    - Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. It contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.

    New Projects

    - .Net Exception Reporter: A reusable and extensible exception reporter for Microsoft .NET projects.
    - Aesha Broker: A rich client Auction House Broker application. Built upon Blizzard's new REST API. Provides a client experience which caches historical auction data to provide
    - ASP.NET Friendly URLs: A library that enables automatic resolving of extensionless URLs to ASP.NET file-based handlers, e.g. ASPX pages.
    - Astro Power CMS: Astro Power CMS builds on GraffitiCMS, a product of Telligent. GraffitiCMS stopped development, so I created this project under the name Astro Power CMS.
    - aTester: Here is a good place. And now, I can upload my source to it. It's very good.
    - Automacao Residencial: The Netduino is a platform on which you use the C# language to control hardware. The goal is to create a communication framework for the Netduino. [translated from Portuguese]
    - Derbster: Explore and learn about modern C# architecture and programming by implementing software to support the modern game of roller derby.
    - Dot FPE - A free Format-preserving encryption implementation for .net: There aren't any widely available implementations of format-preserving encryption in .NET. Thus we aim to be the first!
    - DotNetEx: .NET Framework extended functionality for data access, working with Tasks and asynchronous programming, encryption algorithms such as SkipJack, and other stuff.
    - Elemental Development Toolchain (.NET version): A complete toolchain built around the Æthere language.
    - elFinder ASP.NET Connector: The one and only .NET connector for the amazing elFinder 2.X web-based file manager. Finally you can manage your files easily right from your browser!
    - Geosynkronisering: A project to develop interface specifications that enable synchronization of data stores with geographic content across different platforms. [translated from Norwegian]
    - GIII_P1: If everyone has doubted you, show them they were wrong! [translated from Polish]
    - IntroduceCompany: A website for introducing a business or company. [translated from Vietnamese]
    - JsonToStaticTypeGenerator: This is the JsonToStaticTypeGenerator project, which makes it possible to generate C# classes out of JSON data.
    - kwerty: Coming soon.
    - Machine Learning: My machine learning project. Just to figure out things...
    - MicroManager: MaNGOS web-based manager.
    - MvcContrib3: This is the version of MvcContrib which works with ASP.NET MVC 3.
    - Oracle Destination via ODP.Net (Custom Destination Component): SSIS 2008 R2 solution (custom destination component) to write to Oracle via ODP.NET.
    - Orchard Commerce History with PayPal: Expands on the Nwazet.Commerce module (which is required for this module to work). Adds a purchase history, product role associations, and PayPal.
    - Phoenix Trans: Website for Phoenix Trans, a domestic and international freight carrier. [translated from Vietnamese]
    - PowerState: PowerState is a .NET application for sending Wake-On-LAN (WOL) requests to computers. It can also shut down, log off and reboot computers using WMI.
    - RenameApp: RenameApp is free and very simple to use renaming software for Windows. RenameApp allows you to easily rename files based on the specified criteria and order.
    - Rose-Hulman User Experience Design: This project will contain labs intended for use in Rose-Hulman's Computer Science and Software Engineering department.
    - Server dự phòng: This is a backup server. [translated from Vietnamese]
    - SharePoint BCS External Connector Caching Pattern Library: Library for enabling caching on SharePoint BCS external connectors. Enables BCS .NET assemblies to be written that are scalable and performant for search.
    - SharpDX.WPF: This project provides DirectX 9, DirectX 10 and DirectX 11 support for WPF. The assembly contains DXElement, an easy-to-use WPF FrameworkElement.
    - Simple Password Generator Library: The password generator library, written in C#, is a simple assembly which allows generation of passwords with lengths anywhere from 1 to 99.
    - SisEagle.NET: This system was developed for the presentation of the 2012 TCC (final-year project) at UDF, Brasilia. [translated from Portuguese]
    - SWebshop: SWebshop is a PHP-based webshop system which allows you to insert, edit and delete data easily and is easy for customers to use.
    - Tabular Database Powershell Cmdlets: This project provides a sample of PowerShell cmdlets to manage Tabular models from Analysis Services.
    - University timetable using java: The project uses the Java language to create a timetable (a full timetable with exam tables and lab tables); it will be free for all users, with a SQL database.
    - URLShoter: This project provides URL shortening for ASP.NET.
    - Web Input Form Control: This control allows developers to create input forms by configuring the control in HTML mode.
    - Weibo: rt
    - WorkoutMemo: Project description (first draft): memorise your workout. Keep archive records of your daily training, such as series of exercises, quantity of each series, we
    - WPF - Automate Acrobat Security Policy: This WPF tool was created to quickly password-protect batches of PDF documents, using a random generator to generate the passwords.
    - XiaoKyun: Hello Page for Web.
    - Z3: Z3 is a high-performance theorem prover being developed at Microsoft Research.


  • CodePlex Daily Summary for Monday, June 20, 2011

    CodePlex Daily Summary for Monday, June 20, 2011

    Popular Releases

    - BlogEngine.NET: BlogEngine.NET 2.5 RC: This is a Release Candidate version for BlogEngine.NET 2.5. The most current, stable version of BlogEngine.NET is version 2.0. Find out more about the BlogEngine.NET 2.5 RC here. If you want to extend or modify BlogEngine.NET, you should download the source code. Also, please note that at this time the Mercurial source code repository on the Source Code tab is having issues, but it will be fixed shortly. I...
    - Microsoft All-In-One Code Framework - a centralized code sample library: All-In-One Code Framework 2011-06-19: Alternatively, you can install Sample Browser or the Sample Browser VS extension, and download the code samples from Sample Browser. Improved and newly added examples: for an up-to-date code sample index, please refer to the All-In-One Code Framework Sample Catalog. NEW samples for Windows Azure: CSAzureStartupTask: demonstrates using startup tasks to install the prerequisites or to modify configuration settings for your environment in Windows Azure (Rafe Wu) ...
    - Facebook C# SDK: 5.0.40: This is an RTW release which adds new features to v5.0.26 RTW: support for multiple FacebookMediaObjects in one request; allow FacebookMediaObjects in batch requests; removes support for the Cassini web server (Visual Studio's built-in web server); better support for unit testing and mocking; updated SimpleJson to v0.6. Refer to CHANGES.txt for details. For more information about this release see the following blog posts: Facebook C# SDK - Multiple file uploads in Batch Requests; Faceb...
    - Candescent NUI: Candescent NUI (7774): This is the binary version of the source code in change set 7774.
    - Media Companion: MC 3.408b weekly: Some minor fixes... Fixed message box coming up during batch scrape. Added <originaltitle></originaltitle> to movie NFOs: when a new movie is scraped, the original title will be set as the returned title. The end user can of course change the title in MC; however, the original title will not change. The original title can be seen by hovering over the movie title in the right pane of the main movie tab. To update all of your current NFOs to add the original title the same as your current ...
    - NLog - Advanced .NET Logging: NLog 2.0 Release Candidate: Release notes for the NLog 2.0 RC can be found at http://nlog-project.org/nlog-2-rc-release-notes
    - Gendering Add-In for Microsoft Office Word 2010: Gendering Add-In: This is the first stable version of the Gendering Add-In. Unzip the package and start "setup.exe". The .reg file shows how to configure an alternate path for the suggestion table.
    - TerrariViewer: TerrariViewer v3.1 [Terraria Inventory Editor]: This version adds tool tips. Almost every picture box you mouse over will tell you what item is in that box. I have also cleaned up the GUI a little more to make things easier on my end. There are various bug fixes, including ones associated with opening different characters in the same instance of the program. As always, please bring any bugs you find to my attention.
    - Kinect Paint: KinectPaint V1.0: This is version 1.0 of Kinect Paint. To run it, follow these steps: install the Kinect SDK for Windows (available at http://research.microsoft.com/en-us/um/redmond/projects/kinectsdk/download.aspx); connect your Kinect device to the computer and to the power; download the Zip file; unblock the Zip file by right-clicking on it and pressing the Unblock button in the file properties (if available); extract the content of the Zip file; run KinectPaint.exe.
    - CommonLibrary.NET: CommonLibrary.NET - 0.9.7 Beta: A collection of very reusable code and components in C# 3.5, ranging from ActiveRecord, CSV, command-line parsing, configuration, holiday calendars, logging, and authentication, to much more. Samples are in <root>\src\Lib\CommonLibrary.NET\Samples. CommonLibrary.NET 0.9.7 Documentation 6738 6503 New 6535 Enhancements 6583 6737
    - DropBox Linker: DropBox Linker 1.2: Public sub-folders are now monitored for changes as well (thanks to mcm69); automatic public sync folder detection (thanks to mcm69); non-Latin and special characters encoded correctly in URLs; pop-ups are now slot-based (they use the first free slot and will never be overlapped; test it while previewing the timeout); the public sync folder setting is hidden when auto-detected; the timeout interval is displayed in popup previews; a lot of major and minor code refactoring performed; .NET Framework 4.0 Client...
    - Terraria World Viewer: Version 1.3.1: Update June 20th. Removed the "Draw Markers" checkbox from the main window because of redundancy/confusion (select all or no items from the Settings tab for the same effect). Fixed marker preferences not being saved. It is now possible to render more than one map without having to restart the application. The world file will not be locked while the world is being rendered. Note: the World Viewer might render an inaccurate map or even crash if Terraria decides to modify the world file during the pro...
    - MVC Controls Toolkit: Mvc Controls Toolkit 1.1.5 RC: Added: the extended dropdown allows a prompt item to be inserted as the first element, and RequiredAttribute, if present, triggers if no element is chosen; client-side JavaScript functions to set/get the values of DateTimeInput, TypedTextBox, TypedEditDisplay, and to bind/unbind a "change" handler; the selected page in the pager is given the attribute selected-page="selected", which can be used in CSS rules to style the selected page; items controls now interpret a null value as an empr...
    - Umbraco CMS: Umbraco CMS 5.0 CTP 1: Umbraco 5 Community Technology Preview. Umbraco 5 will be the next version of everyone's favourite, friendly ASP.NET CMS that already powers over 100,000 websites worldwide. Try out our first CTP of version 5 today! If you're new to Umbraco and would like to get a quick low-down on our popular and easy-to-learn approach to content management, check out our intro video here. What's in the v5 CTP box? This is a preview version of version 5 and includes support for the following familiar Umbr...
    - Ribbon Browser for Microsoft Dynamics CRM 2011: Ribbon Browser (1.0.514.30): Initial release.
    - Coding4Fun Kinect Toolkit: Coding4Fun.Kinect Toolkit: Version 1.0.
    - Kinect Mouse Cursor: Kinect Mouse Cursor v1.0: The initial release of the Kinect Mouse Cursor project!
    - patterns & practices: Project Silk: Project Silk Community Drop 11 - June 14, 2011: Changes from the previous drop: many code changes (please see the readme.mht for details); a new "Client Data Management and Caching" chapter; updated "Application Notifications" chapter; updated "Architecture" chapter; updated "jQuery UI Widget" chapter; updated "Widget QuickStart" appendix and code. Guidance chapters ready for review: the Word documents for the chapters are included with the source code in addition to the CHM to help you provide feedback. The PDF is provided as a separat...
    - Orchard Project: Orchard 1.2: Build: 1.2.41. Published: 6/14/2010. How to install Orchard: to install Orchard using Web PI, follow these instructions: http://www.orchardproject.net/docs/Installing-Orchard.ashx. Web PI will detect your hardware environment and install the application. Alternatively, to install the release manually, download the Orchard.Web.1.2.41.zip file: http://orchardproject.net/docs/Manually-installing-Orchard-zip-file.ashx. The zip contents are pre-built and ready to run. Simply extract the contents o...
    - Snippet Designer: Snippet Designer 1.4.0: Snippet Designer 1.4.0 for Visual Studio 2010. Change log: Snippet Explorer changes: reworked the language filter UI to work better in the side bar; added a result count drop-down which lets you choose how many results to see; language filter and result count choices are persisted after Visual Studio is closed; added file name to search criteria; search is now case insensitive. Snippet Editor changes: added a menu option for the $end$ symbol, which indicates where the c...

    New Projects

    - .NET Dev Tools Dashboard: .NET Dev Tools Dashboard is a WPF application that implements the Prism 4 standards. Modules can be added at runtime and after install of the base Dashboard shell. The first module is a WPF GUI for NAnt, based on Colin Svingen's GUI for NAnt open source app for Windows Forms.
    - Chutzpah - A JavaScript Test Runner: Chutzpah helps you integrate JavaScript unit testing into your website. It enables you to run JavaScript unit tests from the command line and from inside Visual Studio. It also supports running in the TeamCity continuous integration server.
    - Cow connect: The goal of the Cow connect project is to write a tool that keeps databases of different herd-management tools (e.g. Helm Multikuh, Westfalia C21, Holdi, ...) in sync, or generates a new database. [translated from German]
    - Cqrs CWDNUG Demo: A very simple CQRS demo for a presentation at the Canary Wharf Dot Net User Group. Uses Ncqrs and ASP.NET MVC 3. Not best practices, just something to get your teeth into if you're new to CQRS and Event Sourcing.
    - Down123: down123
    - EssionCalendar: EssionCal EssionCal
    - Guardian: Protects reference type parameters and return values from NULL values.
    - Honest Flex Time Management System: A Japanese time management system to handle worker hourlies. Will include some basic reporting functions as well.
    - iStatServer.NET: Server implementation for the iPhone iStat app. A Windows analogue to iStat Server on OS X or iStatd on Linux/Solaris.
    - Keolis Service Wrapper: Keolis Service Wrapper
    - KinectContrib: A set of extensions and helpers that will facilitate the use of the Microsoft Kinect motion sensor with the Kinect SDK.
    - kingdomwp7: kingdomwp7
    - Laboratório de Engenharia de Software - Projeto: Created to study and apply new web technologies. [translated from Portuguese]
    - Multi Touch Digit OCR With Matlab Neural Network Wpf Project: Multi Touch Digit OCR Project is a WPF project that works on multi-touch devices but also works well on normal devices. This project uses a MATLAB core that creates four feed-forward neural networks and trains them with the back-propagation algorithm to detect the numbers you draw.
    - PhoneGap Extensions - Alpha Version: This project was created to solve some issues with PhoneGap. The first issue to solve is how to reuse HTML code through include files.
    - SABnzbd UI: This application is a Windows 7 enabled UI for SABnzbd, so that you don't have to navigate to the website; instead you can view a slick UI on your desktop.
    - SP:Social: A community project for providing integration points between SharePoint and other social networks like Facebook, LinkedIn, Live etc.
    - Whisk: An extremely simplistic but cross-platform network rendering environment for Blender.
    - Windows phone 7, Game framewok: A Windows Phone 7 game framework.
    - WP7 - Multi Select Wrapped ListBox: A WP 7.1 project which contains a list box that can wrap ListBoxItems and can scroll vertically.


  • SQL SERVER – Fix: Error: Compatibility Level Drop Down is Empty

    - by Pinal Dave
    I currently have SQL Server 2012 and SQL Server 2014 both installed on the same machine. My job requires me to travel a lot and I like to travel light, hence I have only one computer with all the software installed on it. I could use virtual machines, but as I was able to install SQL Server 2012 and SQL Server 2014 side by side, I just went ahead with that option. One day, when I opened up SQL Server 2014 and went to the properties of my database, I realized that the dropdown box for compatibility level was empty. I just couldn't select anything there or see the current compatibility level of the database. This was a first for me, so I was a bit confused and tried to search online. Upon searching I realized that even if I was not the first to hit this, there were very few questions on the subject on the various forums, and no convincing answer to the problem. That means I was pretty much the first one to face this error. See the image of the situation I was facing. Now I decided to resolve this issue as soon as I could. I spent a few minutes here and there and realized my mistake: I had connected to the SQL Server 2014 instance from SQL Server 2012 Management Studio. Hence, I was not able to see any compatibility-related settings. Once I connected to the SQL Server 2014 instance with SQL Server 2014 Management Studio, the issue was resolved. Well, simple things sometimes keep us very busy. Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Error Messages, SQL Query, SQL Server, SQL Tips and Tricks, T SQL
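
    For anyone who prefers to bypass the dropdown entirely, the compatibility level can also be checked and changed in T-SQL; a minimal sketch, with a hypothetical database name:

        -- Check the current compatibility level (120 = SQL Server 2014)
        SELECT name, compatibility_level
        FROM sys.databases
        WHERE name = N'MyDatabase';   -- hypothetical name

        -- Change it if required
        ALTER DATABASE [MyDatabase]
        SET COMPATIBILITY_LEVEL = 120;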


  • Grub not showing on startup for Windows 8.1 Ubuntu 13.10 Dual boot

    - by driftking96
    OK, I'm a newbie to Ubuntu and I bought a Windows 8 pre-installed laptop last month. I updated to Windows 8.1 and then thought about installing Ubuntu as a dual boot so I could mess around and learn more about it. So I followed a YouTube tutorial ( http://www.youtube.com/watch?v=dJfTvkgLqfQ ) and got my stuff working fine. The first few times I booted I got the GRUB menu instead of my default HP Boot OS Manager, and I was able to select my OS. Then I went to sleep, and the next day when I turned on my computer the GRUB menu did not show up. I tried several times and it didn't automatically show up. In order to see the GRUB menu I have to turn on my PC, press ESC at startup to pause it, and press F9 to get boot options. From there I have to pick between OS Boot, Ubuntu, Ubuntu (yes, there were two Ubuntus available) and a default EFI file entry. When I click the first Ubuntu I get the GRUB menu (I was too scared to try the second in case I screwed my laptop up) and I can safely load Ubuntu from there and use it (although I do have to increase my brightness every time I load Ubuntu because it somehow reduces my brightness to complete darkness on boot). So my problem here is: why isn't my GRUB showing on boot, after it worked on the first day? I was on Windows 8.1 while typing this; if you have any questions or answers, I will happily answer or use them as a solution to the best of my abilities. BTW my laptop is an HP TouchSmart j-078CA.


  • Tip 13 : Kill a process using C#, from local to remote

    - by StanleyGu
    1. My first choice is always to try System.Diagnostics to kill a process. 2. The first choice works very well for killing local processes. I thought it should work for killing a remote process too, because Process.GetProcessesByName() is overloaded with a second argument for the machine name, so I pass the process name plus the remote machine name and call the Kill() method on the returned process. 3. Unfortunately, it gives me the error message "Feature is not supported for remote machines." Apparently, you can query but not kill a remote process using the Process class in System.Diagnostics. The MSDN library documentation states this explicitly about the Process class: it provides access to local and remote processes and enables you to start and stop local system processes. 4. I try my second choice: using System.Management to kill a process running on a remote machine. Make sure to add references to System.Management.dll and System.Management.Instrumentation.dll. 5. The second choice works very well for killing a remote process. Just make sure the account running your program is configured to have permission to kill a process running on the remote machine.
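
    A minimal sketch of both approaches (the machine and process names are placeholders, and the WMI route assumes the caller has administrative rights on the target machine):

        using System;
        using System.Diagnostics;
        using System.Management; // reference System.Management.dll

        class ProcessKiller
        {
            // Local kill: System.Diagnostics is enough.
            static void KillLocal(string processName)
            {
                foreach (Process p in Process.GetProcessesByName(processName))
                {
                    p.Kill(); // throws "Feature is not supported for remote machines." for remote processes
                }
            }

            // Remote kill: go through WMI's Win32_Process class instead.
            static void KillRemote(string machineName, string processName)
            {
                var scope = new ManagementScope(@"\\" + machineName + @"\root\cimv2");
                scope.Connect();

                var query = new SelectQuery("SELECT * FROM Win32_Process WHERE Name = '" + processName + ".exe'");
                using (var searcher = new ManagementObjectSearcher(scope, query))
                {
                    foreach (ManagementObject process in searcher.Get())
                    {
                        process.InvokeMethod("Terminate", null); // terminates the remote process
                    }
                }
            }
        }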


  • Keystone Correction using 3D-Points of Kinect

    - by philllies
    With XNA, I am displaying a simple rectangle which is projected onto the floor. The projector can be placed at an arbitrary position. Obviously, the projected rectangle gets distorted according to the projector's position and angle. A Kinect scans the floor looking for the four corners. Now my goal is to transform the original rectangle such that the projection is no longer distorted, by basically pre-warping the rectangle. My first approach was to do everything in 2D: first compute a perspective transformation (using OpenCV's warpPerspective()) from the scanned points to the internal rectangle's points, and apply the inverse to the rectangle. This seemed to work but was too slow, as it couldn't be rendered on the GPU. The second approach was to do everything in 3D in order to use XNA's rendering features. First, I would display a plane, scan its corners with the Kinect, and map the received 3D points to the original plane. Theoretically, I could apply the inverse of the perspective transformation to the plane, as I did in the 2D approach. However, since XNA works with a view and a projection matrix, I can't just call a function such as warpPerspective() and get the desired result. I would need to compute the new parameters for the camera's view and projection matrices. Question: Is it possible to compute these parameters and split them into two matrices (view and projection)? If not, is there another approach I could use?


  • Game physics presentation by Richard Lord, some questions

    - by Steve
    I have been implementing (in XNA) the examples from this physics presentation by Richard Lord, where he discusses various integration techniques. Bearing in mind that I am a newcomer to game physics (and physics in general), I have some questions. Fifteen slides in, he shows ActionScript code for a gravity example and an animation showing a bouncing ball. The ball bounces higher and higher until it is out of control. I implemented the same in C# XNA, but my ball appeared to bounce at a constant height. The same applies to the next example, where the ball bounces lower and lower. After some experimentation I found that if I switched to a fixed timestep, and then on the first iteration of Update() set the time variable equal to the elapsed milliseconds (16.6667), I would see the same behaviour. Doing this essentially set the framerate, velocity and acceleration to zero for the first update and introduced errors(?) into the algorithm, causing the ball's velocity to increase (or decrease) over time. I think! My question is: does this make the integration method used poor? Or does it demonstrate that the method is poor when used with a variable timestep, because you can't pass in a valid value for the first set of calculations (since you cannot know the framerate in advance)? I will continue my research into physics, but can anyone suggest a good method to get my feet wet? I would like to experiment with a variable timestep, acceleration that changes over time, and probably friction. Would the time-corrected Verlet be OK for this?
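
    For reference, a minimal sketch of the time-corrected Verlet step mentioned at the end (one dimension; the class and field names are illustrative). The correction scales the previous displacement by the ratio of the current timestep to the previous one, which is what keeps the integrator stable when the timestep varies:

        // Time-corrected Verlet: x' = x + (x - xPrev) * (dt / dtPrev) + a * dt * dt
        class TimeCorrectedVerlet
        {
            double position;     // current position
            double previousPos;  // position one step ago
            double previousDt;   // duration of the previous step

            public TimeCorrectedVerlet(double x0)
            {
                position = previousPos = x0;
                previousDt = 1.0 / 60.0; // assume one 60 Hz frame before the first update
            }

            public void Step(double acceleration, double dt)
            {
                double next = position
                              + (position - previousPos) * (dt / previousDt) // corrected velocity term
                              + acceleration * dt * dt;                      // acceleration term
                previousPos = position;
                position = next;
                previousDt = dt;
            }
        }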


  • Trace flags - TF 1117

    - by Damian
    I had a session about trace flags this year at the SQL Day 2014 conference, held in Wroclaw at the end of April. The session topic is important to most DBAs, and the reason I did it was that I sometimes forget about the various trace flags :). So I decided to prepare a presentation, but I think it is a good idea to write posts about trace flags, too. Let's start then: today I will describe TF 1117. I assume that we all know how to set up a TF using startup parameters, the registry, the session, or the query level. I will always state whether a trace flag is local or global, to make sure we know how to use it.

    Why do we need this trace flag? Let's create a test database first. This is a quite ordinary database: it has two data files (4 MB each) and a 1 MB log file. The data files can expand by 1 MB at a time and the log file grows by 10%:

        USE [master]
        GO
        CREATE DATABASE [TF1117]
        ON PRIMARY
        ( NAME = N'TF1117',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117.mdf',
          SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ),
        ( NAME = N'TF1117_1',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_1.ndf',
          SIZE = 4096KB, MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
        LOG ON
        ( NAME = N'TF1117_log',
          FILENAME = N'C:\Program Files\Microsoft SQL Server\MSSQL12.SQL2014\MSSQL\DATA\TF1117_log.ldf',
          SIZE = 1024KB, MAXSIZE = 2048GB, FILEGROWTH = 10% )
        GO

    Without TF 1117 turned on, the data files don't all grow at once. When the first file is full, SQL Server expands it, but the other file is not expanded until it too is full. Why is that so important? The SQL Server proportional fill algorithm directs new extent allocations to the file with the most available space, so new extents will be written to the file that was just expanded. When TF 1117 is enabled, it causes all files to auto-grow by their specified increment. That means all files keep the same percentage of free space, so we still have the benefit of evenly distributed IO. TF 1117 is a global flag, so it affects all databases on the instance. Of course, if a filegroup contains only one file, the TF has no effect on it.

    Now let's do a simple test. First, let's create a table in which every row fills a single page. The table definition is pretty simple, as it has two integer columns and one character column of fixed size 8000 bytes:

        create table TF1117Tab
        (
            col1 int,
            col2 int,
            col3 char (8000)
        )
        go

    Now I load some data into the table to make sure that one of the data files must grow:

        declare @i int
        select @i = 1
        while (@i < 800)
        begin
            insert into TF1117Tab values (@i, @i+1000, 'hello')
            select @i = @i + 1
        end

    I can check the actual file sizes in the sys.database_files DMV:

        SELECT name, (size*8)/1024 'Size in MB'
        FROM sys.database_files
        GO

    As you can see, only the first data file was expanded; the other still has its initial size:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              4

    There are also other ways of looking at file auto-grow events. One possibility is to create an Extended Events session; the other is to look into the default trace file:

        DECLARE @path NVARCHAR(260);

        SELECT @path = REVERSE(SUBSTRING(REVERSE([path]),
               CHARINDEX('\', REVERSE([path])), 260)) + N'log.trc'
        FROM   sys.traces
        WHERE  is_default = 1;

        SELECT DatabaseName,
               [FileName],
               SPID,
               Duration,
               StartTime,
               EndTime,
               FileType = CASE EventClass
                               WHEN 92 THEN 'Data'
                               WHEN 93 THEN 'Log'
                          END
        FROM sys.fn_trace_gettable(@path, DEFAULT)
        WHERE EventClass IN (92,93)
          AND StartTime > '2014-07-12'
          AND DatabaseName = N'TF1117'
        ORDER BY StartTime DESC;

    After running the query I can see that the file was expanded, and how long the process took, which might be useful from a performance perspective.

    Now it's time to turn on trace flag 1117:

        DBCC TRACEON(1117)

    I dropped the database and recreated it once again. Then I ran the queries and observed the results. After loading the records, I see that both files were evenly expanded:

        name                  Size in MB
        --------------------- -----------
        TF1117                5
        TF1117_log            1
        TF1117_1              5

    I also found the information in the default trace. The query returned three rows. The last one relates to my first experiment, when the TF was turned off. The other two rows show that the first file was expanded by 1 MB and, right after that operation, the second file was expanded too. That is what this TF is all about :)
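
    As a side note, because this is a global flag, here is a minimal sketch of enabling it for the whole instance and verifying that it took effect (the -1 switch applies the flag globally):

        DBCC TRACEON (1117, -1);  -- enable globally for the whole instance
        DBCC TRACESTATUS (1117);  -- Status = 1 when the flag is active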


  • Reminder: Premier Support for EBS 11i ends November 2010

    - by Steven Chan
    Apps sysadmins are going to have a busy year. If you're still running your E-Business Suite environment on the 10gR2 database, I hope that you're aware that Premier Support for 10.2 ends in July 2010. But if you're still on Oracle E-Business Suite Release 11i version 11.5.10, the impending end of Premier Support this year, on November 30, 2010, is even more important. Support windows for Oracle E-Business Suite are listed here:

    Oracle Lifetime Support > "Lifetime Support Policy: Oracle Applications" (PDF)

    Premier Support runs for five years from a product's first release. In the case of Oracle E-Business Suite Release 11.5.10, that window was increased to six years in recognition of the challenges that some of you face in justifying major upgrades in today's economy. Here's a graphical summary of EBS 11.5.10's support stages.

    First year of Extended Support fees for EBS 11.5.10 waived

    Regular readers may recall that fees for the first year of Extended Support for EBS 11.5.10 are waived. There is nothing that customers need to do to remain fully supported other than keeping your support contracts current. Higher fees for Extended Support will start December 1, 2011 for most platforms. This is formally documented here:

    Technical Support Policies > "Oracle's Technical Support Policies" (PDF)


  • MySQL Connect Conference: My Experience

    - by Hema Sridharan
    It was a great experience to attend the MySQL Connect conference for the first time ever. Personally, I was very enthralled to present "How to make MySQL Backups", besides attending different sessions to absorb more knowledge about the technical prospects of MySQL. One of the agenda items in my presentation was the functionality and features of MySQL Enterprise Backup. There were a total of 40 attendees in the session, who were very interested in the MySQL Enterprise Backup product and gave positive feedback as well as areas of improvement for our product. Some of our features brought a lot of excitement and smiles to our customers, including:

    1. Performance improvements in MEB 3.8.0.
    2. The incremental base option from MEB 3.7.1, where there is no need to specify the directory name of the previous backup to fetch the LSN values; instead, they can be fetched directly from the backup_history table using --incremental-base=history:last_backup (see the sketch at the end of this post).
    3. The only-innodb-with-frm option introduced in MEB 3.7: a true online hot backup of InnoDB tables.

    I also attended a session on a similar topic, "MEB Best Practices", conducted by Sanjay Manwani, where he double-clicked all the features and best strategies of backup & restore. I also got an opportunity to attend other sessions, including:

    * Enabling the new generation of web and cloud services with MySQL 5.6 replication
    * Getting the most out of MySQL with MySQL Workbench
    * InnoDB compression for OLTP
    * Scaling for the Web and Cloud with MySQL replication

    Above all, I had some special moments at the conference, including meeting some of the executives/colleagues face to face for the first time. On the whole, the first MySQL Connect conference was a great success in terms of showcasing the features of our products, direct feedback from customers, and team building. We also had some applauding yahoo moments when Tomas Ulin announced different releases including MySQL 5.6 RC, Connector/Python 1.0 and ODBC 5.2, MySQL Cluster 7.3, additions to MySQL Enterprise Edition, etc.
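
    A sketch of how that incremental-base option looks on the mysqlbackup command line (the paths and credentials are illustrative; check the exact options against the MEB manual for your version):

        mysqlbackup --user=admin --password \
            --incremental --incremental-base=history:last_backup \
            --incremental-backup-dir=/backups/inc1 \
            backup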


  • Do Repeat Yourself in Unit Tests

    - by João Angelo
    Don't get me wrong, I'm a big supporter of the DRY (Don't Repeat Yourself) principle, except when it comes to unit tests. Why? Well, in my opinion a unit test should be a self-contained group of actions with the intent to test a very specific piece of code, and it should not depend on externals shared with other unit tests. In a typical unit test we can divide the code into two major groups: the preparation of preconditions for the code under test, and the invocation of the code under test. It's in the first group that you are tempted to refactor common code from several unit tests into helper methods that can then be called in each one of them. Another way to avoid duplicating code is to use the built-in infrastructure of some unit test frameworks, such as SetUp/TearDown methods that automatically run before and after each unit test. I must admit that in the past I was guilty of both charges, but what at first seemed a good idea, since I was removing code duplication, turned out to offer no added value and even to complicate the process when a given test fails. We love unit tests because of their rapid feedback when something goes wrong. However, this feedback most of the time requires reading the code of the failed test. Given this, what do you prefer: to read a single method, or to wander through several methods like SetUp/TearDown and private common methods? I say it again, do repeat yourself in unit tests. It may feel wrong at first, but I bet you won't regret it later.
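
    To make this concrete, a minimal sketch in NUnit syntax (the Cart class is hypothetical); note how each test repeats its own one-line setup instead of sharing a SetUp method, so a failing test can be read top to bottom on its own:

        using NUnit.Framework;

        [TestFixture]
        public class CartTests
        {
            [Test]
            public void Total_SumsItemPrices()
            {
                var cart = new Cart();               // preconditions, repeated on purpose
                cart.Add("book", 10m);
                cart.Add("pen", 2m);

                Assert.AreEqual(12m, cart.Total());  // invocation of the code under test
            }

            [Test]
            public void Total_IsZeroForEmptyCart()
            {
                var cart = new Cart();               // same setup, repeated

                Assert.AreEqual(0m, cart.Total());
            }
        }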


  • What's New in 5.6 RC and more from MySQL Connect conference

    - by Rob Young
    Keeping with the tradition of great MySQL community events, the first annual MySQL Connect conference is now in the books. It was great to see so many familiar faces in the crowd and at the podium sharing their ideas and thoughts on the evolution of MySQL under Oracle. The headliner of the conference was Tomas' keynote announcement of the fully featured and fully enabled MySQL 5.6 Release Candidate. This new article on the MySQL DevZone summarizes all of the great new features ready for community adoption, all the MySQL Engineering blogs, and where and how to download all of the bits. As always, early adoption and feedback on the 5.6 RC is appreciated, and the sooner we get your feedback, the sooner we release the "ready for production" sanctioned GA product.

    Also available now, Cluster 7.3 provides support for foreign keys, node.js NoSQL access to the underlying data, and a new Auto Installer that helps you quickly and easily get up and running with Cluster 7.2 and 7.3. The 7.3 downloads are provided in the first 7.3 Development Milestone Release (under the "Development Releases" tab) and via the MySQL Labs.

    Oracle also announced key new additions to MySQL Enterprise Edition:

    - New policy-based compliance auditing. MySQL Enterprise Edition Audit adds policy-based auditing compliance to existing MySQL applications without the need to change any code. The new plugin is available for MySQL 5.5.28 and higher; existing MySQL Enterprise Edition customers can download the upgrade from the My Oracle Support portal, and everyone can download it for evaluation from Oracle's Software Delivery Cloud.
    - New MySQL Enterprise High Availability additions provide even more options for ensuring MySQL applications remain available and running at their peak: Oracle Linux + DRBD, and Oracle Solaris Clustering for MySQL.

    All in all, the first MySQL Connect conference was a great success, and with refinements planned in response to attendee, sponsor and speaker feedback, we expect it to grow and improve going forward. As always, thanks for your continued support of MySQL!


  • Find The Bug

    - by Alois Kraus
    What does this code print, and why?

        HashSet<int> set = new HashSet<int>();
        int[] data = new int[] { 1, 2, 1, 2 };
        var unique = from i in data
                     where set.Add(i)
                     select i;
        // Compiles to: var unique = Enumerable.Where(data, (i) => set.Add(i));

        foreach (var i in unique)
        {
            Console.WriteLine("First: {0}", i);
        }

        foreach (var i in unique)
        {
            Console.WriteLine("Second: {0}", i);
        }

    The output is:

        First: 1
        First: 2

    Why is there no output from the second loop? The reason is that LINQ does not cache the results of the collection; it recalculates the contents on every new enumeration. Since I have used state (the HashSet decides which entries are part of the output), I end up with an empty sequence: Add on the HashSet returns false for all the values I have already passed in, leaving nothing to return the second time. The solution is quite simple: use the Distinct extension method, or cache the results by calling ToList() or ToArray() on the result of the LINQ query. Lesson learned: never forget to think about state in Where clauses!
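
    A minimal sketch of the two fixes mentioned above; with either one, the second loop prints as well:

        using System;
        using System.Collections.Generic;
        using System.Linq;

        class Program
        {
            static void Main()
            {
                int[] data = { 1, 2, 1, 2 };

                // Fix 1: let LINQ de-duplicate; no external state involved.
                var unique = data.Distinct();

                // Fix 2: keep the stateful Where, but snapshot the result once.
                var set = new HashSet<int>();
                var cached = data.Where(i => set.Add(i)).ToList();

                foreach (var i in cached) Console.WriteLine("First: {0}", i);
                foreach (var i in cached) Console.WriteLine("Second: {0}", i); // now prints too
            }
        }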


  • Macedonian Code Camp 2011

    - by hajan
    Autumn was filled with a lot of conferences, events, speaking engagements and many interesting happenings in Skopje, Macedonia. First, on October 20, I spoke at Microsoft Vizija 9 on the topic of ASP.NET MVC3 and Razor. One week ago, on November 15, I spoke for the first time on a topic not related to web development (though deployment of web apps was part of the demos): "Cloud Computing - Windows Azure" at the Microsoft BizSpark Bootcamp. The next event, which is the biggest event by number of visitors and number of tracks, is the Code Camp 2011 event. After we opened registration for the event, all 600 (free) tickets were gone in the first 15 hours! We were all astonished by the extremely large number of responses we got... At this event we can expect about 700 attendees to come, and we already have 900+ registered. The event will be held on Saturday, 26 November 2011.

    At Code Camp 2011, I will speak on the topic of ASP.NET MVC Best Practices. There are many interesting things to say in this presentation; I will mainly focus on tips, tricks, guidelines and other practices that I have been using in real-life projects developed using the ASP.NET MVC Framework, with special focus on ASP.NET MVC3 and the next release, ASP.NET MVC4 Developer Preview. There is a large number of well-known local and regional speakers, including 7 MVPs. You can find more info about this event at the official event website: http://codecamp.mkdot.net

    As for my session, if you have some interesting trick or good practice you have been using in your ASP.NET MVC projects, you can freely share it with me... If I find it interesting, and if it's not part of the current practices I have included in the presentation (I can't tell you which ones for now... *secret* ;))... I will consider including it in the presentation. Stay tuned for more info soon...

    Regards, Hajan


  • MVC 4 and the App_Start folder

    - by pjohnson
    I've been delving into ASP.NET MVC 4 a little since its release last month. One thing I was chomping at the bit to explore was its bundling and minification functionality, for which I'd previously used Cassette, and been fairly happy with it. MVC 4's functionality seems very similar to Cassette's; the latter's CassetteConfiguration class matches the former's BundleConfig class, specified in a new directory called App_Start.

    At first glance, this seems like another special ASP.NET folder, like App_Data, App_GlobalResources, App_LocalResources, and App_Browsers. But Visual Studio 2010's lack of knowledge about it (no Solution Explorer option to add the folder, nor a fancy icon for it) made me suspicious. I found the MVC 4 project template has five classes there: AuthConfig, BundleConfig, FilterConfig, RouteConfig, and WebApiConfig. Each of these is called explicitly in Global.asax's Application_Start method, as sketched below. Why create separate classes, each with a single static method? Maybe they anticipate a lot more code being added there for large applications, but for small ones, it seems like overkill. (And they seem hastily implemented: some declared as static and some not, in the base namespace instead of an App_Start/AppStart one.) Even for a large application I work on with a substantial amount of code in Global.asax.cs, a RouteConfig might be warranted, but the other classes would remain tiny.

    More importantly, it appears App_Start has no special magic like the other folders; it's just convention. I found it first described in the MVC 3 timeframe by Microsoft architect David Ebbo, for the benefit of NuGet and WebActivator; apparently some packages will add their own classes to that directory as well. One of the first appears to be Ninject, as most mentions of that folder mention it, and there's not much information elsewhere about this new folder.
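
    For reference, this is roughly how the default MVC 4 template wires those five classes up in Global.asax.cs (reproduced from memory of the template, so treat the exact calls as approximate):

        protected void Application_Start()
        {
            AreaRegistration.RegisterAllAreas();

            WebApiConfig.Register(GlobalConfiguration.Configuration);
            FilterConfig.RegisterGlobalFilters(GlobalFilters.Filters);
            RouteConfig.RegisterRoutes(RouteTable.Routes);
            BundleConfig.RegisterBundles(BundleTable.Bundles);
            AuthConfig.RegisterAuth();
        }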


  • Adding Vertices to a dynamic mesh via Method Call

    - by Raven Dreamer
    I have a C# struct with a static method, "GetShape", which populates a list with the vertices of a polyhedron. Method signature:

        public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s)

    Adding directly to the vertices list (via vertices.Add(vector3)), the code works as expected, and the new polyhedron appears when I trigger the method. However, I want to do some processing on the vertices I'm adding (a rotation), and the most sensible way I can think to do that is by creating a separate list of Vector3s and then combining the lists when I'm done. However, vertices.AddRange(newVerts) does not add the shape to the mesh, nor does a foreach loop with verts.Add(vertices[i]). And this is before I've added in any of the processing! I have a feeling this might stem from passing the list of vertices in as a parameter, rather than returning a list and then adding to the vertices in the calling object, but since I'm filling 4 lists, I was trying to avoid having to create a data struct to return all four at once. Any ideas? The working version of the method is reprinted below, in full:

        public static void GetShape(Block b, int x, int y, int z, List<Vector3> vertices, List<int> triangles, List<Vector2> uvs, List<Vector2> uv2s)
        {
            //List<Vector3> vertices = new List<Vector3>();
            int l_blockShape = b.blockShape;
            int l_blockType = b.blockType;
            //CheckFace checks if the block is empty
            //if this block is empty, don't draw anything.
            int vertexIndex;
            //only y faces need to be hidden.
            //if((l_blockShape & BlockShape.NegZFace) == BlockShape.NegZFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .2f, y + 1, z + .2f));
                vertices.Add(new Vector3(x + .8f, y + 1, z + .2f));
                vertices.Add(new Vector3(x + .8f, y, z + .2f));
                vertices.Add(new Vector3(x + .2f, y, z + .2f));
                // first triangle for the face
                triangles.Add(vertexIndex);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 3);
                // second triangle for the face
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 3);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
            //XY Z+1 face
            //if((l_blockShape & BlockShape.PosZFace) == BlockShape.PosZFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .8f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .2f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .2f, y, z + .8f));
                vertices.Add(new Vector3(x + .8f, y, z + .8f));
                // first triangle for the face
                triangles.Add(vertexIndex);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 3);
                // second triangle for the face
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 3);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
            //ZY face
            //if((l_blockShape & BlockShape.NegXFace) == BlockShape.NegXFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .2f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .2f, y + 1, z + .2f));
                vertices.Add(new Vector3(x + .2f, y, z + .2f));
                vertices.Add(new Vector3(x + .2f, y, z + .8f));
                // first triangle for the face
                triangles.Add(vertexIndex);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 3);
                // second triangle for the face
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 3);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
            //ZY X+1 face
            //if((l_blockShape & BlockShape.PosXFace) == BlockShape.PosXFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .8f, y + 1, z + .2f));
                vertices.Add(new Vector3(x + .8f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .8f, y, z + .8f));
                vertices.Add(new Vector3(x + .8f, y, z + .2f));
                // first triangle for the face
                triangles.Add(vertexIndex);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 3);
                // second triangle for the face
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 3);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
            //ZX face
            if ((l_blockShape & BlockShape.NegYFace) == BlockShape.NegYFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .8f, y, z + .8f));
                vertices.Add(new Vector3(x + .8f, y, z + .2f));
                vertices.Add(new Vector3(x + .2f, y, z + .2f));
                vertices.Add(new Vector3(x + .2f, y, z + .8f));
                // first triangle for the face
                triangles.Add(vertexIndex + 3);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex);
                // second triangle for the face
                triangles.Add(vertexIndex + 3);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 1);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
            //ZX + 1 face
            if ((l_blockShape & BlockShape.PosYFace) == BlockShape.PosYFace)
            {
                vertexIndex = vertices.Count;
                //top left, top right, bottom right, bottom left
                vertices.Add(new Vector3(x + .8f, y + 1, z + .2f));
                vertices.Add(new Vector3(x + .8f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .2f, y + 1, z + .8f));
                vertices.Add(new Vector3(x + .2f, y + 1, z + .2f));
                // first triangle for the face
                triangles.Add(vertexIndex + 3);
                triangles.Add(vertexIndex + 1);
                triangles.Add(vertexIndex);
                // second triangle for the face
                triangles.Add(vertexIndex + 3);
                triangles.Add(vertexIndex + 2);
                triangles.Add(vertexIndex + 1);
                //UVs for the face
                uvs.Add(new Vector2(0, 1));
                uvs.Add(new Vector2(1, 1));
                uvs.Add(new Vector2(1, 0));
                uvs.Add(new Vector2(0, 0));
                //UV2s (lightmapping?)
                uv2s.Add(new Vector2(0, 1));
                uv2s.Add(new Vector2(1, 1));
                uv2s.Add(new Vector2(1, 0));
                uv2s.Add(new Vector2(0, 0));
            }
        }
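
    As an aside on the rotation mentioned at the top: a minimal sketch of the build-rotate-append idea, assuming XNA's Vector3 and Matrix types (the angle and centre are illustrative). Note that the list parameters are references, so AddRange mutates the very same list that Add does; if AddRange appears to do nothing, it is worth checking that the caller rebuilds the mesh from these lists afterwards.

        // Build one face into a temporary list first.
        List<Vector3> newVerts = new List<Vector3>();
        newVerts.Add(new Vector3(x + .2f, y + 1, z + .2f));
        newVerts.Add(new Vector3(x + .8f, y + 1, z + .2f));
        newVerts.Add(new Vector3(x + .8f, y, z + .2f));
        newVerts.Add(new Vector3(x + .2f, y, z + .2f));

        // Rotate every vertex about the block centre, then append in one go.
        Vector3 centre = new Vector3(x + .5f, y + .5f, z + .5f);
        Matrix rotation = Matrix.CreateRotationY(MathHelper.PiOver4); // illustrative angle
        for (int i = 0; i < newVerts.Count; i++)
        {
            newVerts[i] = Vector3.Transform(newVerts[i] - centre, rotation) + centre;
        }
        vertices.AddRange(newVerts); // appends to the same list instance the caller passed in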


  • HTML5-MVC application using VS2010 SP1

    - by nmarun
    This is my first attempt at creating HTML5 pages. VS 2010 allows working with HTML5 now (you just need to make a small change after installing SP1), so my Razor view is now an HTML5 page. I call this application 5Commerce: an (over-simplified) HTML5 e-commerce site. Here's the flow of the application:

    - the home page renders
    - the user enters first and last name, chooses a product and the quantity
    - the user can enter additional instructions for the order
    - the user places the order
    - the user is then taken to another page showing the order details

    Off to the details. This is what my page looks like in Google Chrome 10 beta (or later) soon after it renders. Here are some of the things to observe on this.

    Look a little closer and you'll see a border around the first name textbox: this is 'autofocus' in action. I've set the autofocus attribute on this textbox, so as soon as the page loads, this control gets focus.

        <input type="text" autofocus id="firstName" class="inputWidth" data_minlength="" data_maxlength="" placeholder="first name" />

    See the partially grayed-out 'last name' text in the second textbox? This is set using the placeholder attribute (see above). It gets wiped out on focus and improves the UI visuals in general.

    The quantity textbox is actually a numerical-only textbox:

        <input type="number" id="quantity" data_mincount="" class="inputWidth" />

    The last line is for additional instructions. This looks like a label, but its content is editable: just adding the 'contenteditable' attribute to the span allows the user to edit the text inside.

        <span contenteditable id="additionalInstructions" data_texttype="" class="editableContent">select text and edit </span>

    All of the above is just plain HTML (no lurking JavaScript acting in here). Makes it real clean and simple.

    Going more into the HTML, I see that _Layout.cshtml is already using some HTML5 content. I created my project before installing SP1, so that was the reason for my surprise.

        <!DOCTYPE html>

    This is the doctype declaration in HTML5, and it is supported even by IE6 (just take my word on IE6 now, don't go install it to test it, especially when MS is doing an IE6 countdown). That's just amazing and extremely easy to read, remember and talk about, plus a few less bytes on every call! I modified the rest of my _Layout.cshtml to the below:

        <!DOCTYPE html>
        <html>
        <head>
            <title>5Commerce - HTML 5 Ecommerce site</title>
            <link href="@Url.Content("~/Content/Site.css")" rel="stylesheet" type="text/css" />
            <script src="@Url.Content("~/Scripts/jquery-1.4.4.min.js")" type="text/javascript"></script>
            <script src="@Url.Content("~/Scripts/CustomScripts.js")" type="text/javascript"></script>
            <script type="text/javascript">
                $(document).ready(function () {
                    WireupEvents();
                });
            </script>
        </head>

        <body role="document" class="bodybackground">
            <header role="heading">
                <h2>5Commerce - HTML 5 Ecommerce site!</h2>
            </header>
            <section id="mainForm">
                @RenderBody()
            </section>
            <footer id="page_footer" role="siteBaseInfo">
                <p>&copy; 2011 5Commerce Inc!</p>
            </footer>
        </body>
        </html>

    I'm sure you're seeing some of the new tags here. To give a brief intro about them:

    - <header>, <footer>: marks the header/footer region of a page or section.
    - <section>: a logical grouping of content.
    - role attribute: identifies the responsibility of an element. This attribute can be used by screen readers and can also be filtered through jQuery.

    SP1 also allows for some IntelliSense in HTML5.
    You see the other types of input fields as well - email, date, datetime, month, url and others. So once my page loads, i.e. 'on document ready', I'm wiring up the events following the principles of unobtrusive JavaScript. In the snippet below, I'm controlling the behavior of the input controls for specific events.

        $("#productList").bind('change blur', function () {
            IsSelectedProductValid();
        });

        $("#quantity").bind('blur', function () {
            IsQuantityValid();
        });

        $("#placeOrderButton").click(
            function () {
                if (IsPageValid()) {
                    LoadProducts();
                }
            });

    This enables some client-side validation to occur before the data is sent to the server. These validation constraints are obtained through a JSON call to the WCF service and are set on the 'data_' attributes of the input controls. Have a look at the GetValidators() function below:

        function GetValidators() {
            // the call to your web service or page
            $.ajax({
                type: "GET", // GET, POST, PUT or DELETE verb
                url: "http://localhost:14805/OrderService.svc/GetValidators", // location of the service
                data: "{}", // data sent to server
                contentType: "application/json; charset=utf-8", // content type sent to server
                dataType: "json", // expected data format from server
                processData: true,
                success: function (result) { // on successful service call
                    if (result.length > 0) {
                        for (var i = 0; i < result.length; i++) {
                            if (result[i].PropertyName == "FirstName") {
                                if (result[i].MinLength > 0) {
                                    $("#firstName").attr("data_minLength", result[i].MinLength);
                                }
                                if (result[i].MaxLength > 0) {
                                    $("#firstName").attr("data_maxLength", result[i].MaxLength);
                                }
                            }
                            else if (result[i].PropertyName == "LastName") {
                                if (result[i].MinLength > 0) {
                                    $("#lastName").attr("data_minLength", result[i].MinLength);
                                }
                                if (result[i].MaxLength > 0) {
                                    $("#lastName").attr("data_maxLength", result[i].MaxLength);
                                }
                            }
                            else if (result[i].PropertyName == "Quantity") {
                                if (result[i].MinCount > 0) {
                                    $("#quantity").attr("data_minCount", result[i].MinCount);
                                }
                            }
                            else if (result[i].PropertyName == "AdditionalInstructions") {
                                if (result[i].TextType.length > 0) {
                                    $("#additionalInstructions").attr("data_textType", result[i].TextType);
                                }
                            }
                        }
                    }
                },
                error: function (result) { // when service call fails
                    alert('Service call failed: ' + result.status + ' ' + result.statusText);
                }
            });

            //....
        }

    Just before the GetValidators() function runs and sets the validation constraints, this is what the HTML looks like (seen through the dev tools of Chrome). After the function executes, you see the values in the 'data_' attributes. As and when we enter valid data into these fields, the error messages disappear, since the validation is bound to the blur event of each control. There you see... no error messages (well, the catch here is that once you enter THAT name, all errors disappear automatically).
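    (The individual validator functions - IsQuantityValid(), IsPageValid() and friends - are not shown in the post. Purely as an illustration of how one of them could consume the 'data_' attributes that GetValidators() fills in, here is a sketch; the error-label id is my own invention:)

        // Sketch only: validate the quantity field against the
        // data_minCount attribute set by GetValidators().
        function IsQuantityValid() {
            var quantity = parseInt($("#quantity").val(), 10);
            var minCount = parseInt($("#quantity").attr("data_minCount"), 10);
            var isValid = !isNaN(quantity) && (isNaN(minCount) || quantity >= minCount);
            $("#quantityError").toggle(!isValid); // hypothetical error label
            return isValid;
        }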
    Clicking on 'Place Order!' runs the SaveOrder function. You can see the JSON for the order object that is getting constructed and passed to the WCF service:

        function SaveOrder() {
            var addlInstructionsDefaultText = "select text and edit";
            var addlInstructions = $("span:first").text();
            if (addlInstructions == addlInstructionsDefaultText) {
                addlInstructions = '';
            }
            var orderJson = {
                AdditionalInstructions: addlInstructions,
                Customer: {
                    FirstName: $("#firstName").val(),
                    LastName: $("#lastName").val()
                },
                OrderedProduct: {
                    Id: $("#productList").val(),
                    Quantity: $("#quantity").val()
                }
            };

            // the post to your web service or page
            $.ajax({
                type: "POST", // GET, POST, PUT or DELETE verb
                url: "http://localhost:14805/OrderService.svc/SaveOrder", // location of the service
                data: JSON.stringify(orderJson), // data sent to server
                contentType: "application/json; charset=utf-8", // content type sent to server
                dataType: "json", // expected data format from server
                processData: false,
                success: function (result) { // on successful service call
                    window.location.href = "http://localhost:14805/home/ShowOrderDetail/" + result;
                },
                error: function (request, error) { // when service call fails
                    alert('Service call failed: ' + request.status + ' ' + request.statusText);
                }
            });
        }

    The service saves this order into an XML file and returns the order id (a guid). On success, I redirect to the ShowOrderDetail action method, passing the guid. This page shows all the details of the order. Although the back-end weightlifting is done by WCF, I did not show any of that plumbing-work, as I wanted to concentrate more on HTML5 and its associates. However, you can see it all in the source here. I do have one issue with HTML5, and it is an existing issue with HTML4 as well. If you look at the snippet above where I declared a textbox for first name, you'll see the autofocus attribute just dangling by itself. It doesn't follow the XML syntax of key="value", allowing users to continue writing badly-formatted HTML even in the new version. You'll see the same issue with the 'contenteditable' attribute as well. The work-around is that you can write autofocus="true", and it'll work fine and be well-formed. But unless the standards enforce this, there will be people (me included) who'll get by with typing the bare minimum! Hoping this will get fixed in the coming version-updates. Source code here. Verdict: I think it's time for us to embrace the new HTML5. Thank you HTML4, and welcome HTML5.

    Read the article

  • Convert ddply {plyr} to Oracle R Enterprise, or use with Embedded R Execution

    - by Mark Hornick
    The plyr package contains a set of tools for partitioning a problem into smaller sub-problems that can be more easily processed. One function within {plyr} is ddply, which allows you to specify subsets of a data.frame and then apply a function to each subset. The result is gathered into a single data.frame. Such a capability is very convenient. The function ddply also has a parallel option that, if TRUE, will apply the function in parallel, using the backend provided by foreach. This type of functionality is available through Oracle R Enterprise using the ore.groupApply function. In this blog post, we show a few examples from Sean Anderson's "A quick introduction to plyr" to illustrate the corresponding functionality using ore.groupApply. To get started, we'll create a demo data set and load the plyr package.

        set.seed(1)
        d <- data.frame(year = rep(2000:2014, each = 3),
                        count = round(runif(45, 0, 20)))
        dim(d)
        library(plyr)

    This first example takes the data frame, partitions it by year, and calculates the coefficient of variation of the count, returning a data frame.

        # Example 1
        res <- ddply(d, "year", function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(cv.count = cv)
        })

    To illustrate the equivalent functionality in Oracle R Enterprise, using embedded R execution, we use the ore.groupApply function on the same data, but pushed to the database, creating an ore.frame. The function ore.push creates a temporary table in the database, returning a proxy object, the ore.frame.

        D <- ore.push(d)
        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          sd.count <- sd(x$count)
          cv <- sd.count/mean.count
          data.frame(year=x$year[1], cv.count = cv)
        }, FUN.VALUE=data.frame(year=1, cv.count=1))

    You'll notice the similarities in the first three arguments. With ore.groupApply, we augment the function to return the specific data.frame we want. We also specify the argument FUN.VALUE, which describes the resulting data.frame. From our previous blog posts, you may recall that by default, ore.groupApply returns an ore.list containing the results of each function invocation. To get a data.frame, we specify the structure of the result. The results in both cases are the same; however, the ore.groupApply result is an ore.frame. In this case the data stays in the database until it's actually required. This can result in significant memory and time savings when data is large.

        R> class(res)
        [1] "ore.frame"
        attr(,"package")
        [1] "OREbase"
        R> head(res)
          year  cv.count
        1 2000 0.3984848
        2 2001 0.6062178
        3 2002 0.2309401
        4 2003 0.5773503
        5 2004 0.3069680
        6 2005 0.3431743
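    (If you do want the result materialized on the client, ore.pull converts an ore.frame to a local data.frame. A quick sanity check against the local plyr computation might look like this sketch:)

        # Sketch: pull the ore.frame to the client and confirm it matches
        # the local plyr result from Example 1.
        ore.res <- ore.pull(res)        # now an ordinary data.frame
        local.res <- ddply(d, "year", function(x) {
          data.frame(cv.count = sd(x$count)/mean(x$count))
        })
        all.equal(local.res$cv.count,
                  ore.res$cv.count[order(ore.res$year)])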
    To make the ore.groupApply execute in parallel, you can specify the argument parallel with either TRUE, to use default database parallelism, or a specific number, which serves as a hint to the database as to how many parallel R engines should be used. The next ddply example uses the summarise function, which creates a new data.frame. In ore.groupApply, the year column is passed in with the data. Since no automatic creation of columns takes place, we explicitly set the year column in the data.frame result to the value of the first row, since all rows received by the function have the same year.

        # Example 2
        ddply(d, "year", summarise, mean.count = mean(count))

        res <- ore.groupApply(D, D$year, function(x) {
          mean.count <- mean(x$count)
          data.frame(year=x$year[1], mean.count = mean.count)
        }, FUN.VALUE=data.frame(year=1, mean.count=1))

        R> head(res)
          year mean.count
        1 2000   7.666667
        2 2001  13.333333
        3 2002  15.000000
        4 2003   3.000000
        5 2004  12.333333
        6 2005  14.666667

    Example 3 uses the transform function with ddply, which modifies the existing data.frame. With ore.groupApply, we again construct the data.frame explicitly, and it is returned as an ore.frame.

        # Example 3
        ddply(d, "year", transform, total.count = sum(count))

        res <- ore.groupApply(D, D$year, function(x) {
          total.count <- sum(x$count)
          data.frame(year=x$year[1], count=x$count, total.count = total.count)
        }, FUN.VALUE=data.frame(year=1, count=1, total.count=1))

        R> head(res)
          year count total.count
        1 2000     5          23
        2 2000     7          23
        3 2000    11          23
        4 2001    18          40
        5 2001     4          40
        6 2001    18          40

    In Example 4, the mutate function with ddply enables you to define new columns that build on columns just defined. Since the construction of the data.frame using ore.groupApply is explicit, you always have complete control over when and how to use columns.

        # Example 4
        ddply(d, "year", mutate, mu = mean(count), sigma = sd(count),
              cv = sigma/mu)

        res <- ore.groupApply(D, D$year, function(x) {
          mu <- mean(x$count)
          sigma <- sd(x$count)
          cv <- sigma/mu
          data.frame(year=x$year[1], count=x$count, mu=mu, sigma=sigma, cv=cv)
        }, FUN.VALUE=data.frame(year=1, count=1, mu=1, sigma=1, cv=1))

        R> head(res)
          year count        mu    sigma        cv
        1 2000     5  7.666667 3.055050 0.3984848
        2 2000     7  7.666667 3.055050 0.3984848
        3 2000    11  7.666667 3.055050 0.3984848
        4 2001    18 13.333333 8.082904 0.6062178
        5 2001     4 13.333333 8.082904 0.6062178
        6 2001    18 13.333333 8.082904 0.6062178

    In Example 5, ddply is used to partition data on multiple columns before constructing the result. Realizing this with ore.groupApply involves creating an index column out of the concatenation of the columns used for partitioning. This example also allows us to illustrate using the ORE transparency layer to subset the data.

        # Example 5
        baseball.dat <- subset(baseball, year > 2000) # data from the plyr package
        x <- ddply(baseball.dat, c("year", "team"), summarize,
                   homeruns = sum(hr))

    We first push the data set to the database to get an ore.frame. We then add the composite column and perform the subset, using the transparency layer. Since the results from database execution are unordered, we explicitly sort these results and view the first 6 rows.

        BB.DAT <- ore.push(baseball)
        BB.DAT$index <- with(BB.DAT, paste(year, team, sep="+"))
        BB.DAT2 <- subset(BB.DAT, year > 2000)
        X <- ore.groupApply(BB.DAT2, BB.DAT2$index, function(x) {
          data.frame(year=x$year[1], team=x$team[1], homeruns=sum(x$hr))
        }, FUN.VALUE=data.frame(year=1, team="A", homeruns=1), parallel=FALSE)
        res <- ore.sort(X, by=c("year","team"))

        R> head(res)
          year team homeruns
        1 2001  ANA        4
        2 2001  ARI      155
        3 2001  ATL       63
        4 2001  BAL       58
        5 2001  BOS       77
        6 2001  CHA       63

    Our next example is derived from the ggplot function documentation and illustrates the use of ddply together with the ggplot2 package. We first create a data.frame with demo data and use ddply to create some statistics for each group (gp). We then use ggplot to produce the graph. We can take this same code, push the data.frame df to the database, and invoke it on the database server. The graph is returned to the client window, as depicted below.
        # Example 6 with ggplot2
        library(ggplot2)
        df <- data.frame(gp = factor(rep(letters[1:3], each = 10)),
                         y = rnorm(30))
        # Compute sample mean and standard deviation in each group
        library(plyr)
        ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))

        # Set up a skeleton ggplot object and add layers:
        ggplot() +
          geom_point(data = df, aes(x = gp, y = y)) +
          geom_point(data = ds, aes(x = gp, y = mean),
                     colour = 'red', size = 3) +
          geom_errorbar(data = ds, aes(x = gp, y = mean,
                                       ymin = mean - sd, ymax = mean + sd),
                        colour = 'red', width = 0.4)

        DF <- ore.push(df)
        ore.tableApply(DF, function(df) {
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        })

    But let's take this one step further. Suppose we wanted to produce multiple graphs, partitioned on some index column. We replicate the data three times and add some noise to the y values, just to make the graphs a little different. We also create an index column to form our three partitions. Note that we've also specified that this should be executed in parallel, allowing Oracle Database to control and manage the server-side R engines. The result of ore.groupApply is an ore.list that contains the three graphs. Each graph can be viewed by printing the list element.

        df2 <- rbind(df, df, df)
        df2$y <- df2$y + rnorm(nrow(df2))
        df2$index <- rep(1:3, each = nrow(df))
        DF2 <- ore.push(df2)
        res <- ore.groupApply(DF2, DF2$index, function(df) {
          df <- df[,1:2]
          library(ggplot2)
          library(plyr)
          ds <- ddply(df, .(gp), summarise, mean = mean(y), sd = sd(y))
          ggplot() +
            geom_point(data = df, aes(x = gp, y = y)) +
            geom_point(data = ds, aes(x = gp, y = mean),
                       colour = 'red', size = 3) +
            geom_errorbar(data = ds, aes(x = gp, y = mean,
                                         ymin = mean - sd, ymax = mean + sd),
                          colour = 'red', width = 0.4)
        }, parallel=TRUE)
        res[[1]]
        res[[2]]
        res[[3]]

    To recap, we've illustrated how various uses of ddply from the plyr package can be realized with ore.groupApply, which affords the user explicit control over the contents of the data.frame result in a straightforward manner. We've also highlighted how ddply can be used within an ore.groupApply call.

    Read the article

  • Why does my name resolution hit the DNS even with a hosts file entry?

    - by Volomike
    I'm running Ubuntu 10.04.2 LTS Desktop. Being a web developer, naturally I created an entry for "me.com" in my /etc/hosts file. Unfortunately, my name resolution goes out to the DNS before checking my local hosts entry first, and I can't figure out why. The end result is that if my /etc/resolv.conf contains "nameserver 127.0.0.1" first, then I get a response back in my web browser from me.com (local) in less than a second. But if I don't have that entry, then my response sometimes takes as much as 5 seconds if my ISP is a little slow. The problem was so troublesome that I actually had to file a question here (and someone resolved it) on how to automatically insert that entry into /etc/resolv.conf. But one of the users (@shellaholic) here highly recommended (and commented back and forth with me about it) that I should file this question. Do you know why my workstation's name resolution hits the DNS server before my /etc/hosts file entry? For now, I'm using the resolv.conf trick (see link above).
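    (One thing worth checking in this situation: on Ubuntu, the lookup order is decided by the hosts line in /etc/nsswitch.conf, not by resolv.conf, and /etc/hosts only wins if 'files' precedes 'dns' there. The stock 10.04 line looks roughly like this:)

        # /etc/nsswitch.conf - 'files' must come before 'dns' for
        # /etc/hosts entries to be consulted first:
        hosts: files mdns4_minimal [NOTFOUND=return] dns mdns4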

    Read the article

  • How to create water like in new super mario bros?

    - by user1103457
    I assume the water in New Super Mario Bros works the same as in the first part of this tutorial: http://gamedev.tutsplus.com/tutorials/implementation/make-a-splash-with-2d-water-effects/ But in New Super Mario Bros the water also has constant waves on the surface, and the splashes look very different. Another difference is that in the tutorial, creating a splash first creates a deep "hole" in the water at the origin of the splash; in New Super Mario Bros this hole is absent or much smaller. When I refer to the splashes in New Super Mario Bros, I mean the splashes the player creates when jumping into and out of the water. For reference you could use this video: http://www.ign.com/videos/2012/11/17/new-super-mario-bros-u-3-star-coin-walkthrough-sparkling-waters-1-waterspout-beach Just after 00:50, when the camera isn't moving, you can get a good look at the water and the constant waves; there are also some good examples of the splashes during that time. How do they create the constant waves and the splashes? I am programming in XNA. (I have tried this myself but couldn't really get it all to work well together.) (And as bonus questions: how do they create the light spots just under the surface of the waves, and how do they texture the deeper parts of the water? This is the first time I try to create water like this.)
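    (One common way to get constant surface waves - a sketch, not necessarily what Nintendo does - is to sum a couple of sine waves of different frequency and speed and add the result to the spring-based heights from the tutorial. All constants below are made up:)

        // C#/XNA sketch: ambient waves layered on top of the tutorial's
        // spring-based splash simulation. Requires 'using System;'.
        float AmbientWaveHeight(float x, float time)
        {
            // two overlapping sine waves of different frequency and speed
            // look less regular, and more natural, than a single wave
            return 2.0f * (float)Math.Sin(x * 0.05f + time * 1.5f)
                 + 1.0f * (float)Math.Sin(x * 0.11f - time * 0.8f);
        }

        // per surface column i at horizontal position x:
        // surfaceY[i] = restLevel + springHeight[i] + AmbientWaveHeight(x, t);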

    Read the article

  • Platform for Efficiency: Boeing Defense, Space & Security integrates supply chain processes using Oracle Business Process Management solutions. by Fred Sandsmark

    - by JuergenKress
    Like most companies, aerospace giant Boeing has its jargon - words and phrases that uniquely define its products and processes. Take the word platform. It is used at Boeing to mean a family of aircraft - the F/A-18 fighter, for example, or the 777 jetliner. But since August 2009, employees in the Global Services & Support (GS&S) division of Boeing Defense, Space & Security have been talking about a different sort of platform: a supply chain technology platform, based on Oracle Business Process Management (Oracle BPM) solutions and Oracle SOA Suite. That platform, built with the assistance of Oracle Diamond Partner Capgemini, is serving as a jumping-off point for Boeing's GS&S staff to deploy radically improved business processes supported by Oracle Fusion Applications to build a high-visibility, end-to-end supply chain. This business process-driven technology platform has ambitious goals: to help GS&S respond more quickly and accurately to its customers' needs, to make business processes at all GS&S sites more consistent and less expensive, and to create a foundation for further improvement and efficiency. Read the full article here. Want to publish your BPM11g success story - request a partner/customer reference? BPM Center of Excellence & First 100 Days of BPM documents in our SOA Community Workspace: MWD_bpm_si_Centre_of_Excellence_0811.pdf, First 100 Days of BPM whitepaper.pdf. Please visit our SOA Community Workspace (SOA Community membership required). SOA & BPM Partner Community: For regular information on Oracle SOA Suite become a member of the SOA & BPM Partner Community; for registration please visit www.oracle.com/goto/emea/soa (OPN account required). If you need support with your account please contact the Oracle Partner Business Center. Blog Twitter LinkedIn Mix Forum Technorati Tags: BPM,BPM reference,BPM Capgemini,BPM first 100 days,BPM center of Excellence,SOA Community,Oracle SOA,Oracle BPM,Community,OPN,Jürgen Kress

    Read the article

  • VBO and shaders confusion, what's their connection?

    - by Jeffrey
    Considering OpenGL 2.1 VBOs and GLSL 1.20 shaders: 1. When creating an entity like "Zombie", is it good to initialize just the VBO buffer with the data once and do N glDrawArrays() calls, one per each of N zombies? Is there a more efficient way? (With a single call we cannot pass different uniforms to the shader to calculate an offset; see point 3.) 2. When dealing with a logical object (player, tree, cube, etc.), should I always use the same shader, or should I customize (or be able to customize) the shaders per object? Considering an entity class, should I create and define the shader at object initialization? 3. When having a movable object such as a human, is there any more powerful way to deal with its coordinates than to initialize its VBO at 0,0 and define a uniform offset to pass to the shader to calculate its real position? 4. Could you give an example of Data Oriented Design applied to a generic zombie class? Is the following good?

        class ZombieList {
            GLuint vbo;                      // generic zombie vertex model
            std::vector<color> colors;       // object default colors
            std::vector<texture> textures;   // object textures
            std::vector<vector3D> positions; // object positions
        public:
            unsigned int create();           // return object id
            void move(unsigned int objId, vector3D offset);
            void rotate(unsigned int objId, float angle);
            void setColor(unsigned int objId, color c);
            void setPosition(unsigned int objId, vector3D pos);
            void setTexture(unsigned int objId, texture t);
            // ...
            void update(Player* p);          // move towards player, attack if near
            void draw();                     // draw every zombie
        };

    Example:

        Player p;
        ZombieList zl;
        unsigned int first = zl.create();
        zl.setPosition(first, vector3D(50, 50));
        zl.setTexture(first, texture("zombie1.png"));
        // ...
        while (running) { // main loop
            // ...
            zl.update(&p);
            zl.draw(); // draw every zombie
        }
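    (On point 3, one common OpenGL 2.1 pattern is exactly that: a single shared VBO plus a per-instance offset uniform updated between draw calls. A minimal sketch - the names are my own, and it assumes an already-linked GLSL 1.20 program whose vertex shader declares 'uniform vec3 offset;' and adds it to each vertex:)

        // Sketch: draw every zombie from one shared VBO, updating the
        // offset uniform before each glDrawArrays call.
        glUseProgram(program);
        GLint offsetLoc = glGetUniformLocation(program, "offset");
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glEnableClientState(GL_VERTEX_ARRAY);
        glVertexPointer(3, GL_FLOAT, 0, 0);
        for (std::size_t i = 0; i < positions.size(); ++i) {
            glUniform3f(offsetLoc, positions[i].x, positions[i].y, positions[i].z);
            glDrawArrays(GL_TRIANGLES, 0, vertexCount);
        }
        glDisableClientState(GL_VERTEX_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);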

    Read the article

  • Approaches to timed puzzle elements

    - by ndg
    I'm working on a side-scrolling game that has a number of timed puzzle elements. As a simple example: I have a number of moving platforms that transition in a pattern. Ideally I'd like to ensure that as the player first approaches them, they are in an ideal state - whereby the player can witness the full transition, and more experienced players (i.e. speedrunners) can complete the puzzle immediately without having to wait for the current transition to complete. The issue, in a nutshell, is that because these platforms begin transitioning at the start of the level, it's impossible to correctly calculate when the player is likely to stumble upon them. I've done a fair bit of Googling but haven't managed to turn up any decent resources on solving a problem like this. The obvious solution is to only begin updating the objects when the player (or more likely: the camera) first encounters them, but this becomes difficult in more complicated situations. It seems like potentially the easiest way of handling this is to have an invisible trigger volume that tells any puzzle elements located inside of it that the player has 'arrived' upon first colliding with the player. But this would mean I'd have to logically group puzzle elements, which could become messy in a hurry. Take, for instance, a puzzle that appears to the right of the screen. It may take the player a number of seconds to reach it. It would look strange if the elements involved were to remain stationary, but by the time the player arrives, it's likely things will be 'out of sync'. I'm posting here in the hope that others know of, or have implemented, a decent solution to this problem.
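    (One possible shape for the trigger-volume idea described above - a sketch with invented names, in C#/XNA style: the group idles until the camera first overlaps its trigger, then starts its cycle from the 'ideal' phase, so every player sees the full transition from the beginning. TimedPlatform is a hypothetical class exposing ResetPhase() and Update():)

        // Sketch: requires 'using System.Collections.Generic;' and
        // 'using Microsoft.Xna.Framework;' for List<> and Rectangle.
        class PuzzleTriggerGroup
        {
            private readonly List<TimedPlatform> platforms;
            private bool started;

            public PuzzleTriggerGroup(List<TimedPlatform> platforms)
            {
                this.platforms = platforms;
            }

            public void Update(Rectangle cameraBounds, Rectangle triggerBounds)
            {
                if (!started && cameraBounds.Intersects(triggerBounds))
                {
                    started = true;
                    foreach (var p in platforms)
                        p.ResetPhase(); // begin the cycle in its ideal state
                }
                if (started)
                    foreach (var p in platforms)
                        p.Update();
            }
        }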

    Read the article

  • A tale from a Stalker

    - by Peter Larsson
    Today I thought I should write something about a stalker I've got. Don't get me wrong, I have way more fans than stalkers, but this stalker is particularly persistent towards me. It all started when I wrote about Relational Division with Sets late last year (http://weblogs.sqlteam.com/peterl/archive/2010/07/02/Proper-Relational-Division-With-Sets.aspx) and, no matter what he tried, he didn't get a better performing query than me. But this didn't click for me until later in the conversation. He must have saved himself for 9 months before posting to me again. Well... Some days ago I got an email from someone I thought I didn't know. Here is his first email:

        Hi, I want a proper solution for achievement the result. The solution must be standard query, means no using as any native code like TOP clause, also the query should run in SQL Server 2000 (no CTE use). We have a table with consecutive keys (nbr) that is not exact sequence. We need bringing all values related with nearest key in the current key row. See the DDL:

            CREATE TABLE Nums(nbr INTEGER NOT NULL PRIMARY KEY, val INTEGER NOT NULL);
            INSERT INTO Nums(nbr, val) VALUES (1, 0),(5, 7),(9, 4);

        See the Result:

            pre_nbr     pre_val     nbr         val         nxt_nbr     nxt_val
            ----------- ----------- ----------- ----------- ----------- -----------
            NULL        NULL        1           0           5           7
            1           0           5           7           9           4
            5           7           9           4           NULL        NULL

        The goal is suggesting most elegant solution. I would like see your best solution first, after that I will send my best (if not same with yours)

    Notice there is no name, no please, nothing polite asking for my help. So, off the top of my head I sent him two solutions, following the rule "works on SQL Server 2000 and only standard non-native code".
        -- Peso 1
        SELECT  pre_nbr,
                (
                    SELECT  x.val
                    FROM    dbo.Nums AS x
                    WHERE   x.nbr = d.pre_nbr
                ) AS pre_val,
                d.nbr,
                d.val,
                d.nxt_nbr,
                (
                    SELECT  x.val
                    FROM    dbo.Nums AS x
                    WHERE   x.nbr = d.nxt_nbr
                ) AS nxt_val
        FROM    (
                    SELECT  (
                                SELECT  MAX(x.nbr) AS nbr
                                FROM    dbo.Nums AS x
                                WHERE   x.nbr < n.nbr
                            ) AS pre_nbr,
                            n.nbr,
                            n.val,
                            (
                                SELECT  MIN(x.nbr) AS nbr
                                FROM    dbo.Nums AS x
                                WHERE   x.nbr > n.nbr
                            ) AS nxt_nbr
                    FROM    dbo.Nums AS n
                ) AS d

        -- Peso 2
        CREATE TABLE #Temp
        (
            ID INT IDENTITY(1, 1) PRIMARY KEY,
            nbr INT,
            val INT
        )

        INSERT  #Temp (nbr, val)
        SELECT  nbr, val
        FROM    dbo.Nums
        ORDER BY nbr

        SELECT      pre.nbr AS pre_nbr,
                    pre.val AS pre_val,
                    t.nbr,
                    t.val,
                    nxt.nbr AS nxt_nbr,
                    nxt.val AS nxt_val
        FROM        #Temp AS pre
        RIGHT JOIN  #Temp AS t ON t.ID = pre.ID + 1
        LEFT JOIN   #Temp AS nxt ON nxt.ID = t.ID + 1

        DROP TABLE #Temp

    Notice there are no indexes on the #Temp table yet. And here is where the conversation derailed. First I got this response back:

        Now my solutions:

            --My 1st Slt
            SELECT T2.*, T1.*, T3.*
              FROM Nums AS T1
                   LEFT JOIN Nums AS T2
                     ON T2.nbr = (SELECT MAX(nbr)
                                    FROM Nums
                                   WHERE nbr < T1.nbr)
                   LEFT JOIN Nums AS T3
                     ON T3.nbr = (SELECT MIN(nbr)
                                    FROM Nums
                                   WHERE nbr > T1.nbr);

            --My 2nd Slt
            SELECT MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END) AS pre_nbr,
                   (SELECT val FROM Nums WHERE nbr = MAX(CASE WHEN N1.nbr > N2.nbr THEN N2.nbr ELSE NULL END)) AS pre_val,
                   N1.nbr AS cur_nbr, N1.val AS cur_val,
                   MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END) AS nxt_nbr,
                   (SELECT val FROM Nums WHERE nbr = MIN(CASE WHEN N1.nbr < N2.nbr THEN N2.nbr ELSE NULL END)) AS nxt_val
              FROM Nums AS N1,
                   Nums AS N2
             GROUP BY N1.nbr, N1.val;

            /*
            My 1st Slt
            Table 'Nums'. Scan count 7, logical reads 14
            My 2nd Slt
            Table 'Nums'. Scan count 4, logical reads 23
            Peso 1
            Table 'Nums'. Scan count 9, logical reads 28
            Peso 2
            Table '#Temp'. Scan count 0, logical reads 7
            Table 'Nums'. Scan count 1, logical reads 2
            Table '#Temp'. Scan count 3, logical reads 16
            */

    To this, I emailed him back asking for a scalability test: "What if you try with a Nums table with 100,000 rows?" His response to that started to get nasty:

        I have to say Peso 2 is not acceptable. As I said before the solution must be standard, ORDER BY is not part of standard SELECT. Try this without ORDER BY:

            TRUNCATE TABLE Nums
            INSERT INTO Nums (nbr, val) VALUES (1, 0),(9, 4),(5, 7)

    So now we have new rules. No ORDER BY, because it's not standard SQL! Of course I asked him: "Why do you have that idea? ORDER BY is not standard?" To this, his replies went stranger and stranger:

        Standard Select = Set-based (no any cursor) It's free to know, just refer to Advanced SQL Programming by Celko or mail to him if you accept comments from him.

    What the stalker probably doesn't know is that Mr Celko and I are occasionally involved in conversation and thus exchange emails. I don't know if this reference to Mr Celko was meant to intimidate me. So I answered him, still polite: "What do you mean? The SELECT itself has a 'cursor under the hood'." Now the stalker gets rude:

        But however I mean the solution must no containing any order by, top... No problem, I do not like Peso 2, it's very non-intelligent and elementary.

    Yes, Peso 2 is elementary, but most performing queries are... And now is the time when I started to feel the stalker really wanted to achieve something else, so I wrote to him: "So what is your goal? Have a query that performs well, or a query that is super-portable? My Peso 2 outperforms any of your code with a factor of 100 when using more than 100,000 rows."
    While I awaited his answer, I posted him this query:

        Ok, here is another one

            -- Peso 3
            SELECT  MAX(CASE WHEN d = 1 THEN nbr ELSE NULL END) AS pre_nbr,
                    MAX(CASE WHEN d = 1 THEN val ELSE NULL END) AS pre_val,
                    MAX(CASE WHEN d = 0 THEN nbr ELSE NULL END) AS nbr,
                    MAX(CASE WHEN d = 0 THEN val ELSE NULL END) AS val,
                    MAX(CASE WHEN d = -1 THEN nbr ELSE NULL END) AS nxt_nbr,
                    MAX(CASE WHEN d = -1 THEN val ELSE NULL END) AS nxt_val
            FROM    (
                        SELECT  nbr,
                                val,
                                ROW_NUMBER() OVER (ORDER BY nbr) AS SeqID
                        FROM    dbo.Nums
                    ) AS s
            CROSS JOIN
                    (
                        VALUES  (-1),
                                (0),
                                (1)
                    ) AS x(d)
            GROUP BY SeqID + x.d
            HAVING  COUNT(*) > 1

        And here is the stats:

            Table 'Nums'. Scan count 1, logical reads 2, physical reads 0, read-ahead reads 0,
            lob logical reads 0, lob physical reads 0, lob read-ahead reads 0.

        It beats the hell out of your queries....

    Now I finally got a response from my stalker, and now it also clicked who he was. This is his response:

        Why you post my original method with a bit change under you name? I do not like it. See: http://www.sqlservercentral.com/Forums/Topic468501-362-14.aspx

            ;WITH C AS
            (
                SELECT seq_nbr, k,
                       DENSE_RANK() OVER(ORDER BY seq_nbr ASC) + k AS grp_fct
                  FROM [Sample]
                       CROSS JOIN
                       (VALUES (-1), (0), (1)
                       ) AS D(k)
            )
            SELECT MIN(seq_nbr) AS pre_value,
                   MAX(CASE WHEN k = 0 THEN seq_nbr END) AS current_value,
                   MAX(seq_nbr) AS next_value
              FROM C
             GROUP BY grp_fct
            HAVING min(seq_nbr) < max(seq_nbr);

        These posts: Posted Tuesday, April 12, 2011 10:04 AM / Posted Tuesday, April 12, 2011 1:22 PM. Why post a solution where will not work in SQL Server 2000?

    Wait a minute! His own solution uses both a CTE and a ranking function, so his query will not work on SQL Server 2000 either! Bummer... The reference to "Me not like" is my exact wording in a previous topic on SQLTeam.com, and when I remembered the phrasing, I also knew who he was. See this topic, where he writes a query and posts it under my name, as if I wrote it: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 So I answered him this (less polite):

        Like I keep track of all topics in the whole world... :) So you think you are the only one coming up with this idea? Besides, "M S solution" doesn't work. This is the result I get:

            pre_value   current_value   next_value
            1           1               5
            1           5               9
            5           9               9

        And I did nothing like you did here, where you posted a solution which you "thought" I should write: http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=159262 So why are you yourself using a ranking function when this was not allowed per your original email, and no CTE? You use a CTE in your link above, which does not work in SQL Server 2000. All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me?

    After a few hours I got this email back. I don't fully understand it, but it's probably a language barrier.
        >> Like I keep track of all topics in the whole world... :) So you think you are the only one coming up with this idea? <<

        You right, but do not think you are the first creator of this.

        >> Besides, "M S Solution" doesn't work. This is the result I get <<

        Why you get so unimportant mistake? See this post to correct it: Posted 4/12/2011 8:22:23 PM

        >> So why are you yourself using ranking function when this was not allowed per your original email, and no cte? You use CTE in your link above, which do not work in SQL Server 2000. <<

        Again, why you get some unimportant incompatibility? You offer that solution for current goals not me

        >> All this makes no sense to me, other than you are trying your best to once in a lifetime create a better performing query than me? <<

        No, I only wanted to know who you will solve it. Now I know you do not have a special solution. No problem.

    No problem for me either. So I just answered him:

        I am not the first, and you are not the first to come up with this idea. So what is your problem? I am pretty sure other people have come up with the same idea before us. I used this technique all the way back in 2007, see http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=93911

    Let's see if he returns... He did!

        >> So what is your problem? <<

        Nothing. Thanks for all replies; maybe we have some competitions in future, maybe. Also I like you but you do not attend it. Your behavior with me is not friendly. Not any meeting...

    Regards //Peso
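    (A closing aside on the underlying puzzle: on SQL Server 2012 and later - so outside the thread's "SQL Server 2000 only" rule - the requested previous/next-row result is a one-liner with the LAG and LEAD window functions:)

        -- Sketch: the same result with SQL Server 2012+ window functions
        -- (not valid under the thread's SQL 2000 constraint).
        SELECT  LAG(nbr)  OVER (ORDER BY nbr) AS pre_nbr,
                LAG(val)  OVER (ORDER BY nbr) AS pre_val,
                nbr,
                val,
                LEAD(nbr) OVER (ORDER BY nbr) AS nxt_nbr,
                LEAD(val) OVER (ORDER BY nbr) AS nxt_val
        FROM    dbo.Nums;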

    Read the article

  • Unity scaling instantiated GameObject at Start() doesn't "keep"

    - by Shivan Dragon
    I have a very simple scenario: a box-like prefab which is imported from Blender automatically (I have the .blend file in the Assets folder), and a script that has two public GameObject fields. In one I place the above prefab, and in the other I place a terrain object (which I've created in Unity's graphical view):

        public Collider terrain;
        public GameObject aStarCellHighlightPrefab;

    This script is attached to the camera. The idea is to have the Blender prefab instantiated, have the terrain set as its parent, and then scale said prefab instance up. I first did it like this, in the Start() method:

        void Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }

    At first I thought it didn't work. However, I later noticed that it did in fact work, in the sense that the scale was applied right at the start - but right after that, the prefab instance came back to its initial scale. Putting the scale code in the Update() method fixes it, in the sense that now it stays scaled all the time:

        void Update () {
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
            //...
        }

    However, I've noticed that when I run this code, the object is first displayed without the scale being applied, and it takes about 5-10 seconds for the scale to happen. During this time everything works fine (like input and logging, etc). The scene is very simple; it's not like it has a lot of stuff to load or anything (there's a ray cast from the camera onto the terrain, but that seems to happen without such delays). My (2-part) question is: 1. Why doesn't it keep the scale transform when I apply it in the Start() method? Why do I have to keep scaling it in the Update() method? 2. Why does it take so long for the scale to "apply/show up"?
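    (One workaround worth trying - a sketch only, since the root cause isn't confirmed here: apply the scale one frame after instantiation by making Start() a coroutine, instead of re-applying it every Update(). Unity accepts an IEnumerator Start(); requires 'using System.Collections;'.)

        // Sketch: defer the scaling by one frame so any import/parenting
        // adjustments have settled before localScale is set.
        IEnumerator Start () {
            cursorPositionOnTerrain = new RaycastHit();
            aStarCellHighlight = (GameObject)Instantiate(aStarCellHighlightPrefab,
                new Vector3(300, 300, 300), terrain.transform.rotation);
            aStarCellHighlight.name = "cellHighlight";
            aStarCellHighlight.transform.parent = terrain.transform;
            yield return null; // wait one frame
            aStarCellHighlight.transform.localScale = new Vector3(100, 100, 100);
        }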

    Read the article
