Search Results



  • Using model tools as map editor

    - by cooky451
    I want to make a game which would require a 3D map editor. Of course, I would like to avoid creating such an editor. My idea is to use modeling tools (3DS Max, Maya, Blender) to create the map, and to give game-specific objects specified names. This way I'd just need to write a COLLADA-to-native-map-format converter. But I'm not sure if this is possible the way I imagine it, so I'd like to hear your thoughts on the matter. Are modeling tools suitable for creating big open-world maps? Can this "naming convention" idea for game-specific objects work? Are the modeling tools able to export a scene in chunks, or in a way that occlusion culling and collision detection can be properly done? If not: is there a way to build a suitable data structure from the exported data?
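
    As a rough illustration of the naming-convention idea: COLLADA (.dae) is plain XML, so a converter can scan the exported scene for nodes whose names carry an agreed game-object prefix. Below is a minimal Python sketch using the standard library's ElementTree; the "go_" prefix and the file name are made-up assumptions, not anything from the question.

        # Sketch: pull game-specific objects out of an exported COLLADA scene
        # by naming convention. The "go_" prefix and "level01.dae" are hypothetical.
        import xml.etree.ElementTree as ET

        COLLADA_NS = "{http://www.collada.org/2005/11/COLLADASchema}"  # COLLADA 1.4 namespace
        GAME_OBJECT_PREFIX = "go_"  # e.g. a node named "go_spawnpoint_3" in the modeling tool

        def find_game_objects(dae_path):
            tree = ET.parse(dae_path)
            objects = []
            # <node> elements make up the scene graph; search them recursively.
            for node in tree.getroot().iter(COLLADA_NS + "node"):
                name = node.get("name", "")
                if name.startswith(GAME_OBJECT_PREFIX):
                    objects.append(name[len(GAME_OBJECT_PREFIX):])
            return objects

        if __name__ == "__main__":
            print(find_game_objects("level01.dae"))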

    Read the article

  • CodePlex Daily Summary for Tuesday, July 29, 2014

    CodePlex Daily Summary for Tuesday, July 29, 2014

    Popular Releases

    - Artezio SharePoint 2013 Workflow Activities - SharePoint Designer 2013 custom workflow actions 1.0: custom workflow actions that work with permissions: Add Role Assignment, Add Role Assignments, Delete Role Assignments, Get Role Definition Id, Get Role Definition Id By Role Type Id, Reset Role Inheritance. (Artezio SharePoint Consulting & Development)
    - VG-Ripper & PG-Ripper - PG-Ripper 1.4.32: NEW: added support for 'ImgMega.com', 'ImgCandy.net', 'ImgPit.com' and 'Img.yt' links; FIXED: 'Radikal.ru', 'ImageTeam.org', 'ImgSee.com' and 'Img.yt' links
    - Asp.Net MVC-4, Entity Framework and JQGrid Demo with Todo List WebApplication: TodoList is a simple web application to create, store and modify Todo tasks maintained by users, with the fields Task Name, Task Description, Severity, Target Date and Task Status. It is built on the MVC-4 architecture, code-first Entity Framework (ORM) and JqGrid for displaying the data.
    - Waterfox - Waterfox 31.0 Portable: new in Waterfox 31.0: support for Unicode 7.0, experimental support for WebCL. New in Firefox 31.0: a search field on the new tab page, support of the Prefer:Safe http header for parental control, mozilla::pkix as the default certificate verifier, blocking of malware in downloaded files, audio/video .ogg and .pdf files handled by Firefox if no application is specified; removal of the CAPS infrastructure for specifying site-sp...
    - SuperSocket, an extensible socket server framework - SuperSocket 1.6.3: fixed an exception when collecting a server's status after it has been stopped; fixed a bug that could cause an exception when sending data after the connection had already dropped; fixed the missing log4net issue for a QuickStart project; fixed a warning in a QuickStart project
    - Ynote Classic - Ynote Classic 2.8.5 Beta: multiple carets and multiple selections, improved startup time, improved syntax highlighting, search improvements, shell command, improved stability
    - Database Schema Reader - v1.3.4.0: contents: DatabaseSchemaReader.dll (class library, .net 3.5), DatabaseSchemaViewer.exe (UI to read and view database schemas, with options to generate SQL and code and to compare another schema), CopyToSQLite/CopyToSQLite.exe (UI to copy any database schema and data to SQLite or, if installed, SQL Server CE 4.0), net4/DatabaseSchemaReader.dll (.Net 4.0 class library). This release: schema reading (MySQL) reads UNSIGNED integers (available in the schema model's DatabaseColumn.DbDataType prope...
    - TEBookConverter 1.2: fixed: could not start conversion in some cases; fixed: progress shown during conversion was truncated; fixed: stopping conversion didn't reset the program title
    - Q2Cue \\ - Q2Cue v. 0.85: initial release
    - CS-Script Source - Release v3.8.4: CSScript.Evaluator is migrated to Mono v3.3.0; added aggregating //css_ignore_ns from the imported scripts. cs-script.7z - CS-Script Suite (binaries, documentation, samples); cs-script.ExtensionPack.7z - CS-Script Extension Pack (additional binaries and samples); cs-scriptDocs.7z - CS-Script Documentation
    - DotSpatial - DotSpatial 1.7: DotSpatial.Full includes all DotSpatial libraries, extensions and the DemoMap application; DotSpatial.Core includes only the DotSpatial core libraries (the entire list of changes is in the issue tracker). Main changes: improved common stability, optimized memory and speed when loading and rendering shapefiles, fixed some memory leaks in rasters and shape layers, simplified plugin infrastructure with predefined implementations for all required components (IStatusControl, IDockManager, IHead...
    - NRepository 2.2: NuGet packages are available from NuGet.org: NRepository.Core, NRepository.EntityFramework and NRepository.TestKit. Added 3 versions of the Contoso University sample code: ContosoUniversityOriginal (the original Microsoft sample project for Entity Framework), ContosoUniversityUsingNRepository - this version simply repl...
    - InstagramSaver 1.5: added an option to wait between downloads to prevent a ban from the server; improved file downloading and file checking performance
    - PVReportDeployer 1.0
    - AD4 Application Designer for flow based .NET applications - AD4.AppDesigner.23.20 (advanced rendering features): option to render the names of flow pins within the flow chart (AlarmClockSample.10.ad4 used to test this extension): MainWindow extended by adding FlowChartShowFlowPinCaptionsComboBox, ConfigureUIControls customized, ManageFlowChartShowFlowPinCaptionsOptionFlow customized, parameter FlowChartFlowPins replaced by FlowChartFlowPinCaptions in AD4.AppDesigner.cfg, RenderFlowPinsCaptions customized to use the parameter. ToDo: some tutorials are unfinished b...
    - CRM 2013 One Click Navigation - CRM2013OneClickNavigation_1_0_0_3_managed.zip: fixes on security privileges
    - AutomatedLab 2.1.0.0: 2.1.0 adds support for external virtual switches, a new CaRoot role for installing Root Certificate Authorities, and parallel installation of root domain controllers. 2.0.0 added support for Windows Server 2008 R2 and 7, removed the limit of one client and one server operating system, rewrote the code that handles VHDs, fixed many bugs, and changed the parameter definitions of some cmdlets in AutomatedLabDefinition (the sample scripts were adapted). Now the Forest...
    - BizTalk Deployment Utilities 1.0.0.0: draft version
    - GroupMe Software Development Kit - GroupMe .NET SDK v1.0.1: small update concerning the /users/me endpoint to get the currently authenticated user's info. Documentation is available here: http://dotnetcorner.ch/Projects/GroupMe-NET-SDK/Documentation
    - OneNote Tagging Kit - OneNoteTaggingKit Add-In 2.4.5316.33833: the Find Pages dialog preserves search string and tag filter selection on search scope changes; dialog windows are restored when an action button is pressed while the window is minimized; code cleanup, simplification and documentation; some reusable UI controls; more pleasant tag styling; improved tag tooltips on the Find Pages dialog, with dedicated tooltips for the page count and selection indicator; closing open dial...

    New Projects

    - Cognitum ASP.NET Providers for Cassandra: an ASP.NET solution that uses the Cassandra database as the data source for custom Membership, Role, Profile and Session-State providers.
    - DG Mobile: a simple application for entering, viewing and reporting activities related to DG.
    - Dynamics AX Development tools: several AX development-related tools combined into one configurable package: TFS Workspace, DEV_Tools, and X++ Editor components.
    - E Drawing Library for WinRT: lets you create graphics-rich Windows Store apps easily, giving you all the power of Direct2D with simple classes and methods.
    - EFramework.DataAccess
    - freetalkserver: a PHP server where users can register.
    - gicon: a project to help you generate an avatar/icon from a hash (MD5, SHA1) or a random hex string (such as a GUID).
    - hOOt - Full text search: a full-text search engine built from scratch using bitmap indexes.
    - MelodyUI - Library for HTML, CSS & JS Development: a set of classes and helpers for creating web applications; lightweight, with no dependencies.
    - nRF24L01Plus .NETMF driver: nRF24L01, .NET MF.
    - Nyan: Beginning...
    - Prince Game Xna: a Prince of Persia .NET game in XNA Game Studio.
    - Smartsby: combining several small-business web apps' information into a single dashboard.
    - Super snippets: super snippets for Visual Studio.
    - TCP/IP Adapter BizTalk 2013: the BizTalk 2013 TCP/IP community adapter; an updated version of the TCP/IP adapter for BizTalk, for use with BizTalk 2013.
    - testMercurial
    - Unit OF Work With Sql Helper: intended to provide a simple data access object for SQL Server with the Repository and Unit of Work patterns.
    - VisualRegEx: prototype; experimenting with regex parsing, graph layout, etc.
    - WDK Hardware Development Boards Add-On: scripts for working with Windows-compatible hardware development boards like the Sharks Cove.

    Read the article

  • MVC Pattern, ViewModels, Location of conversion.

    - by Pino
    I've been working with ASP.Net MVC for around a year now and have created my applications in the following way:

    - X.Web - MVC application; contains controllers and views
    - X.Lib - contains data access, repositories and services

    This allows us to drop the .Lib into any application that requires it. At the moment we are using Entity Framework, and the conversion from EntityO to a more specific model is done in the controller. This set-up means that if a service method returns an EntityO, the Controller will do a conversion before the data is passed to a view. I'm interested to know if I should move the conversion to the Service so that the app doesn't have Entity Objects being passed around.

    Read the article

  • Recover partition after GParted resizing interrupted by unexpected shutdown

    - by user84207
    As I was resizing my partitions using GParted, my laptop battery ran out and the process was interrupted. Now I am unable to mount the partition which I was trying to resize. I get the following error when I click to mount the partition in Nautilus:

        Error mounting: mount: wrong fs type, bad option, bad superblock on /dev/sda3,
        missing codepage or helper program, or other error
        In some cases useful info is found in syslog - try dmesg | tail or so

    How can I go about recovering my data? Is there a safe way to attempt to force-mount the partition in question? Any help towards recovering my data is sincerely appreciated.

    Read the article

  • How do I rsync from within a Python script?

    - by Viswa
    I plan to move files from one system to another. For this, I am using the rsync command in a Linux terminal, and it works fine. But I need to invoke this command from Python. I am very new to Python, so I don't know how to run the rsync command from a script; please tell me the steps to do it. This is my rsync command:

        rsync -avrz /opt/data/filename root@ip:/opt/data/file

    I need to implement this command in a Python script.
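
    One straightforward way to run that exact command from a script is the standard library's subprocess module. A minimal sketch, using the paths and host from the question as-is:

        import subprocess

        # Run rsync exactly as it would be typed in the terminal.
        # Passing a list (not one big string) avoids shell-quoting problems.
        exit_code = subprocess.call([
            "rsync", "-avrz",
            "/opt/data/filename",
            "root@ip:/opt/data/file",
        ])
        if exit_code != 0:
            print("rsync failed with exit code", exit_code)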

    Read the article

  • Recap: Oracle at the Gartner Business Intelligence Summit

    - by kimberly.billings
    Getting to Vegas was no fun. As anyone who lives in the Bay Area knows, the SF airport shuts down one runway when it rains, causing major havoc. So rain, rain, rain on Sunday meant delay, delay, delay at the airport. Needless to say, my 6:30 pm flight didn't land in Vegas until 3:00 am! But the travel pains were worth it. There was a lot to be learned at the Gartner BI Summit this year, and the uptick in attendance was reflected in strong booth traffic and engaging conversations in the Oracle booth. Oracle customer Dawn Conant, Director, Business Intelligence at Beckman Coulter, generated a lot of interest in her presentation about migrating from Business Objects to Oracle Business Intelligence, Enterprise Edition with Oracle Database 11g. Dawn's story was a very relatable one, as many of the attendees had plans for similar projects. One of the most interesting Gartner-led sessions compared the BI/DW megavendors: IBM, Oracle, SAP and Microsoft. According to Gartner analyst Rita Sallam, these megavendors control about two-thirds of the BI market. Sallam attributes this in part to the fact that organizations are expanding their definitions of BI to also include analytics and performance management. In doing so, they require greater integration of BI applications with a broader set of applications and middleware. In a related session, a panel of Gartner analysts compared the Magic Quadrants for BI Platforms; CPM; Data Quality; Data Integration Tools; and Data Warehouses. Oracle is a leader in all of the Magic Quadrants in which it participates and has the most complete stack, including hardware and software, according to Donald Feinberg. Feinberg also commented that in situations with VLDW and solid mixed workloads, Oracle Exadata is making a big difference!

    Read the article

  • vb.net and mysql connectivity [closed]

    - by kalpana
    I have used ADODB with ODBC connectivity to connect VB.NET to MySQL, and I have fetched table values into a recordset. I want to fetch the values of a single column (for example, table name: login; column name: password; the values in the password column are "manage", "sales" and "general") into text boxes. I have written code but it's not working:

        Dim conn As New ADODB.Connection
        Dim res As New ADODB.Recordset
        conn.Open("test", "root", "root")
        res = conn.Execute("select password from login")
        textbox1.text = res(0).value
        textbox2.text = res(1).value
        textbox3.text = res(2).value

    I am getting data in textbox1, but the other data is not getting inserted into textbox2 and textbox3. I am getting this error: "Item cannot be found in the collection corresponding to the requested name or ordinal."

    Read the article

  • VSDB to SSDT Part 1 : Converting projects and trimming excess files

    - by Etienne Giust
    Visual Studio 2012 introduces a change regarding database projects: they now use the SSDT technology, which means old VS2010 database projects (VSDB projects) need to be converted. Happily, VS2012 does that for you, and it is quite painless, but in my case some unnecessary artifacts from the old project were left in place. Also, when reopening the solution, database projects appeared unconverted even if I had converted them in the previous session and saved the solution.

    Converting the project(s)

    When opening your Visual Studio 2010 solution with Visual Studio 2012, every standard project should be converted by default, but Visual Studio will ask you about your database projects: “Functional changes required: Visual Studio will automatically make functional changes to the following projects in order to open them. The project behavior will change as a result. You will be able to open these projects in this version and Visual Studio 2010 SP1.” If you accept, your project is converted. It should compile with no errors right away, except if you have dependencies on dbschema files, which are no longer supported. The output of an SSDT project is a dacpac file, which replaces the dbschema file you were accustomed to. References to dacpac files can be added to SSDT projects in the same fashion references to dbschema files could be added to VSDB projects.

    Cleaning up

    You will notice that your project file is now a sqlproj file, but the old dbproj is still here. In fact, at that point you can still reopen the solution in Visual Studio 2010 and everything should show up. If, like me, you plan on using VS2012 exclusively, you can get rid of the following files, which are still on your disk and in your source control:

    - the dbproj and dbproj.vspscc files
    - Properties/Database.sqlcmdvars
    - Properties/Database.sqldeployment
    - Properties/Database.sqlpermissions
    - Properties/Database.sqlsettings

    You might wonder where the information which used to be in the Properties files is now stored:

    - Permissions: a Permissions.sql was created at the root level of your project. Note that when you create a new database project and import a database using the Schema Compare capabilities from Visual Studio, imported table and stored procedure definition files will hold the permission information (along with constraints and indexes).
    - SQLVars: they are defined inside the publish.xml files.
    - Deployment: they are also in the publish.xml files.
    - Settings: I was unable to find where those are now. I suppose they are not defined anymore.

    But Visual Studio still says my database projects should be converted!

    I had this error upon closing and then reopening the solution: my database projects would appear unconverted even though I did all the necessary steps previously. Easy solution: remove those projects from the solution and add them again (the sqlproj files).

    More

    For those who run into problems when converting from VSDB to SSDT, I suggest reading the following post: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/top-vsdb-gt-ssdt-project-conversion-issues.aspx

    Also interesting is a side-by-side comparison of VSDB and SSDT project features: http://blogs.msdn.com/b/ssdt/archive/2011/11/21/sql-server-data-tools-ctp4-vs-vs2010-database-projects.aspx

    Read the article

  • Thinktecture.IdentityServer RC

    - by Your DisplayName here!
    I just uploaded the RC of IdentityServer to Codeplex. This release is feature complete, and if I don’t get any bug reports this is also pretty much the final V1.

    Changes from B1

    - The configuration data access is now based on EF 4.1 code first. This makes it much easier to use different data stores. For RTM I will also provide a SQL script for SQL Server so you can move the configuration to a separate machine (e.g. for load balancing scenarios).
    - I included the ASP.NET Universal Providers in the download. This adds official support for SQL Azure, SQL Server and SQL Compact for the membership, roles and profile features. Unfortunately the Universal Providers use a different schema than the original ASP.NET providers (that sucks btw!) – so I made them optional. If you want to use them, go to web.config and uncomment the new provider.
    - The relying party registration entries now have additional fields for extra data that you want to couple with the RP. One use case could be to give the UI a hint how the login experience should look per RP. This allows a different look and feel for different relying parties. I also included a small helper API that you can use to retrieve the RP record based on the incoming WS-Federation query string.
    - WS-Federation single sign out is now conforming to the spec.
    - Certificate based endpoint identities for SSL endpoints are optional now.
    - Added an initial configuration “wizard”. This sets up the signing certificate, issuer URI and site title on the first run.

    Installation

    This is still a “developer” release – that means it ships with source code that you have to build, etc. But from that point on it should be a little more straightforward than it used to be:

    - Make sure SSL is configured correctly for IIS
    - Map the WebSite directory to a vdir in IIS
    - Run the web site. This should bring up the initial configuration
    - Make sure the worker process account has access to the signing certificate private key
    - Make sure all your users are in the “IdentityServerUsers” role in your role store. Administrators need the “IdentityServerAdministrators” role

    That should be it. Proper documentation will hopefully be available soon (any volunteers?). Please provide feedback! thanks!

    Read the article

  • Linux dd command: partition-to-partition

    - by Ben Jackson
    I just used the dd command to copy the contents of one partition over to another partition on another drive, like this:

        dd if=/dev/sda2 of=/dev/sdb2 bs=4096 conv=noerror

    The sda2 partition was 66GB and sdb2 was 250GB. I read that by doing this the extra space on the drive I am copying to will be wasted - is this true? I wasn't worried about losing the extra space for the time being. However, I just ran:

        sudo kill -USR1 (PID)

    to view the current status of dd, and it has written over 66GB of data. Will it continue to write data until it gets to 250GB? If so, is there a way to stop the process without corrupting it, as waiting for it to write blank space seems like a waste of time?
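
    For what it's worth, the status trick in the question can be scripted. A small Python sketch, assuming dd's PID is known (the PID below is hypothetical): SIGUSR1 asks GNU dd to print its progress, and SIGINT stops it; data already copied stays on disk.

        import os
        import signal

        DD_PID = 12345  # hypothetical: substitute the real PID, e.g. from pgrep dd

        # Ask dd to report how many bytes it has copied so far (printed to dd's stderr).
        os.kill(DD_PID, signal.SIGUSR1)

        # Interrupting dd mid-copy does not corrupt what was already written;
        # it simply leaves the rest of the target partition untouched.
        os.kill(DD_PID, signal.SIGINT)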

    Read the article

  • Table Variables: an empirical approach.

    - by Phil Factor
    It isn’t entirely a pleasant experience to publish an article only to have it described on Twitter as ‘Horrible’, and to have it criticized on the MVP forum. When this happened to me in the aftermath of publishing my article on Temporary tables recently, I was taken aback, because these critics were experts whose views I respect. What was my crime? It was, I think, to suggest that, despite the obvious quirks, it was best to use Table Variables as a first choice, and to use local Temporary Tables if you hit problems due to these quirks, or if you were doing complex joins using a large number of rows. What are these quirks? Well, table variables have advantages if they are used sensibly, but this requires some awareness by the developer about the potential hazards and how to avoid them. You can be hit by a badly-performing join involving a table variable. Table Variables are a compromise, and this compromise doesn’t always work out well. Explicit indexes aren’t allowed on Table Variables, so one cannot use covering indexes or non-unique indexes. The query optimizer has to make assumptions about the data rather than using column distribution statistics when a table variable is involved in a join, because there aren’t any column-based distribution statistics on a table variable. It assumes a reasonably even distribution of data, and is likely to have little idea of the number of rows in the table variables that are involved in queries. However complex the heuristics that are used might be in determining the best way of executing a SQL query, and they most certainly are, the Query Optimizer is likely to fail occasionally with table variables, under certain circumstances, and produce a Query Execution Plan that is frightful. The experienced developer or DBA will be on the lookout for this sort of problem. In this blog, I’ll be expanding on some of the tests I used when writing my article to illustrate the quirks, and include a subsequent example supplied by Kevin Boles. A simplified example. We’ll start out by illustrating a simple example that shows some of these characteristics. We’ll create two tables filled with random numbers and then see how many matches we get between the two tables. We’ll forget indexes altogether for this example, and use heaps. We’ll try the same Join with two table variables, two table variables with OPTION (RECOMPILE) in the JOIN clause, and with two temporary tables. It is all a bit jerky because of the granularity of the timing that isn’t actually happening at the millisecond level (I used DATETIME). However, you’ll see that the table variable is outperforming the local temporary table up to 10,000 rows. Actually, even without a use of the OPTION (RECOMPILE) hint, it is doing well. What happens when your table size increases? The table variable is, from around 30,000 rows, locked into a very bad execution plan unless you use OPTION (RECOMPILE) to provide the Query Analyser with a decent estimation of the size of the table. However, if it has the OPTION (RECOMPILE), then it is smokin’. Well, up to 120,000 rows, at least. It is performing better than a Temporary table, and in a good linear fashion. What about mixed table joins, where you are joining a temporary table to a table variable? You’d probably expect that the query analyzer would throw up its hands and produce a bad execution plan as if it were a table variable. After all, it knows nothing about the statistics in one of the tables so how could it do any better? 
    Well, it behaves as if it were doing a recompile, and an explicit recompile adds no value at all (we just go up to 45,000 rows since we know the bigger picture now). Now, if you were new to this, you might be tempted to start drawing conclusions. Beware! We’re dealing with a very complex beast: the Query Optimizer. It can come up with surprises. What if we change the query very slightly to insert the results into a Table Variable? We change nothing else and just measure the execution time of the statement as before. Suddenly, the table variable isn’t looking so much better, even taking into account the time involved in doing the table insert. OK, if you haven’t used OPTION (RECOMPILE) then you’re toast. Otherwise, there isn’t much in it between the Table variable and the temporary table. The table variable is faster up to 8,000 rows and then there is not much in it up to 100,000 rows. Past the 8,000 row mark, we’ve lost the advantage of the table variable’s speed. Any general rule you may be formulating has just gone for a walk. What we can conclude from this experiment is that if you join two table variables, and can’t use constraints, you’re going to need that OPTION (RECOMPILE) hint.

    Count Dracula and the Horror Join

    These tables of integers provide a rather unreal example, so let’s try a rather different example, and get stuck into some implicit indexing, by using constraints. What unusual words are contained in the book ‘Dracula’ by Bram Stoker? Here we get a table of all the common words in the English language (60,387 of them) and put them in a table. We put them in a Table Variable with the word as a primary key, a Table Variable heap and a Table Variable with a primary key. We then take all the distinct words used in the book ‘Dracula’ (7,558 of them). We then create a table variable and insert into it all those uncommon words that are in ‘Dracula’, i.e. all the words in Dracula that aren’t matched in the list of common words. To do this we use a left outer join, where the right-hand value is null. The results show a huge variation, between the sublime and the gorblimey:

    - If both tables contain a primary key on the columns we join on, and both are Table Variables, it took 33 ms.
    - If one table contains a primary key, the other is a heap, and both are Table Variables, it took 46 ms.
    - If both Table Variables use a unique constraint, the query takes 36 ms.
    - If neither table contains a primary key and both are Table Variables, it took 116,383 ms. Yes, nearly two minutes!!
    - If both tables contain a primary key, one is a Table Variable and the other is a temporary table, it took 113 ms.
    - If one table contains a primary key, and both are temporary tables, it took 56 ms.
    - If both tables are temporary tables and both have primary keys, it took 46 ms.

    Here we see table variables which are joined on their primary key again enjoying a slight performance advantage over temporary tables. Where both tables are table variables and both are heaps, the query suddenly takes nearly two minutes! So what if you have two heaps and you use OPTION (RECOMPILE)? If you take the rogue query and add the hint, then suddenly the query drops its time down to 76 ms. If you add unique indexes, then you’ve done even better, down to half that time. Here are the text execution plans. So where have we got to? Without drilling down into the minutiae of the execution plans, we can begin to create a hypothesis.
    If you are using table variables, and your tables are relatively small, they are faster than temporary tables, but as the number of rows increases you need to do one of two things: either have a primary key on the column you are using to join on, or use OPTION (RECOMPILE). If you try to execute a query that is a join, and both tables are table variable heaps, you are asking for trouble (well, slow queries) unless you give the hint once the number of rows has risen past a point (30,000 in our first example, but this varies considerably according to context).

    Kevin’s Skew

    In describing the table size, I used the term ‘relatively small’. Kevin Boles produced an interesting case where a single-row table variable produces a very poor execution plan when joined to a very, very skewed table. In the original, pasted into my article as a comment, a column consisted of 100,000 rows in which the key column was one number (1). To this were added eight rows with sequential numbers up to 9. When this was joined to a single-row Table Variable with a key of 2, it produced a bad plan. This problem is unlikely to occur in real usage, and the Query Optimiser team probably never set up a test for it. Actually, the skew can be slightly less extreme than Kevin made it. The following test showed that once the table had 54 sequential rows, it adopted exactly the same execution plan as for the temporary table, and then all was well. Undeniably, real data does occasionally cause problems for the performance of joins in Table Variables due to the extreme skew of the distribution. We’ve all experienced Perfectly Poisonous Table Variables in real live data. As in Kevin’s example, indexes merely make matters worse, and the OPTION (RECOMPILE) trick does nothing to help. In this case, there is no option but to use a temporary table. However, one has to note that once the slight de-skew had taken place, the plans were identical across a huge range.

    Conclusions

    Where you need to hold intermediate results as part of a process, Table Variables offer a good alternative to temporary tables when used wisely. They can perform faster than a temporary table when the number of rows is not great. For some processing with huge tables, they can perform well when only a clustered index is required, and when the nature of the processing makes an index seek very effective. Table Variables are scoped to the batch or procedure and are unlikely to hang about in TempDB when they are no longer required; they require no explicit cleanup. Where the number of rows in the table is moderate, you can even use them in joins as heaps, unindexed. Beware, however: as the number of rows increases, joins on Table Variable heaps can easily become saddled with very poor execution plans, and this must be cured either by adding constraints (UNIQUE or PRIMARY KEY) or by adding the OPTION (RECOMPILE) hint if this is impossible. Occasionally, the way that the data is distributed prevents the efficient use of Table Variables, and this will require using a temporary table instead. Table Variables require some awareness by the developer of the potential hazards and how to avoid them. If you are not prepared to do any performance monitoring of your code or fine-tuning, and just want to pummel out stuff that ‘just runs’ without considering namby-pamby stuff such as indexes, then stick to Temporary tables.
If you are likely to slosh about large numbers of rows in temporary tables without considering the niceties of processing just what is required and no more, then temporary tables provide a safer and less fragile means-to-an-end for you.

    Read the article

  • Subscribable World Cup 2010 Calendar

    - by jamiet
    I bang on quite a lot on this blog about ways in which data can get published over the web, and one of the most interesting ways, in my opinion, of publishing data in a structured manner that is well understood is to use the iCalendar specification. There isn’t much information in the world that doesn’t have some concept of “when”, so iCalendar is a great way of distributing that information. You have probably used iCalendar at some point without even knowing about it. All files with a .ics suffix are iCalendar format files, and that is why you can happily import them into Outlook, Hotmail Calendar, Google Calendar etc., where they can be parsed and have the semantic data (when, where and who) extracted from them. Importing of iCalendar format data is really only half the trick though; in my opinion the real value of iCalendar-formatted calendars is the ability to subscribe to them. Subscribing has a simple benefit over importing, but that single benefit is of massive importance: a subscriber to an iCalendar calendar can periodically check to see if any updates have been made and, if they have, automatically update the local copy. The real benefit to the user is the productivity gain – a single update to an iCalendar means that all subscribers are automatically made aware of the change, and there is zero effort on the part of the subscriber; as my former colleague Howard van Rooijen is fond of saying, “work smarter not harder” – nowhere is this edict more ably demonstrated than subscribing versus importing of calendars. If you want to read some more thoughts about iCalendar then go and read my past blog post Calendar syndication - My big hope for 2009's breakthrough technology, or better still go and seek out Jon Udell, who speaks very authoritatively on the issue of iCalendar. With this subject of iCalendar on my mind, I was interested to discover (via Steve Clayton’s blog post Download the world cup fixtures) that the BBC had made a .ics file available containing all of the matches in the upcoming World Cup. As you can probably guess, this was a file that was made available so that it could be imported into your calendar of choice. It had one obvious downside though: right now nobody knows who is going to be playing in the knock-out stages, so the calendar has no teams named after 25th June. How much more useful would this calendar have been if the BBC had made it possible to subscribe to it instead? The calendar could then be updated with the teams for the knock-out stages when they are known, and every subscriber would have a permanently up-to-date record of all the fixtures in their calendar. Better still, the calendar could be updated with match results as well, or perhaps even a match report from the BBC sport pages; when calendars are made subscribable, a sea of opportunity opens up for distribution of information. So with that in mind I have decided to go one better than the BBC. I have imported their .ics into a brand new Hotmail calendar and made it publicly available at the following URLs:

    - HTML: http://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/index.html
    - iCalendar: webcal://cid-dc1ed121af0476be.calendar.live.com/calendar/World+Cup+2010/calendar.ics

    The link you’re really interested in is the second one - click on that and it should open up in your calendar software of choice. Or, if you want to view it in an online calendar such as Hotmail Calendar or Google Calendar, copy and paste that URL into the appropriate place.
    Some people have told me they’re having trouble with the iCalendar link, in which case hit the HTML link and then click “View ICS” at the resultant web page. I shall endeavour to keep the calendar updated throughout the World Cup, and even if I don’t, you’re no worse off than if you had imported the BBC’s .ics file, so why not give it a try? If I do keep it up to date then you will have a permanent record of the 2010 World Cup available in your calendar. Forever. If you have your calendar synced to your smartphone then you’ll be carrying match reports around with you without having to do a single thing. Surely that’s worth a quick click, isn’t it? If you have any thoughts let me have them in the comments below. Thanks for reading. @Jamiet
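
    To make the subscribe-versus-import point concrete, here is a tiny sketch of the “poll and re-read” behaviour a subscribing client performs, using only Python’s standard library. It assumes the calendar above is reachable over plain http (the webcal:// scheme swapped for http://) and does a naive line scan rather than full iCalendar parsing:

        import urllib.request

        ICS_URL = ("http://cid-dc1ed121af0476be.calendar.live.com"
                   "/calendar/World+Cup+2010/calendar.ics")

        def fetch_summaries(url):
            # A subscriber re-fetches the .ics periodically; each fetch picks up
            # whatever fixtures the publisher has updated since the last poll.
            with urllib.request.urlopen(url) as resp:
                text = resp.read().decode("utf-8", errors="replace")
            return [line[len("SUMMARY:"):]
                    for line in text.splitlines()
                    if line.startswith("SUMMARY:")]

        for match in fetch_summaries(ICS_URL):
            print(match)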

    Read the article

  • Am I experienced enough to learn and develop immediately using Ruby on Rails?

    - by acheong87
    General Question I understand that discussions revolving around questions of this form run the risk of becoming too specific to help others. So, perhaps a better, general question would be: What kind of experience, if any, translates easily to Ruby on Rails; and if none, then what's the learning curve like, in comparison to other popular languages? Background I have the opportunity to build a website using whatever technologies I wish to use. It's a fairly simple website, for listing products, taking payments, managing customer data, providing a back-end portal for employees to manage data, possibly hooking in flight information (the products are travel related), possibly integrating a blog and all the social-networking goodies. Specific Problem I have to let the client know by tonight whether I'm interested in taking up this project, before he talks to other potential developers, but I'm on the fence. I already work a full-time C++ development job, so the money doesn't do it for me. It's the opportunity to (be paid to) learn some new technologies and to have a real, running product in the end. I've heard and read great things about Ruby, and am really intrigued. I zipped through some introductory Ruby tutorials, no sweat. However I found the Rails tutorials a little overwhelming, especially not being able to try it out anywhere. And researching Rails hosts like Heroku and EngineYard makes me think that maybe I don't know what I'm getting myself into. The ship's leaving port! I wish I had more time to learn, better yet play with the language, but I have to decide soon! Should I venture or pass? Additional Details My experiences are in C/C++/Tcl/Perl/PHP/jQuery, and basic knowledge of Java/C#. I didn't study C.S. formally so I wasn't exposed to design principles, programming paradigms, etc., which is my greatest concern. Will my lack of understanding in this realm make RoR frustrating to learn? Will it be so incompatible with a C++ "way" of thinking that I'll wish I never started? Am I putting my client at risk by attempting this? If it helps, I'm quick to learn new things (self-taught so far) and care a great deal about correctness, using things for their intended purposes, and so on. I've read numerous recommendations of Agile Development with Rails and would love to read it (though perhaps, while developing in parallel, for shortness of time). Worse comes to worst, I'd give up and do the standard LAMP gig, of course, not charging the client for wasted time. But I'm hoping to avoid the project altogether if it's gonna come down to that! Thanks in advance for any tips, insights, votes of confidence, votes of discouragement (for the better), and such.

    Read the article

  • Architecture for interfacing multiple applications

    - by Erwin
    Let's say you have a Master Database and a few External/Internal applications that use WebServices to interface data. What would be your preferred architecture to interface data from and to those applications? Would you put some sort of Enterprise Service Bus in between? Like BizTalk? Or something cheaper? We don't want to block applications while they are interfacing, but we do want to use return codes from the interfaces to determine if we need to take some actions in the originating application or not.

    Read the article

  • Send arbitrary commands to modem for info

    - by mydoghasworms
    I have a USB 3G data card and modem. According to the supplier, to check the data balance on the account I must dial a number like *89# (or whatever) from the handset. The device appears as a ppp0 interface in my network devices. How does one go about sending such commands to the modem and receiving back the information? UPDATE: OK, I found some help here: http://www.linuxquestions.org/questions/linux-hardware-18/sending-at-commands-to-a-sms-modem-372445/, but firstly, there are 3 ttyUSB devices on my system, and I assume one of them is the 3G modem - how can I tell which? Secondly, trying to echo something to two of them says that the device is busy, which stands to reason, because I am using it to connect to the internet. Some help, please?
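
    On the "how to issue commands" part: 3G dongles usually expose more than one serial port, and the AT command channel is typically one of the ttyUSB devices that PPP is not occupying. USSD codes such as *89# are sent with AT+CUSD. A sketch using the third-party pyserial package; the device path, baud rate and the exact USSD string are assumptions that depend on the modem and provider:

        import serial  # third-party: pip install pyserial

        # The ttyUSB device PPP is using will report "busy"; try the others.
        port = serial.Serial("/dev/ttyUSB2", baudrate=115200, timeout=5)

        def send_at(cmd):
            port.write((cmd + "\r\n").encode("ascii"))
            return port.read(1024).decode("ascii", errors="replace")

        print(send_at("ATI"))                   # identify the modem
        print(send_at('AT+CUSD=1,"*89#",15'))   # hypothetical USSD balance query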

    Read the article

  • YouSendIt Alternative?

    - by WuckaChucka
    Looking for a reasonably priced alternative to YouSendIt's exorbitant pricing for an embedded, unbranded (i.e. no "Uploads by SomeCompany" or, at the very least, discreet, subtle co-branding) file upload solution for my client's print shop Website. To do what we want to do with YouSendIt, we're looking at a corporate account at $995 USD plus a $29.99 USD monthly fee, which is only sold pro-rated, so you have to buy the entire year's worth. To me, this is just unacceptable considering the commodity pricing of storage and bandwidth nowadays. For data, we're looking at roughly 10MB per upload, with perhaps 250-1000 uploads per month, with transient data storage of no more than 30 days (and more than likely 1-2 business days), for a total of 10 GB transfer (upload) and 10 GB transfer (download, to the print shop) at the very max each month. Any ideas? Everything I've found through searching seems to be geared more towards personal file sharing and not for embedding into Websites. Thanks

    Read the article

  • WPF more dynamic views and DataAnnotations

    - by Ingó Vals
    Comparing WPF with ASP.NET Razor/HtmlHelpers, I find WPF/XAML to be somewhat lacking when it comes to creating views. With HtmlHelpers you can define in one place how you want to represent a specific type of data, and include element settings drawn from the DataAnnotations of the property. In WPF you can also define DataTemplates for data, but they seem much more limited than EditorTemplates; they don't use information from DataAnnotations. The layout of elements can also be bothersome: I hate having to constantly add RowDefinitions and update the Grid.Row attribute of a lot of elements whenever I add a new property somewhere in the middle. I understand that GUI programming can involve a lot of grunt work like this, but as ASP.NET MVC has shown, there are ways around that. What solutions are out there to make view creation in WPF a little bit cleaner, more maintainable and more dynamic?

    Read the article

  • How can I restore a corrupted Shotwell db?

    - by Yann Dìnendal
    While I was importing photos from an SD card in Shotwell, my laptop crashed (it shut down because of overheating). Now I can no longer launch Shotwell:

        $ shotwell
        ** ERROR:x86_64-linux-gnu/db/VersionTable.c:104:version_table_construct: assertion failed: (res == SQLITE_OK)

    If I move the .db file ~/.shotwell/data/photo.db elsewhere so it is no longer present (but backed up), Shotwell will launch but will rebuild its library, which will be empty. I can re-import the folders Shotwell creates, but I lose nearly all metadata (most importantly the "folders", or events, and their main picture). Is there a way to repair/rebuild/restore photo.db? How can I see what's inside, and how badly it's damaged? Or is there another way to get back my events and imports?
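
    Since photo.db is an ordinary SQLite database, the "how badly is it damaged?" question can at least be answered from Python's standard library before trying anything destructive. A sketch; run it against the backed-up copy:

        import sqlite3

        DB_PATH = "photo.db"  # the backed-up copy of ~/.shotwell/data/photo.db

        con = sqlite3.connect(DB_PATH)
        try:
            # integrity_check walks the whole file; a single "ok" row means
            # the database pages themselves are sound.
            for (result,) in con.execute("PRAGMA integrity_check"):
                print(result)
            # List the tables that are still readable.
            for (name,) in con.execute(
                    "SELECT name FROM sqlite_master WHERE type='table'"):
                print("table:", name)
        finally:
            con.close()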

    Read the article

  • Is this proper OO design for C++?

    - by user121917
    I recently took a software processes course and this is my first time attempting OO design on my own. I am trying to follow OO design principles and C++ conventions. I attempted and gave up on MVC for this application, but I am trying to "decouple" my classes such that they can be easily unit-tested and so that I can easily change the GUI library used and/or the target OS. At this time, I have finished designing classes but have not yet started implementing methods. The function of the software is to log all packets sent and received, and display them on the screen (like WireShark, but for one local process only). The software accomplishes this by hooking the send() and recv() functions in winsock32.dll, or some other pair of analogous functions depending on what the intended Target is. The hooks add packets to SendPacketList/RecvPacketList. The GuiLogic class starts a thread which checks for new packets. When new packets are found, it utilizes the PacketFilter class to determine the formatting for the new packet, and then sends it to MainWindow, a native win32 window (with intent to later port to Qt).

    [Full size image of UML class diagram]

    Here are my classes in skeleton/header form (this is my actual code):

        class PacketModel
        {
        protected:
            std::vector<byte> data;
            int id;
        public:
            PacketModel();
            PacketModel(byte* data, unsigned int size);
            PacketModel(int id, byte* data, unsigned int size);
            int GetLen();
            bool IsValid(); //len >= sizeof(opcode_t)
            opcode_t GetOpcode();
            byte* GetData(); //returns &(data[0])
            bool GetData(byte* outdata, int maxlen);
            void SetData(byte* pdata, int len);
            int GetId();
            void SetId(int id);
            bool ParseData(char* instr);
            bool StringRepr(char* outstr);
            byte& operator[] (const int index);
        };

        class SendPacket : public PacketModel
        {
        protected:
            byte* returnAddy;
        public:
            byte* GetReturnAddy();
            void SetReturnAddy(byte* addy);
        };

        class RecvPacket : public PacketModel
        {
        protected:
            byte* callAddy;
        public:
            byte* GetCallAddy();
            void SetCallAddy(byte* addy);
        };

        //problem: packets may be added to list at any time by any number of threads
        //solution: critical section associated with each packet list
        class Synch
        {
        public:
            void Enter();
            void Leave();
        };

        template<class PacketType>
        class PacketList
        {
        private:
            static const int MAX_STORED_PACKETS = 1000;
        public:
            static const int DEFAULT_SHOWN_PACKETS = 100;
        private:
            vector<PacketType> list;
            Synch synch; //wrapper for critical section
        public:
            void AddPacket(PacketType* packet);
            PacketType* GetPacket(int id);
            int TotalPackets();
        };

        class SendPacketList : PacketList<SendPacket>
        {
        };

        class RecvPacketList : PacketList<RecvPacket>
        {
        };

        class Target //one socket
        {
            bool Send(SendPacket* packet);
            bool Inject(RecvPacket* packet);
            bool InitSendHook(SendPacketList* sendList);
            bool InitRecvHook(RecvPacketList* recvList);
        };

        class FilterModel
        {
        private:
            opcode_t opcode;
            int colorID;
            bool bFilter;
            char name[41];
        };

        class FilterFile
        {
        private:
            FilterModel filter;
        public:
            void Save();
            void Load();
            FilterModel* GetFilter(opcode_t opcode);
        };

        class PacketFilter
        {
        private:
            FilterFile filters;
        public:
            bool IsFiltered(opcode_t opcode);
            bool GetName(opcode_t opcode, char* namestr); //return false if name does not exist
            COLORREF GetColor(opcode_t opcode); //return default color if no custom color
        };

        class GuiLogic
        {
        private:
            SendPacketList sendList;
            RecvPacketList recvList;
            PacketFilter packetFilter;
            void GetPacketRepr(PacketModel* packet);
            void ReadNew();
            void AddToWindow();
        public:
            void Refresh(); //called from thread
            void GetPacketInfo(int id); //called from MainWindow
        };

    I'm looking for a general review of my OO design, use of UML, and use of C++ features. I especially just want to know if I'm doing anything considerably wrong. From what I've read, design review is on-topic for this site (and off-topic for the Code Review site). Any sort of feedback is greatly appreciated. Thanks for reading this.

    Read the article

  • Breadcrumb using schema.org rich snippets

    - by Adam Jenkin
    I am having problems implementing the breadcrumb rich snippets from schema.org. When I construct my breadcrumb following the documentation and run it through the Google Rich Snippet testing tool, the breadcrumb is identified but not shown in the preview:

        <!DOCTYPE html>
        <html>
        <head>
            <title>My Test Page</title>
        </head>
        <body itemscope itemtype="http://schema.org/WebPage">
            <strong>You are here: </strong>
            <div itemprop="breadcrumb">
                <a title="Home" href="/">Home</a> >
                <a title="Test Pages" href="/Test-Pages/">Test Pages</a> >
            </div>
        </body>
        </html>

    If I change to the snippets from data-vocabulary.org, the rich snippets show correctly in the preview:

        <!DOCTYPE html>
        <html>
        <head>
            <title>My Test Page</title>
        </head>
        <body>
            <strong>You are here: </strong>
            <ol itemprop="breadcrumb">
                <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
                    <a href="/" itemprop="url">
                        <span itemprop="title">Home</span>
                    </a>
                </li>
                <li itemscope itemtype="http://data-vocabulary.org/Breadcrumb">
                    <a href="/Test-Pages/" itemprop="url">
                        <span itemprop="title">Test Pages</span>
                    </a>
                </li>
            </ol>
        </body>
        </html>

    I want the breadcrumb to be shown in the search result rather than the URL of the page. Given that schema.org is the recommended way to use rich snippets, I would rather use it; however, as the breadcrumb is not showing in the preview of the search result with this method, I'm not convinced it is working correctly. Am I doing something wrong in the markup of the schema.org example?

    Read the article

  • Help choosing a cost-effective game server for a Flash client

    - by Sapots Thomas
    I am developing a Flash-based game, primarily for desktops, to be hosted on the Facebook platform (like CityVille, The Sims Social, etc.). The gameplay doesn't involve real-time communication between players, unlike an MMORPG; each player plays in his own world without any knowledge of other online players. I've written almost 95% of the game logic in ActionScript on the client side. I used SmartFox Server Pro on the server side (mostly for getting data from the DB), and the entire server code is an extension written in Java. I'm using JSON as the protocol for communication. Although I love SmartFox Server, as an indie it's tough for me to afford the unlimited-users license. Moreover, it's limited to just one machine. So I'm looking for an alternative to SmartFox Server now. The reason for choosing SmartFox Server earlier was to use the server properties it supports. Server properties on SmartFox Server take advantage of the socket connection and are essentially server-side objects in Java which store some data for the player, which he can change frequently during the game. When he logs out of the game, the extension can write out the final state to the DB (I'm using MySQL). This significantly reduces the number of DB UPDATE/INSERT calls made during the game. I love the way this works, since the data is secure on the server side and SmartFox Server is known to be scalable. (Although I'm not sure whether this approach is widely used in the gaming industry; since this is not an MMORPG, I'm putting all players in the lobby.) So my question is whether any of the free, community-supported servers like RedDwarf, Firebase or BlazeDS can provide a similar architecture, so that I can use server properties without many code changes. EDIT: I am not insisting on the exact same feature (that's asking too much!), but at least a viable messaging system on the server so that I can send ActionScript objects from the client using JSON/binary so that it's fast. Or maybe some completely different way to implement what I need here. Thanks in advance.

    Read the article

  • Manage and Monitor Identity Ranges in SQL Server Transactional Replication

    - by Yaniv Etrogi
    Problem

    When using transactional replication to replicate data in a one-way topology from a publisher to read-only subscriber(s), there is no need to manage identity ranges. However, when using transactional replication to replicate data in a two-way replication topology (between two or more servers), there is a need to manage identity ranges in order to prevent a situation where an INSERT command fails on a PRIMARY KEY violation error because the replicated row being inserted has a value for the identity column which already exists at the destination database.

    Solution

    There are two ways to address this situation:

    1. Assign a range of identity values to each server.
    2. Work with parallel identity values.

    The first method requires some maintenance while the second method does not, so the scripts provided with this article are very useful for anyone using the first method. I will explore this in more detail later in the article. In the first solution, set server1 to work in the range of 1 to 1,000,000,000 and server2 to work in the range of 1,000,000,001 to 2,000,000,000. The ranges are set and defined using the DBCC CHECKIDENT command, and when the ranges in this example are well maintained you meet the goal of preventing INSERT commands from failing on a PRIMARY KEY violation. The first insert at server1 will get the identity value of 1, the second insert will get the value of 2 and so on, while on server2 the first insert will get the identity value of 1000000001, the second insert 1000000002 and so on, thus avoiding a conflict. Be aware that when a row is inserted, the identity value (seed) is generated as part of the insert command at each server and the inserted row is replicated. The replicated row includes the identity column's value, so the data remains consistent across all servers, but you will be able to tell on which server the original insert took place from the range that the identity value belongs to. In the second solution you do not manage ranges but enforce a situation in which identity values can never overlap, by setting the first identity value (seed) and the increment property one time only, during the CREATE TABLE command of each table. So a table on server1 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(1, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    And a table on server2 looks like this:

        CREATE TABLE T1 (
          c1 int NOT NULL IDENTITY(2, 5) PRIMARY KEY CLUSTERED
         ,c2 int NOT NULL
        );

    When these two tables are inserted into, the resulting identity values look like this:

        Server1: 1, 6, 11, 16, 21, 26…
        Server2: 2, 7, 12, 17, 22, 27…

    This assures no identity value conflicts while leaving room for 3 additional servers to participate in this same environment. You can go up to 9 servers using this method by setting an increment value of 9 instead of the 5 I used in this example. Continues…

    Read the article

  • How to represent a tree structure in NoSQL

    - by Vlad Nicula
    I'm new to NoSQL and have been playing around with a personal project on the MEAN stack (Mongo, ExpressJS, AngularJS, NodeJS). I'm building a document editor of sorts that manages nodes of data. Each document is actually a tree. I have a CRUD API for documents, to create new trees, and a CRUD API for nodes in a given document. Right now the documents are represented as a collection that holds everything, including nodes. The child-parent relationship is done by ids: the nodes form a map by id, and each node has references to which nodes are its children. I chose this "flat" approach because it is easier to get a node by id from a document. Being used to a relation table between nodes and documents, and a relation table between parent and child nodes, I find it a bit weird that I have to save the entire "nodes" map each time I update a node. Is there a better way to represent such a data type in NoSQL?
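
    For illustration, the flat "nodes map plus child-id lists" shape described above might look like the first structure below when stored as a single document, while a common alternative is one small document per node with a parent reference, so updating a node rewrites only that node's document rather than the whole tree. All field names here are made up:

        # Hypothetical shape of one document tree stored as a single record:
        tree_doc = {
            "_id": "doc-42",
            "title": "My document",
            "rootId": "n1",
            "nodes": {  # flat map: O(1) lookup of any node by id
                "n1": {"text": "root", "children": ["n2", "n3"]},
                "n2": {"text": "left leaf", "children": []},
                "n3": {"text": "right leaf", "children": []},
            },
        }

        # The per-node alternative: one small document per node, linked by parent id.
        node_docs = [
            {"_id": "n1", "docId": "doc-42", "parentId": None, "text": "root"},
            {"_id": "n2", "docId": "doc-42", "parentId": "n1", "text": "left leaf"},
            {"_id": "n3", "docId": "doc-42", "parentId": "n1", "text": "right leaf"},
        ]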

    Read the article
