Search Results

Search found 32512 results on 1301 pages for 'object oriented analysis'.


  • Using Table-Valued Parameters With SQL Server Reporting Services

    - by Jesse
    In my last post I talked about using table-valued parameters to pass a list of integer values to a stored procedure without resorting to comma-delimited strings and parsing each value out into a TABLE variable. In this post I’ll extend the “Customer Transaction Summary” report example to see how we might leverage this same stored procedure from within a SQL Server Reporting Services (SSRS) report. I’ve worked with SSRS off and on for the past several years and have generally found it to be a very useful tool for building nice-looking reports for end users quickly and easily. That said, I’ve been frustrated by SSRS from time to time when seemingly simple things are difficult to accomplish or simply not supported at all. I thought that using table-valued parameters from within an SSRS report would be simple, but unfortunately I was wrong.

    Customer Transaction Summary Example

    Let’s take the “Customer Transaction Summary” report example from the last post and try to plug that same stored procedure into an SSRS report. Our report will have three parameters:

    Start Date – beginning of the date range for which the report will summarize customer transactions
    End Date – end of the date range for which the report will summarize customer transactions
    Customer Ids – one or more customer Ids representing the customers that will be included in the report

    The simplest way to get started with this report is to create a new dataset and point it at our Customer Transaction Summary stored procedure (note that I’m using SSRS 2012 in the screenshots below, but there should be little to no difference with SSRS 2008). When you initially create this dataset the SSRS designer will try to invoke the stored procedure to determine the parameters and output fields for you automatically, and as part of this process a parameter-value dialog pops up. Obviously I can’t use this dialog to specify a value for the ‘@customerIds’ parameter, since it is of the IntegerListTableType user-defined type that we created in the last post. Unfortunately this really throws the SSRS designer for a loop, and regardless of what combination of Data Type, Pass Null Value, or Parameter Value I used here, I kept getting an error dialog with the message, "Operand type clash: nvarchar is incompatible with IntegerListTableType". This error message makes some sense considering that the nvarchar type is indeed incompatible with the IntegerListTableType, but there’s little clue given as to how to remedy the situation. I don’t know for sure, but I suspect that behind the scenes the SSRS designer is building an nvarchar-typed SqlParameter for the @customerIds parameter, which causes the clash. When I first saw this error I figured that it might just be a limitation of the dataset designer and that I’d be able to work around the issue by manually defining the parameters. I know that there are some special steps that need to be taken when invoking a stored procedure with a table-valued parameter from ADO .NET, so I figured that I might be able to use some custom code embedded in the report to create a SqlParameter instance with the needed properties and value to make this work, but the “Operand type clash" error message persisted.

    The Text Query Approach

    Just because we’re using a stored procedure to create the dataset for this report doesn’t mean that we can’t use the ‘Text’ Query Type option and construct an EXEC statement that will invoke the stored procedure.
    In order for this to work properly, the EXEC statement will also need to declare and populate an IntegerListTableType variable to pass into the stored procedure. Before I go any further I want to make one point clear: this is a really ugly hack and it makes me cringe to do it. Simply put, I strongly feel that it should not be this difficult to use a table-valued parameter with SSRS. With that said, let’s take a look at what we’ll have to do to make this work.

    Manually Define Parameters

    First, we’ll need to manually define the parameters for the report by right-clicking on the ‘Parameters’ folder in the ‘Report Data’ window. We’ll define ‘@startDate’ and ‘@endDate’ as simple date parameters. We’ll also create a parameter called ‘@customerIds’ that will be a multi-valued Integer parameter. In the ‘Available Values’ tab we’ll point this parameter at a simple dataset that just returns the CustomerId and CustomerName of each row in the Customers table of the database, or manually define a handful of Customer Id values to make available when the report runs.

    Once we have these parameters properly defined we can take another crack at creating the dataset that will invoke the ‘rpt_CustomerTransactionSummary’ stored procedure. This time we’ll choose the ‘Text’ query type option and put the following into the ‘Query’ text area:

        1: exec('declare @customerIdList IntegerListTableType ' + @customerIdInserts +
        2: ' EXEC rpt_CustomerTransactionSummary
        3: @startDate=''' + @startDate + ''',
        4: @endDate=''' + @endDate + ''',
        5: @customerIds=@customerIdList')

    By using the ‘Text’ query type we can enter any arbitrary SQL that we want and then use parameters and string concatenation to inject pieces of that query at run time. It can be a bit tricky to parse this out at first glance, but from the SSRS designer’s point of view this query defines three parameters:

    @customerIdInserts – a Text parameter that we use to define INSERT statements that will populate the @customerIdList variable being declared in the SQL. This parameter never actually gets passed into the stored procedure; I’ll go into how it works in a bit.
    @startDate – a simple date parameter that gets passed through directly into the @startDate parameter of the stored procedure on line 3.
    @endDate – another simple date parameter that gets passed through into the @endDate parameter of the stored procedure on line 4.

    At this point the dataset designer will be able to correctly parse the query, and should even be able to detect the fields that the stored procedure will return without needing any values supplied when prompted. Once the dataset has been correctly defined we’ll have a @customerIdInserts parameter listed in the ‘Parameters’ tab of the dataset designer. We need to define an expression for this parameter that will take the values selected by the user for the ‘@customerIds’ parameter that we defined earlier and convert them into INSERT statements that will populate the @customerIdList variable that we defined in our Text query. In order to do this we’ll need to add some custom code to our report using the ‘Report Properties’ dialog. Any custom code defined in the Report Properties dialog gets embedded into the .rdl of the report itself and (unfortunately) must be written in VB .NET.
    Note that you can also add references to custom .NET assemblies (which could be written in any language), but that’s outside the scope of this post, so we’ll stick with the “quick and dirty” VB .NET approach for now. Here’s the VB .NET code (note that any embedded code that you add here must be defined in a static/shared function, though you can define as many functions as you want):

        Public Shared Function BuildIntegerListInserts(ByVal variableName As String, ByVal paramValues As Object()) As String
            Dim insertStatements As New System.Text.StringBuilder()
            For Each paramValue As Object In paramValues
                insertStatements.AppendLine(String.Format("INSERT {0} VALUES ({1})", variableName, paramValue))
            Next
            Return insertStatements.ToString()
        End Function

    This method takes a variable name and an array of objects. We use an array of objects here because that is how SSRS will pass us the values that were selected by the user at run time. The method uses a StringBuilder to construct one INSERT statement per value in the object array, inserting each value into the provided variable name. For example, if the user selects customers 1, 5, and 42, the method returns three lines: INSERT @customerIdList VALUES (1), INSERT @customerIdList VALUES (5), and INSERT @customerIdList VALUES (42). Once this method has been defined in the custom code for the report we can go back into the dataset designer’s Parameters tab and update the expression for the ‘@customerIdInserts’ parameter by clicking on the button with the “function” symbol that appears to the right of the parameter value. We’ll set the expression to:

        =Code.BuildIntegerListInserts("@customerIdList ", Parameters!customerIds.Value)

    To invoke our custom code we simply call “Code.<method name>” and pass in any needed parameters. The first parameter needs to match the name of the IntegerListTableType variable that we used in the EXEC statement of our query. The second parameter comes from the Value property of the ‘@customerIds’ parameter (this evaluates to an object array at run time). Finally, we’ll need to edit the properties of the ‘@customerIdInserts’ parameter on the report to mark it as a nullable internal parameter so that users aren’t prompted to provide a value for it when running the report.

    Limitations And Final Thoughts

    When I first started looking into the text query approach described above I wondered if there might be an upper limit to the size of the string that can be used to run a report. Obviously the size of the actual query could increase pretty dramatically if you have a parameter with a lot of potential values or you need to support several table-valued parameters in the same query. I tested the example Customer Transaction Summary report with 1000 selected customers without any issue, but your mileage may vary depending on how much data you need to pass into your query. If you think that the text query hack is a lot of work just to use a table-valued parameter, I agree! It should be a lot easier than this to use a table-valued parameter from within SSRS, but so far I haven’t found a better way. It might be possible to create some custom .NET code that could build the EXEC statement for a given set of parameters automatically, but exploring that will have to wait for another post. For now, unless there’s a really compelling reason or requirement to use table-valued parameters from SSRS reports, I would probably stick with the tried and true “join-multi-valued-parameter-to-CSV-and-split-in-the-query” approach for using multi-valued parameters in a stored procedure.
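    For reference, the ADO.NET-side “special steps” mentioned earlier look roughly like the sketch below. This is only a sketch: the connection string, the column name inside IntegerListTableType, and the result handling are illustrative assumptions, not details from the post.

        using System;
        using System.Data;
        using System.Data.SqlClient;

        class TvpSketch
        {
            static void Main()
            {
                // In-memory table whose shape matches IntegerListTableType
                // (a single int column is assumed here).
                var customerIds = new DataTable();
                customerIds.Columns.Add("Value", typeof(int));
                foreach (var id in new[] { 1, 5, 42 })
                {
                    customerIds.Rows.Add(id);
                }

                using (var conn = new SqlConnection("Server=.;Database=ReportDb;Integrated Security=true"))
                using (var cmd = new SqlCommand("rpt_CustomerTransactionSummary", conn))
                {
                    cmd.CommandType = CommandType.StoredProcedure;
                    cmd.Parameters.AddWithValue("@startDate", new DateTime(2012, 1, 1));
                    cmd.Parameters.AddWithValue("@endDate", new DateTime(2012, 12, 31));

                    // The crucial part: a structured parameter typed with the UDT's name.
                    var tvp = cmd.Parameters.AddWithValue("@customerIds", customerIds);
                    tvp.SqlDbType = SqlDbType.Structured;
                    tvp.TypeName = "dbo.IntegerListTableType";

                    conn.Open();
                    using (var reader = cmd.ExecuteReader())
                    {
                        while (reader.Read())
                        {
                            Console.WriteLine(reader[0]);
                        }
                    }
                }
            }
        }

    This works because ADO.NET sends the DataTable as a true table-valued parameter; it is exactly this Structured/TypeName pairing that the SSRS designer appears unable to produce on its own.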


  • Using Microsoft's Chart Controls In An ASP.NET Application: Serializing Chart Data

    In most usage scenarios, the data displayed in a Microsoft Chart control comes from some dynamic source, such as from a database query. The appearance of the chart can be modified dynamically, as well; past installments in this article series showed how to programmatically customize the axes, labels, and other appearance-related settings. However, it is possible to statically define the chart's data and appearance strictly through the control's declarative markup. One of the demos examined in the Getting Started article rendered a column chart with seven columns whose labels and values were defined statically in the <asp:Series> tag's <Points> collection. Given this functionality, it should come as no surprise that the Microsoft Chart Controls also support serialization. Serialization is the process of persisting the state of a control or an object to some other medium, such as to disk. Deserialization is the inverse process, and involves taking the persisted data and recreating the control or object. With just a few lines of code you can persist the appearance settings, the data, or both to a file on disk or to any stream. Likewise, it takes just a few lines of code to reconstitute a chart from the persisted information. This article shows how to use the Microsoft Chart Control's serialization functionality by examining a demo application that allows users to create custom charts, specifying the data to plot and some appearance-related settings. The user can then save a "snapshot" of this chart, which persists its appearance and data to a record in a database. From another page, users can view these saved chart snapshots. Read on to learn more!
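    Those "few lines of code" center on the Chart control's Serializer property. A minimal sketch, assuming a Web Forms page with a Chart named Chart1 (the file path and method names are placeholders, not from the article):

        using System;
        using System.Web.UI.DataVisualization.Charting;

        public partial class ChartSnapshot : System.Web.UI.Page
        {
            // Save the chart's data and appearance to XML (path is a placeholder).
            protected void SaveSnapshot(object sender, EventArgs e)
            {
                Chart1.Serializer.Content = SerializationContents.All; // or .Data / .Appearance
                Chart1.Serializer.Save(Server.MapPath("~/App_Data/chart-snapshot.xml"));
            }

            // Reconstitute the chart from the persisted XML.
            protected void LoadSnapshot(object sender, EventArgs e)
            {
                Chart1.Serializer.Load(Server.MapPath("~/App_Data/chart-snapshot.xml"));
            }
        }

    To store the snapshot in a database record instead, as the demo does, Save can target a MemoryStream whose bytes are then written to the table.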


  • Ogre 3d and bullet physics interaction

    - by Tim
    I have been playing around with Ogre3D and trying to integrate Bullet physics. I previously got this working somewhat successfully with Irrlicht and Bullet, and I am trying to base this on what I had done there, modifying it to fit with Ogre. It is working, but not correctly, and I would like some help understanding what I am doing wrong. I have a state system, and when I enter the "gamestate" I call some functions such as setting up a basic scene and creating the physics simulation. I am doing that as follows.

        void GameState::enter()
        {
            ...
            // Setup Physics
            btBroadphaseInterface *BroadPhase = new btAxisSweep3(btVector3(-1000,-1000,-1000), btVector3(1000,1000,1000));
            btDefaultCollisionConfiguration *CollisionConfiguration = new btDefaultCollisionConfiguration();
            btCollisionDispatcher *Dispatcher = new btCollisionDispatcher(CollisionConfiguration);
            btSequentialImpulseConstraintSolver *Solver = new btSequentialImpulseConstraintSolver();
            World = new btDiscreteDynamicsWorld(Dispatcher, BroadPhase, Solver, CollisionConfiguration);
            ...
            createScene();
        }

    In the createScene method I add a light and try to set up a "ground" plane to act as the ground for things to collide with, as follows. I expect there are issues with this, as objects collide with the ground but sink halfway through it, and they glitch around like crazy on collision.

        void GameState::createScene()
        {
            m_pSceneMgr->createLight("Light")->setPosition(75,75,75);

            // Physics
            // As a test we want a floor plane for things to collide with
            Ogre::Entity *ent;
            Ogre::Plane p;
            p.normal = Ogre::Vector3(0,1,0);
            p.d = 0;
            Ogre::MeshManager::getSingleton().createPlane(
                "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
                p, 200000, 200000, 20, 20, true, 1, 9000, 9000, Ogre::Vector3::UNIT_Z);
            ent = m_pSceneMgr->createEntity("floor", "FloorPlane");
            ent->setMaterialName("Test/Floor");
            Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode();
            node->attachObject(ent);

            btTransform Transform;
            Transform.setIdentity();
            Transform.setOrigin(btVector3(0,1,0));

            // Give it to the motion state
            btDefaultMotionState *MotionState = new btDefaultMotionState(Transform);
            btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0);

            // Add Mass
            btVector3 LocalInertia;
            Shape->calculateLocalInertia(0, LocalInertia);

            // Create the rigid body object
            btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia);

            // Store a pointer to the Ogre Node so we can update it later
            RigidBody->setUserPointer((void *) (node));

            // Add it to the physics world
            World->addRigidBody(RigidBody);
            Objects.push_back(RigidBody);
            m_pNumEntities++;
            // End Physics
        }

    I then have a method to create a cube and give it rigid body physics properties. I know there will be errors here, as the items collide with the ground but not with each other properly, so I would appreciate some input on what I am doing wrong.
        void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass)
        {
            Ogre::Vector3 size = Ogre::Vector3::ZERO;
            Ogre::Vector3 pos = Ogre::Vector3::ZERO;
            Ogre::Vector3 scale = Ogre::Vector3::ZERO;

            pos.x = TPosition.getX();
            pos.y = TPosition.getY();
            pos.z = TPosition.getZ();
            scale.x = TScale.getX();
            scale.y = TScale.getY();
            scale.z = TScale.getZ();

            Ogre::Entity *entity = m_pSceneMgr->createEntity(
                "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh");
            entity->setCastShadows(true);

            Ogre::AxisAlignedBox boundingB = entity->getBoundingBox();
            size = boundingB.getSize();
            //size /= 2.0f; // Only the half needed?
            //size *= 0.96f; // Bullet margin is a bit bigger so we need a smaller size
            entity->setMaterialName("Test/Cube");

            Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode();
            node->attachObject(entity);
            node->setPosition(pos);
            //node->scale(scale);

            // Physics
            btTransform Transform;
            Transform.setIdentity();
            Transform.setOrigin(TPosition);

            // Give it to the motion state
            btDefaultMotionState *MotionState = new btDefaultMotionState(Transform);
            btVector3 HalfExtents(TScale.getX()*0.5f, TScale.getY()*0.5f, TScale.getZ()*0.5f);
            btCollisionShape *Shape = new btBoxShape(HalfExtents);

            // Add Mass
            btVector3 LocalInertia;
            Shape->calculateLocalInertia(TMass, LocalInertia);

            // Create the rigid body object
            btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia);

            // Store a pointer to the Ogre Node so we can update it later
            RigidBody->setUserPointer((void *) (node));

            // Add it to the physics world
            World->addRigidBody(RigidBody);
            Objects.push_back(RigidBody);
            m_pNumEntities++;
        }

    Then in the GameState::update() method, which runs every frame to handle input and rendering, I call an UpdatePhysics method to update the physics simulation.

        void GameState::UpdatePhysics(unsigned int TDeltaTime)
        {
            World->stepSimulation(TDeltaTime * 0.001f, 60);

            btRigidBody *TObject;
            for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it)
            {
                // Update renderer
                Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer());
                TObject = *it;

                // Set position
                btVector3 Point = TObject->getCenterOfMassPosition();
                node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2]));

                // Set rotation
                btVector3 EulerRotation;
                QuaternionToEuler(TObject->getOrientation(), EulerRotation);
                node->setOrientation(1, (Ogre::Real)EulerRotation[0], (Ogre::Real)EulerRotation[1], (Ogre::Real)EulerRotation[2]);
                //node->rotate(Ogre::Vector3(EulerRotation[0], EulerRotation[1], EulerRotation[2]));
            }
        }

        void GameState::QuaternionToEuler(const btQuaternion &TQuat, btVector3 &TEuler)
        {
            btScalar W = TQuat.getW();
            btScalar X = TQuat.getX();
            btScalar Y = TQuat.getY();
            btScalar Z = TQuat.getZ();

            float WSquared = W * W;
            float XSquared = X * X;
            float YSquared = Y * Y;
            float ZSquared = Z * Z;

            TEuler.setX(atan2f(2.0f * (Y * Z + X * W), -XSquared - YSquared + ZSquared + WSquared));
            TEuler.setY(asinf(-2.0f * (X * Z - Y * W)));
            TEuler.setZ(atan2f(2.0f * (X * Y + Z * W), XSquared - YSquared - ZSquared + WSquared));
            TEuler *= RADTODEG;
        }

    I seem to have issues with the cubes not colliding with each other and colliding strangely with the ground. I have tried to capture the effect with the attached image. I would appreciate any help in understanding what I have done wrong. Thanks.

    EDIT: Solution

    The following code shows the changes I made to get accurate physics.
        void GameState::createScene()
        {
            m_pSceneMgr->createLight("Light")->setPosition(75,75,75);

            // Physics
            // As a test we want a floor plane for things to collide with
            Ogre::Entity *ent;
            Ogre::Plane p;
            p.normal = Ogre::Vector3(0,1,0);
            p.d = 0;
            Ogre::MeshManager::getSingleton().createPlane(
                "FloorPlane", Ogre::ResourceGroupManager::DEFAULT_RESOURCE_GROUP_NAME,
                p, 200000, 200000, 20, 20, true, 1, 9000, 9000, Ogre::Vector3::UNIT_Z);
            ent = m_pSceneMgr->createEntity("floor", "FloorPlane");
            ent->setMaterialName("Test/Floor");
            Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode();
            node->attachObject(ent);

            btTransform Transform;
            Transform.setIdentity();
            // Fixed the transform vector here: y is back to 0 to stop the
            // objects sinking into the ground.
            Transform.setOrigin(btVector3(0,0,0));

            // Give it to the motion state
            btDefaultMotionState *MotionState = new btDefaultMotionState(Transform);
            btCollisionShape *Shape = new btStaticPlaneShape(btVector3(0,1,0),0);

            // Add Mass
            btVector3 LocalInertia;
            Shape->calculateLocalInertia(0, LocalInertia);

            // Create the rigid body object
            btRigidBody *RigidBody = new btRigidBody(0, MotionState, Shape, LocalInertia);

            // Store a pointer to the Ogre Node so we can update it later
            RigidBody->setUserPointer((void *) (node));

            // Add it to the physics world
            World->addRigidBody(RigidBody);
            Objects.push_back(RigidBody);
            m_pNumEntities++;
            // End Physics
        }

        void GameState::CreateBox(const btVector3 &TPosition, const btVector3 &TScale, btScalar TMass)
        {
            Ogre::Vector3 size = Ogre::Vector3::ZERO;
            Ogre::Vector3 pos = Ogre::Vector3::ZERO;
            Ogre::Vector3 scale = Ogre::Vector3::ZERO;

            pos.x = TPosition.getX();
            pos.y = TPosition.getY();
            pos.z = TPosition.getZ();
            scale.x = TScale.getX();
            scale.y = TScale.getY();
            scale.z = TScale.getZ();

            Ogre::Entity *entity = m_pSceneMgr->createEntity(
                "Box" + Ogre::StringConverter::toString(m_pNumEntities), "cube.mesh");
            entity->setCastShadows(true);

            Ogre::AxisAlignedBox boundingB = entity->getBoundingBox();
            // The Ogre bounding box is slightly bigger so I am reducing it for
            // use with the rigid body.
            size = boundingB.getSize()*0.95f;
            entity->setMaterialName("Test/Cube");

            Ogre::SceneNode *node = m_pSceneMgr->getRootSceneNode()->createChildSceneNode();
            node->attachObject(entity);
            node->setPosition(pos);
            node->showBoundingBox(true);
            //node->scale(scale);

            // Physics
            btTransform Transform;
            Transform.setIdentity();
            Transform.setOrigin(TPosition);

            // Give it to the motion state
            btDefaultMotionState *MotionState = new btDefaultMotionState(Transform);

            // I got the size of the bounding box above but wasn't using it to set
            // the size for the rigid body. This now does.
            btVector3 HalfExtents(size.x*0.5f, size.y*0.5f, size.z*0.5f);
            btCollisionShape *Shape = new btBoxShape(HalfExtents);

            // Add Mass
            btVector3 LocalInertia;
            Shape->calculateLocalInertia(TMass, LocalInertia);

            // Create the rigid body object
            btRigidBody *RigidBody = new btRigidBody(TMass, MotionState, Shape, LocalInertia);

            // Store a pointer to the Ogre Node so we can update it later
            RigidBody->setUserPointer((void *) (node));

            // Add it to the physics world
            World->addRigidBody(RigidBody);
            Objects.push_back(RigidBody);
            m_pNumEntities++;
        }

        void GameState::UpdatePhysics(unsigned int TDeltaTime)
        {
            World->stepSimulation(TDeltaTime * 0.001f, 60);

            btRigidBody *TObject;
            for(std::vector<btRigidBody *>::iterator it = Objects.begin(); it != Objects.end(); ++it)
            {
                // Update renderer
                Ogre::SceneNode *node = static_cast<Ogre::SceneNode *>((*it)->getUserPointer());
                TObject = *it;

                // Set position
                btVector3 Point = TObject->getCenterOfMassPosition();
                node->setPosition(Ogre::Vector3((float)Point[0], (float)Point[1], (float)Point[2]));

                // Convert the Bullet quaternion to an Ogre quaternion
                btQuaternion btq = TObject->getOrientation();
                Ogre::Quaternion quart = Ogre::Quaternion(btq.w(), btq.x(), btq.y(), btq.z());

                // Use the quaternion with setOrientation
                node->setOrientation(quart);
            }
        }

    The QuaternionToEuler function isn't needed, so that was removed from the code and header files. The objects now collide with the ground and each other appropriately.


  • Essential roles for web application team

    - by jromero
    Some friends of mine came up with an idea for a web application which we (so far) think could be great. I did the analysis and all the early stages of the development process, and I'm about to start coding. This is barely a mid-level project, so I consider one developer (myself) to be enough. The thing is, we are trying to assign roles to each of us so we can focus on our duties and be clear about our responsibilities within the team. We are a crew of four people: three of us (my friends) are business people who will do the marketing, customer relationship, management and accounting work, and I'm basically the developer. I have in mind to get them involved in the development process by giving them documentation to write and using them as testers, besides the management duties they already have. Perhaps someone out there has been in the same situation; I would appreciate it if you shared that experience, so we can effectively assign ourselves positions in the project based on what I explained above. Which are the essential roles, or the optimal team layout, so the idea can be developed successfully? The question is not strictly about programming, but it's about building a software venture beyond the code, which is something I'm sure plenty of us are looking for. Any help is really appreciated! Regards.


  • CodePlex Daily Summary for Wednesday, October 03, 2012

    CodePlex Daily Summary for Wednesday, October 03, 2012

    Popular Releases

    SharePoint Column & View Permission: SharePoint Column and View Permission v1.5: Version 1.5 of this project. If you find any bugs please let me know at enti@zoznam.sk or post your findings in the Issue Tracker.
    Z3: Z3 4.1.1 source code: Snapshot corresponding to version 4.1.1.
    DirectX Tool Kit: October 2012: October 2, 2012. Added ScreenGrab module; added CreateGeoSphere for drawing a geodesic sphere; put DDSTextureLoader and WICTextureLoader into the DirectX C++ namespace; renamed project files for better naming consistency; updated WICTextureLoader for Windows 8 96bpp floating-point formats; Win32 desktop projects updated to use Windows Vista (0x0600) rather than Windows 7 (0x0601) APIs; tweaked SpriteBatch.cpp to work around an ARM NEON compiler codegen bug.
    Home Access Plus+: v8.1: HAP+ Web v8.1.1003.000. 79318 Fixed: issue with the Help Desk and updating a ticket as an admin. 79319 Fixed: formatting issue with the booking system admin header. 79321 Moved to using the arrow-with-a-circle symbol on the homepage instead of > and <. 79541 Added: 480px-wide mobile theme for the login page. 79541 Added: 480px-wide mobile theme for the home page. 79541 Added: slide events for the homepage. 79553 Fixed: booking system multiple-lesson bug. 79553 Fixed: IE error message. 79684 Fixed: jQuery issue...
    System.Net.FtpClient: System.Net.FtpClient 2012.10.02: This is the first release of the new code base. It is not compatible with the old API; I repeat, it is not a drop-in update for projects currently using System.Net.FtpClient. New users should download this release. The old code base (branch: System.Net.FtpClient_1) will continue to be supported while the new code matures. This release is a complete re-write of System.Net.FtpClient. The API and code are simpler than ever before. There are some new features included as well as an attempt at be...
    CRM 2011 Visual Ribbon Editor: Visual Ribbon Editor (1.3.1002.3): What's new: multi-language support for labels/tooltips for custom buttons and groups; support for a base language other than English (1033); the Connect dialog will not require an organization name for ADFS/IFD connections; automatic creation of missing labels for all provisioned languages; minor connection issues fixed. Notes: before saving the ribbon to the CRM server, the editor will check the Ribbon XML for any missing <Title> elements inside existing <LocLabel> elements...
    YAXLib: Yet Another XML Serialization Library for the .NET Framework: YAXLib 2.10: See the change-log for the list of new features added and bugs fixed.
    RenameApp: RenameApp 1.0: First release of RenameApp.
    JsonToStaticTypeGenerator: JsonToStaticTypeGenerator 0.1: This is the first alpha release of JsonToStaticTypeGenerator.
    XiaoKyun: XiaoKyun V1.00: https://xiaokyun.codeplex.com/
    CatchThatException: Release 1.12: Wow, a very fast change and a much better and faster writing to the text file.
    Naked Objects: Naked Objects Release 5.0.0: Corresponds to the packaged version 5.0.0 available via NuGet. Please note that the easiest way to install and run the Naked Objects Framework is via the NuGet package manager: just search the official NuGet package source for 'nakedobjects'. It is only necessary to download the source code (from here) if you wish to modify or re-build the framework yourself. If you do wish to re-build the framework, consult the file HowToBuild.txt in the release. Major enhancements: Naked Objects 5.0 is desi...
    WinRT XAML Toolkit: WinRT XAML Toolkit - 1.3.0: WinRT XAML Toolkit based on the Windows 8 RTM SDK. Download the latest source from the SOURCE CODE page. For the compiled version use NuGet: you can add it to your project in Visual Studio by going to View/Other Windows/Package Manager Console and entering: PM> Install-Package winrtxamltoolkit. Features: AsyncUI extensions, controls and control extensions, converters, debugging helpers, imaging, IO helpers, VisualTree helpers, samples. Recent changes NOTE: namespace changes. DebugConsol...
    D3 Loot Tracker: 1.4.1: This version will automatically save a recording session on application exit if the user didn't stop the current session.
    SubExtractor: Release 1029: Feature: added option to make i and ¡ characters movie-specific for improved OCR on Spanish subs (Special Characters tab in Options). Feature: allow switching to the Word Spacing dialog directly from the Spell Check dialog. Fix: added more default word spacings for accented characters. Fix: changed the Word Spacing dialog to show all OCR'd characters in the current sub. Fix: removed application focus grab during OCR. Fix: tightened HD subs fuzzy logic to reduce false matches in small characters. Fix: improved arrow k...
    MCEBuddy 2.x: MCEBuddy 2.2.18: Recommended download. Changelog for 2.2.18 (32-bit and 64-bit): 1. Added support for checking whether ShowAnalyzer has hung and cancelling it. 2. New version of Comskip, 0.81.48. 3. Sped up Comskip. 4. Fixed a build bug in 64-bit 2.2.17. 5. Added a new comskip.ini with better, less aggressive commercial detection for international channels; the old one has been retained as comskip_old.ini. 6. Added support for Audio Offset on the Conversion Task page in the GUI (this overrides the profile's AudioDelay when specified).
    Readable Passphrase Generator: KeePass Plugin 0.7.1: See the KeePass Plugin Step By Step Guide for instructions on how to install the plugin. Changes: built against KeePass 2.20.
    Windows 8 Toolkit - Charts and More: Beta 1.0: The first compiled version of my library.
    PDF.NET: PDF.NET.Ver4.5-OpenSourceCode: PDF.NET Ver4.5 open source code, including a sample Web application project.
    Visual Studio Icon Patcher: Version 1.5.2: This version contains no new images from v1.5.1. It contains the following improvements: better support for detecting the installed languages; the extract & inject commands won't run if Visual Studio is running; you may now run in extract or inject mode; the p/invoke code was cleaned up based on Code Analysis recommendations; when a p/invoke method fails, the Win32 error message is now displayed; error messages use red text; status messages use green text.

    New Projects

    .Net Exception Reporter: A reusable and extensible exception reporter for Microsoft .NET projects.
    Aesha Broker: A rich client Auction House broker application. Built upon Blizzard's new REST API. Provides a client experience which caches historical auction data to provide…
    ASP.NET Friendly URLs: A library that enables automatic resolving of extensionless URLs to ASP.NET file-based handlers, e.g. ASPX pages.
    Astro Power CMS: Astro Power CMS builds on GraffitiCMS, a product of Telligent. GraffitiCMS stopped development; I created this project under the name Astro Power CMS.
    aTester: Here is a good place. And now, I can upload my source to it. It's very good.
    Automacao Residencial: (translated from Portuguese) The Netduino is a platform where you use the C# language to control hardware. The goal is to create a communication framework for the Netduino.
    Derbster: Explore and learn about modern C# architecture and programming by implementing software to support the modern game of roller derby.
    Dot FPE - A free Format-preserving encryption implementation for .net: There aren't any widely available implementations of format-preserving encryption in .NET. Thus we aim to be the first!
    DotNetEx: .NET Framework extended functionality for data access, working with Tasks and asynchronous programming, encryption algorithms such as Skipjack, and other stuff.
    Elemental Development Toolchain (.NET version): A complete toolchain built around the Æthere language.
    elFinder ASP.NET Connector: The one and only .NET connector for the amazing elFinder 2.x web-based file manager. Finally you can manage your files easily right from your browser!
    Geosynkronisering: (translated from Norwegian) A project to draw up interface specifications that enable synchronization of data stores with geographic content across different platforms.
    GIII_P1: (translated from Polish) If everyone doubted you, show them they were wrong!
    IntroduceCompany: (translated from Vietnamese) A website for introducing a business or company.
    JsonToStaticTypeGenerator: This is the JsonToStaticTypeGenerator project, which makes it possible to generate C# classes out of JSON data.
    kwerty: Coming soon.
    Machine Learning: My machine learning project. Just to figure out things...
    MicroManager: A MaNGOS web-based manager.
    MvcContrib3: This is the version of MvcContrib which works with ASP.NET MVC 3.
    Oracle Destination via ODP.Net (Custom Destination Component): SSIS 2008 R2 solution (custom destination component) to write to Oracle via ODP.Net.
    Orchard Commerce History with PayPal: Expands on the Nwazet.Commerce module (which is required for this module to work). Adds a purchase history, product role associations, and PayPal.
    Phoenix Trans: (translated from Vietnamese) Website for Phoenix Trans, a domestic and international freight transport company.
    PowerState: PowerState is a .NET application for sending Wake-On-LAN (WOL) requests to computers. It can also shut down, log off and reboot computers using WMI.
    RenameApp: RenameApp is a free and very simple to use renaming software for Windows. RenameApp allows you to easily rename files based on the specified criteria and order.
    Rose-Hulman User Experience Design: This project will contain labs intended for use in Rose-Hulman's Computer Science and Software Engineering department.
    Server dự phòng: (translated from Vietnamese) This is a backup server.
    SharePoint BCS External Connector Caching Pattern Library: Library for enabling caching on SharePoint BCS external connectors. Enables BCS .NET assemblies to be written that are scalable and performant for search.
    SharpDX.WPF: This project provides DirectX 9, DirectX 10 and DirectX 11 support for WPF. The assembly contains DXElement, an easy to use WPF FrameworkElement.
    Simple Password Generator Library: The password generator library, written in C#, is a simple assembly which allows generation of passwords with a length anywhere from 1-99.
    SisEagle.NET: (translated from Portuguese) This system was developed for the presentation of a final-year project (TCC) for 2012 at UDF, Brasília.
    SWebshop: SWebshop is a PHP-based webshop system which allows you to insert, edit and delete data easily and is easy to use for customers.
    Tabular Database Powershell Cmdlets: This project provides a sample of PowerShell cmdlets to manage Tabular models, from Analysis Services.
    University timetable using java: The project uses the Java language to create a timetable (a full timetable with exam tables and lab tables); it will be free for all users, with a SQL database.
    URLShoter: This project is for shortening URLs for ASP.NET.
    Web Input Form Control: This control allows developers to create input forms by configuring the control in HTML mode.
    Weibo: rt
    WorkoutMemo: Project description (first draft): memorise your workout. Keep archive records of your daily training, such as series of exercises and the quantity of each series…
    WPF - Automate Acrobat Security Policy: This WPF tool was created to quickly password-protect batches of PDF documents, using a random generator to generate the passwords.
    XiaoKyun: Hello Page for Web.
    Z3: Z3 is a high-performance theorem prover being developed at Microsoft Research.


  • Oracle Spatial User Conference, Directions, and the US Census

    - by stephen.garth
    This year's Oracle Spatial User Conference should be a winner, featuring new workshops and case studies presented by Oracle Spatial customers on applications as diverse as natural resource management, gold mining, the growing of wine grapes, and the United States Census. This podcast by Directions Media, official media sponsor of the Oracle Spatial User Conference, provides a glimpse of what's in store at the conference. In the podcast, Directions interviewed senior cartographers from the US Census Bureau to explore the enormous challenges of database management, mapping and spatial analysis associated with the 2010 US Census. The Oracle Spatial User Conference is in Phoenix, AZ on April 29, held in conjunction with the GITA Geospatial Infrastructure Solutions Conference.

    Register for the Oracle Spatial User Conference
    Listen to the Directions podcast on the 2010 US Census
    Find out more about Oracle Spatial


  • What is the best way to build a database from a MS Word document?

    - by Jayron Soares
    Please advise me on how to approach this problem: I have a sequential list of metadata in an MS Word document. The basic idea is to create a Python algorithm to iterate over the information, retrieving just the name of the PROCESS when a query is made against a database. Example metadata:

    Process: Process Walker (1965)
    Exact reference: Walker Process Equipment., Inc. v. Food Machinery Corp.
    Link: http://caselaw.lp.findlaw.com/scripts/getcase.pl?court=US&vol=382&invol=
    Type of procedure: Certiorari to the United States Court of Appeals for the Seventh Circuit.
    Parties: Walker Process Equipment, Inc.
    Sector: Systems is ...
    Start Date: October 12-13 Arguedas, 1965
    Summary: Food Machinery Company has initiated a process to stop or slow the entry of competitors through the use of a patent obtained by fraud. The case concerned a patent on "knee action swing diffusers" used in aeration equipment for sewage treatment systems, and the question was whether "the maintenance and enforcement of a patent obtained by fraud before the patent office" may be a basis for antitrust punishment.
    Report of the evolution process: petitioner, in answer to respond...
    Importance: a) First case which established an analysis for the diagnosis of dispute…

    There are about 200 pages containing the information above. I have in mind to implement an algorithm in Python that can break this sequence of information apart and store it in a web database (an open source application that I’m looking for) in order to allow free consultation.


  • Security exception in Twitterizer

    - by Raghu
    Hi, we are using Twitterizer for Twitter integration to get Tweet details. When making a call to the method OAuthUtility.GetRequestToken, the following exception is thrown: System.Security.SecurityException: Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed. When the application is hosted on IIS 5 the application works fine; the error above occurs only when the application is hosted in IIS 7 on Windows 2008 R2, where the method OAuthUtility.GetRequestToken throws the exception. It seems the issue is something to do with code access security. Please suggest what kind of permissions should be granted to fix the security exception. The application has Full Trust, and I have even tried registering the Twitterizer DLL in the GAC, and still the same error occurs. I am not sure what makes the difference between IIS 5 and IIS 7 with regards to code access security to cause that exception. Following is the stack trace of the exception:

    [SecurityException: Request for the permission of type 'System.Net.WebPermission, System, Version=2.0.0.0, Culture=neutral, PublicKeyToken=b77a5c561934e089' failed.]
       System.Security.CodeAccessSecurityEngine.Check(Object demand, StackCrawlMark& stackMark, Boolean isPermSet) +0
       System.Security.CodeAccessPermission.Demand() +54
       Twitterizer.OAuthUtility.ExecuteRequest(String baseUrl, Dictionary`2 parameters, HTTPVerb verb, String consumerKey, String consumerSecret, String token, String tokenSecret, WebProxy proxy) +224
       Twitterizer.OAuthUtility.GetRequestToken(String consumerKey, String consumerSecret, String callbackAddress, WebProxy proxy) +238
       Twitter._Default.btnSubmit_Click(Object sender, EventArgs e) +94
       System.Web.UI.WebControls.Button.OnClick(EventArgs e) +115
       System.Web.UI.WebControls.Button.RaisePostBackEvent(String eventArgument) +140
       System.Web.UI.Page.RaisePostBackEvent(IPostBackEventHandler sourceControl, String eventArgument) +29
       System.Web.UI.Page.ProcessRequestMain(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11045655
       System.Web.UI.Page.ProcessRequest(Boolean includeStagesBeforeAsyncPoint, Boolean includeStagesAfterAsyncPoint) +11045194
       System.Web.UI.Page.ProcessRequest() +91
       System.Web.UI.Page.ProcessRequest(HttpContext context) +240
       ASP.authorization_aspx.ProcessRequest(HttpContext context) in c:\Windows\Microsoft.NET\Framework64\v2.0.50727\Temporary ASP.NET Files\twitter\c2fd5853\dcb96ae9\App_Web_y_ada-ix.0.cs:0
       System.Web.CallHandlerExecutionStep.System.Web.HttpApplication.IExecutionStep.Execute() +599
       System.Web.HttpApplication.ExecuteStep(IExecutionStep step, Boolean& completedSynchronously) +171

    Any help would be greatly appreciated. Thanks in advance. Regards, Raghu
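    (Not from the original question, but a small diagnostic that can narrow this down: demanding the same WebPermission directly from code running on the IIS 7 machine shows whether the application pool really grants Full Trust. The target URL here is illustrative.)

        using System;
        using System.Net;
        using System.Security;

        public static class WebPermissionProbe
        {
            // Call this from a page running on the IIS 7 box; if it returns the
            // failure message, the app is not actually running with Full Trust.
            public static string Probe()
            {
                try
                {
                    new WebPermission(NetworkAccess.Connect, "https://api.twitter.com/oauth/request_token").Demand();
                    return "WebPermission granted";
                }
                catch (SecurityException ex)
                {
                    return "WebPermission denied: " + ex.Message;
                }
            }
        }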


  • Organizing an entity system with external component managers?

    - by Gustav
    I'm designing a game engine for a top-down multiplayer 2D shooter game, which I want to be reasonably reusable for other top-down shooter games. At the moment I'm thinking about how an entity system in it should be designed. First I thought about this:

    I have a class called EntityManager. It should implement a method called Update and another called Draw. The reason for separating logic and rendering is that I can then omit the Draw method when running a standalone server. EntityManager owns a list of objects of type BaseEntity. Each entity owns a list of components such as EntityModel (the drawable representation of an entity), EntityNetworkInterface, and EntityPhysicalBody.

    EntityManager also owns a list of component managers like EntityRenderManager, EntityNetworkManager and EntityPhysicsManager. Each component manager keeps references to the entity components. There are various reasons for moving this code out of the entity's own class and doing it collectively instead. For example, I'm using an external physics library, Box2D, for the game. In Box2D, you first add the bodies and shapes to a world (owned by EntityPhysicsManager in this case) and add collision callbacks (which would be dispatched to the entity object itself in my system). Then you run a function which simulates everything in the system. I find it hard to come up with a better solution than doing this in an external component manager like this.

    Entity creation is done like this: EntityManager implements the method RegisterEntity(entityClass, factory), which registers how to create an entity of that class. It also implements the method CreateEntity(entityClass), which returns an object of type BaseEntity.

    Now comes my problem: how would the reference to a component be registered with the component managers? I have no idea how I would reference the component managers from a factory/closure.
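    For what it's worth, here is one minimal sketch of how the factory/manager wiring could look. All names (IComponentManager, AddManager, and so on) are hypothetical: the idea is simply that the factory never talks to the managers itself; CreateEntity hands each finished component to the manager registered for its type.

        using System;
        using System.Collections.Generic;

        interface IComponent { }

        class EntityPhysicalBody : IComponent { }

        interface IComponentManager
        {
            Type ComponentType { get; }
            void Register(IComponent component);
        }

        class EntityPhysicsManager : IComponentManager
        {
            public Type ComponentType { get { return typeof(EntityPhysicalBody); } }

            public void Register(IComponent component)
            {
                // This is where the Box2D body would be added to the physics world.
                Console.WriteLine("physics body registered");
            }
        }

        class BaseEntity
        {
            public List<IComponent> Components = new List<IComponent>();
        }

        class EntityManager
        {
            Dictionary<string, Func<EntityManager, BaseEntity>> factories =
                new Dictionary<string, Func<EntityManager, BaseEntity>>();
            Dictionary<Type, IComponentManager> managers =
                new Dictionary<Type, IComponentManager>();

            public void AddManager(IComponentManager manager)
            {
                managers[manager.ComponentType] = manager;
            }

            public void RegisterEntity(string entityClass, Func<EntityManager, BaseEntity> factory)
            {
                factories[entityClass] = factory;
            }

            // CreateEntity builds the entity via its factory, then registers each
            // component with the manager that owns that component type.
            public BaseEntity CreateEntity(string entityClass)
            {
                BaseEntity entity = factories[entityClass](this);
                foreach (IComponent component in entity.Components)
                {
                    IComponentManager manager;
                    if (managers.TryGetValue(component.GetType(), out manager))
                        manager.Register(component);
                }
                return entity;
            }
        }

        class Program
        {
            static void Main()
            {
                var entityManager = new EntityManager();
                entityManager.AddManager(new EntityPhysicsManager());
                entityManager.RegisterEntity("crate", em =>
                {
                    var e = new BaseEntity();
                    e.Components.Add(new EntityPhysicalBody());
                    return e;
                });
                entityManager.CreateEntity("crate"); // prints "physics body registered"
            }
        }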


  • Community Events and Workshops in November 2012 #ssas #tabular #powerpivot

    - by Marco Russo (SQLBI)
    I and Alberto have a busy agenda until the end of the month, but if you are based in Northern Europe there are many chances to meet one of us in the next couple of weeks!

    Belgium, 20 November 2012 – SQL Server Days 2012 with Marco Russo: I will present two sessions in this conference, “Data Modeling for Tabular” and “Querying and Optimizing DAX”.
    Copenhagen, 21-22 November 2012 – SSAS Tabular Workshop with Alberto Ferrari: Alberto will be the speaker for 2 days – you can still register if you want a full immersion!
    Copenhagen, 21 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft Hellerup): In the evening Alberto will present “Excel 2013 PowerPivot in Action”.
    Munich, 27-28 November 2012 – SSAS Tabular Workshop with Alberto Ferrari: The SSAS workshop will also run in Germany, this time in Munich. Here too there are still some seats available.
    Munich, 27 November 2012 – Free Community Event with Alberto Ferrari (hosted at Microsoft): In the evening Alberto will present “Excel 2013 PowerPivot in Action”.
    Moscow, 27-28 November 2012 – TechEd Russia 2012 with Marco Russo: I will speak during the keynote on November 27 and I will present two sessions the day after, “Developing an Analysis Services Tabular Project BI Semantic Model” and “Excel 2013 PowerPivot in Action”.
    Stockholm, 29-30 November 2012 – SSAS Tabular Workshop with Marco Russo: I will run this workshop in Stockholm – if you want to register, hurry up! Few seats still available!
    Stockholm, 29 November 2012 – Free Community Event (sold out!) with Marco Russo: In the evening I will present “Excel 2013 PowerPivot in Action”.

    If you want to attend a SSAS Tabular Workshop online, you can also register for the online edition of December 5-6, 2012, which is still in early bird and is scheduled in a friendly time zone for the Americas (which could be good for Europe too, in case you don’t mind attending a workshop until midnight!).


  • Where's my MD.070?

    - by Dave Burke
    In a previous Blog entry titled “Where’s My MD.050” I discussed how the OUM Analysis Specification is the “new and improved” version of the more traditional Functional Design Document (or MD.050 for Oracle AIM stalwarts). In a similar way, the OUM Design Specification is an evolution of what we used to call the Technical Design Document (or MD.070). Let’s dig a little deeper… In a traditional software development process, the “Design Task” would include all the time and resources required to design the software component(s), AND to create the final Technical Design Document. In OUM, however, we have created distinct Tasks for pure design work, along with an optional Task for pulling all of that work together into a Design Specification. Some of the Design Tasks shown above result in their own Work Products (i.e. an Architecture Description), whilst other Tasks act as “placeholders” for a specific work effort. In any event, the DS.140 Design Specification can include a combination of unique content along with links to other Work Products, which together enable a complete technical description of the component, or solution, being designed. So next time someone asks “where’s my MD.070”, the short answer would be to tell them to read the OUM Task description for DS.140 – Design Specification!


  • Performance Optimization – It Is Faster When You Can Measure It

    - by Alois Kraus
    Performance optimization in bigger systems is hard because the measured numbers can vary greatly depending on the measurement method of your choice. To measure the execution timing of specific methods in your application you usually use one of these:

    Time Measurement Method – Potential Pitfalls

    Stopwatch – The most accurate method on recent processors. Internally it uses the RDTSC instruction. Since the counter is processor specific, you can get greatly different values when your thread is scheduled to another core or the core goes into a power saving mode. But things do change, luckily. Intel's Designer's vol3b, section 16.11.1: "Invariant TSC: The time stamp counter in newer processors may support an enhancement, referred to as invariant TSC. Processor's support for invariant TSC is indicated by CPUID.80000007H:EDX[8]. The invariant TSC will run at a constant rate in all ACPI P-, C-, and T-states. This is the architectural behavior moving forward. On processors with invariant TSC support, the OS may use the TSC for wall clock timer services (instead of ACPI or HPET timers). TSC reads are much more efficient and do not incur the overhead associated with a ring transition or access to a platform resource."

    DateTime.Now – Good, but it has only a resolution of 16ms, which can be not enough if you want more accuracy.

    Reporting Method – Potential Pitfalls

    Console.WriteLine – OK if not called too often.

    Debug.Print – Are you really measuring performance with Debug builds? Shame on you.

    Trace.WriteLine – Better, but you need to plug in some good output listener like a trace file. Be aware that the first time you call this method it will read your app.config and deserialize your system.diagnostics section, which also takes time.

    In general it is a good idea to use some tracing library which measures the timing for you, so you only need to decorate some methods with tracing and can later verify whether something has changed for the better or worse. In my previous article I compared measuring performance with quantum mechanics. This analogy works surprisingly well. When you measure a quantum system there is a lower limit to how accurately you can measure something. The Heisenberg uncertainty relation tells us that you cannot measure a quantum particle's impulse and location at the same time with infinite accuracy. For programmers the two variables are execution time and memory allocations. If you try to measure the timings of all methods in your application you will need to store them somewhere. The fastest storage space besides the CPU cache is the memory. But if your timing values consume all available memory, there is no memory left for the actual application to run. On the other hand, if you try to record all memory allocations of your application you will also need to store the data somewhere, which will cost you memory and execution time. These constraints are always there, and regardless of how good the marketing of tool vendors for performance and memory profilers is: any measurement will disturb the system in a non-predictable way. Commercial tool vendors will tell you they calculate this overhead and subtract it from the measured values to give you the most accurate numbers, but in reality it is not entirely true. After falling into the trap of trusting the profiler timings several times, I have got into the habit of:

    1. Measure with a profiler to get an idea where potential bottlenecks are.
    2. Measure again, tracing only the specific methods, to check if a method is really worth optimizing.
    3. Optimize it.
    4. Measure again.
    5. Be surprised that your optimization has made things worse.
    6. Think harder.
    7. Implement something that really works.
    8. Measure again.
    9. Finished! Or look for the next bottleneck.

    Recently I have looked into issues with serialization performance. For serialization DataContractSerializer was used, and I was not sure if XML is really the most optimal wire format. After looking around I found protobuf-net, which uses Google's Protocol Buffers format, a compact binary serialization format. What is good for Google should be good for us. A small sample app to check out performance was a matter of minutes:

        using ProtoBuf;
        using System;
        using System.Diagnostics;
        using System.IO;
        using System.Reflection;
        using System.Runtime.Serialization;

        [DataContract, Serializable]
        class Data
        {
            [DataMember(Order = 1)] public int IntValue { get; set; }
            [DataMember(Order = 2)] public string StringValue { get; set; }
            [DataMember(Order = 3)] public bool IsActivated { get; set; }
            [DataMember(Order = 4)] public BindingFlags Flags { get; set; }
        }

        class Program
        {
            static MemoryStream _Stream = new MemoryStream();
            static MemoryStream Stream
            {
                get
                {
                    _Stream.Position = 0;
                    _Stream.SetLength(0);
                    return _Stream;
                }
            }

            static void Main(string[] args)
            {
                DataContractSerializer ser = new DataContractSerializer(typeof(Data));
                Data data = new Data
                {
                    IntValue = 100,
                    IsActivated = true,
                    StringValue = "Hi this is a small string value to check if serialization does work as expected"
                };

                var sw = Stopwatch.StartNew();
                int Runs = 1000 * 1000;
                for (int i = 0; i < Runs; i++)
                {
                    //ser.WriteObject(Stream, data);
                    Serializer.Serialize<Data>(Stream, data);
                }
                sw.Stop();
                Console.WriteLine("Did take {0:N0}ms for {1:N0} objects", sw.Elapsed.TotalMilliseconds, Runs);
                Console.ReadLine();
            }
        }

    The results are indeed promising:

        Serializer      Time in ms    N objects
        protobuf-net    807           1000000
        DataContract    4402          1000000

    Nearly a factor 5 faster and a much more compact wire format. Let's use it! After switching over to protobuf-net, the transferred wire data dropped by a factor of two (good) and the performance worsened by nearly a factor of two. How is that possible? We measured it! Protobuf-net is much faster! As it turns out, protobuf-net is faster, but it has a cost: the first time a type is de/serialized it uses some very smart code-gen, which does not come for free. Let's try to measure this by setting the Runs value of our performance test app not to one million but to 1.

        Serializer      Time in ms    N objects
        protobuf-net    85            1
        DataContract    24            1

    The code-gen overhead is significant and can take up to 200ms for more complex types. The break-even point, where the code-gen cost is amortized by its faster serialization performance, is (assuming small objects) somewhere between 20,000-40,000 serialized objects. As it turned out, my specific scenario involved about 100 types and 1000 serializations in total. That explains why the good old DataContractSerializer is not so easy to take out of business. The final approach I ended up with was to reduce the number of types and to serialize primitive types via BinaryWriter directly, which turned out to be a pretty good alternative. It sounded good until I measured again and found that my optimizations so far did not help much. After looking deeper at the profiling data I found that one of the 1000 calls took 50% of the time. So how do I find out which call it was?
    Normal profilers fall short at this discipline. A relatively unknown (and totally undeservedly so) profiler is SpeedTrace, which, unlike normal profilers, creates traces of your application by instrumenting your IL code at runtime. This way you can look at the full call stack of the one slow serializer call to find out whether that stack was something special. Unfortunately the call stack showed nothing special. But luckily I have my own tracing as well, and I could see that the slow serializer call happened during the serialization of a bool value. When, after much analysis, you encounter something unreasonable you cannot explain, the chances are good that your thread was suspended by the garbage collector. Whether there is a problem with excessive GCs remains to be investigated, but so far the serialization performance seems to be mostly OK. When you profile a complex system with many interconnected processes, you can never be sure that the timings you just measured are accurate at all. Some process might be hitting the disc, slowing things down for all other processes for some seconds as well. There is a big difference between warm and cold startup. If you restart all processes you can basically forget the first run, because the OS disc cache, JIT and GCs make the measured timings very flexible. When you are in need of a random number generator you should measure cold startup times of a sufficiently complex system. After the first run you can try again, getting different and much lower numbers. Now try again at least two more times to get some feeling for how stable the numbers are. Oh, and try to do the same thing the next day. It might be that the bottleneck you found yesterday is gone today. Thanks to GC and other random stuff, it can become pretty hard to find things worth optimizing if no big bottlenecks except bloatloads of code are left anymore. When I have found a spot worth optimizing, I make the code changes and measure again to check whether something has changed. If it has got slower, and I am certain that my change should have made it faster, I can blame the GC again. The thing is that if you optimize stuff and you allocate fewer objects, the GC times will shift to some other location. If you are unlucky, it will make your faster working code slower because you now see GCs at times where none were before. This is where the stuff gets really tricky. A safe escape hatch is to create a repro of the slow code in an isolated application, so you can change things fast in a reliable manner. Then the normal profilers also start working again. As Vance Morrison points out, it is much more complex to profile a system against the wall clock than to optimize for CPU time. The reason is that for wall clock time analysis you need to understand how your system works and which threads (if you have not one but perhaps 20) are causing a visible delay to the end user, and which threads can wait a long time without affecting the user experience at all. Next time: commercial profiler shootout.
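    A side note on the code-gen cost measured above: if the first-call hit matters, protobuf-net can be asked to pay it up front at startup rather than on the first serialization. A minimal sketch, assuming the protobuf-net v2 API and the Data type from the sample:

        using ProtoBuf;

        static class SerializerWarmup
        {
            // Pays protobuf-net's one-time code-gen cost at startup instead of on
            // the first Serialize<Data> call.
            public static void Warmup()
            {
                Serializer.PrepareSerializer<Data>();
            }
        }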

    Read the article

  • DocumentDB - Another Azure NoSQL Storage Service

    - by Shaun
Originally posted on: http://geekswithblogs.net/shaunxu/archive/2014/08/25/documentdb---another-azure-nosql-storage-service.aspx Microsoft just released a bunch of new features for Azure on the 22nd, and the one I was most interested in is DocumentDB, a document NoSQL database service in the cloud.   Quick Look at DocumentDB We can try DocumentDB from the new Azure preview portal. Just click the NEW button and select the item named DocumentDB to create a new account. Specify the name of the DocumentDB, which will be the endpoint we are going to use to connect later. Select the capacity unit, resource group and subscription. In the resource group section we can select which region our DocumentDB will be located in. As with other Azure services, select the same location as the consumers of the DocumentDB, for example the website, web services, etc. After several minutes the DocumentDB will be ready. Click the KEYS button and we can find the URI and primary key, which will be used when connecting. Now let's open Visual Studio and try to use the DocumentDB we have just created. Create a new console application and install the DocumentDB .NET client library from NuGet with the keyword "DocumentDB". You need to select "Include Prerelease" in the NuGet Package Manager window since this library has not yet been released. Next we will create a new database and document collection under our DocumentDB account. The code below creates an instance of DocumentClient with the URI and primary key we just copied from the Azure portal, and creates a database and collection. It also prints the database and collection link strings, which will be used later to insert and query documents.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);
    Run(client).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

static async Task Run(DocumentClient client)
{
    var database = new Database() { Id = "testdb" };
    database = await client.CreateDatabaseAsync(database);
    Console.WriteLine("database link = {0}", database.SelfLink);

    var collection = new DocumentCollection() { Id = "testcol" };
    collection = await client.CreateDocumentCollectionAsync(database.SelfLink, collection);
    Console.WriteLine("collection link = {0}", collection.SelfLink);
}

Below is the result from the console window. We need to copy the collection link string for future use. Now if we go back to the portal we will find the database listed with the name we specified in the code. Next we will insert a document into the database and collection we have just created. In the code below we paste the collection link copied in the previous step and create a dynamic object with several properties defined. As you can see, we can add normal properties containing strings and integers, and we can also add complex properties, for example an array, a dictionary or an object reference, as long as they can be serialized to JSON.
static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    // collection link pasted from the result in the previous demo
    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    // document we are going to insert into the database
    dynamic doc = new ExpandoObject();
    doc.firstName = "Shaun";
    doc.lastName = "Xu";
    doc.roles = new string[] { "developer", "trainer", "presenter", "father" };

    // insert the document
    InsertADoc(client, collectionLink, doc).Wait();

    Console.WriteLine("done");
    Console.ReadKey();
}

The insert code is very simple, as below; just provide the collection link and the object we are going to insert.

static async Task InsertADoc(DocumentClient client, string collectionLink, dynamic doc)
{
    var document = await client.CreateDocumentAsync(collectionLink, doc);
    Console.WriteLine(await JsonConvert.SerializeObjectAsync(document, Formatting.Indented));
}

Below is the result after the object has been inserted. Finally we will query the document from the database and collection. Similar to the insert code, we just need to specify the collection link so that the .NET SDK will help us retrieve all documents in it.

static void Main(string[] args)
{
    var endpoint = new Uri("https://shx.documents.azure.com:443/");
    var key = "LU2NoyS2fH0131TGxtBE4DW/CjHQBzAaUx/mbuJ1X77C4FWUG129wWk2oyS2odgkFO2Xdif9/ZddintQicF+lA==";

    var client = new DocumentClient(endpoint, key);

    var collectionLink = "dbs/AAk3AA==/colls/AAk3AP6oFgA=/";

    SelectDocs(client, collectionLink);

    Console.WriteLine("done");
    Console.ReadKey();
}

static void SelectDocs(DocumentClient client, string collectionLink)
{
    var docs = client.CreateDocumentQuery(collectionLink + "docs/").ToList();
    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}

Since there's only one document in my collection, below is the result when I executed the code. As you can see, all properties, including the array, were retrieved at the same time. DocumentDB also attached some properties we didn't specify, such as "_rid", "_ts", "_self" etc., which are controlled by the service.   DocumentDB Benefits DocumentDB is a document NoSQL database service. Different from a traditional database, a document database is truly schema-free. In a nutshell, you can save anything in the same database and collection as long as it can be serialized to JSON. When you query the document database, all sub-documents are retrieved at the same time. This means you don't need to join other tables as you would with a traditional database. A document database is very useful when we build high-performance systems with hierarchical data structures. For example, assume we need to build a blog system; there will be many blog posts and each of them contains the content and comments. A comment can be commented on as well. If we were using a traditional database, let's say SQL Server, the database schema might be defined as below (the schema diagram from the original post is omitted here). When we need to display a post we need to load the post content from the Posts table, as well as the comments from the Comments table. We also need to build the comment tree based on the CommentID field. But if we were using DocumentDB, all we need to do is save the post as a document with a list containing all comments.
Under a comment, all sub-comments are a list inside it. When we display this post we just need to query the post document; the content and all comments will be loaded in the proper structure.

{
  "id": "xxxxx-xxxxx-xxxxx-xxxxx",
  "title": "xxxxx",
  "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
  "postedOn": "08/25/2014 13:55",
  "comments":
  [
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:00",
      "commentedBy": "xxx"
    },
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:10",
      "commentedBy": "xxx",
      "comments":
      [
        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "commentedOn": "08/25/2014 14:18",
          "commentedBy": "xxx",
          "comments":
          [
            {
              "id": "xxxxx-xxxxx-xxxxx-xxxxx",
              "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
              "commentedOn": "08/25/2014 18:22",
              "commentedBy": "xxx"
            }
          ]
        },
        {
          "id": "xxxxx-xxxxx-xxxxx-xxxxx",
          "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
          "commentedOn": "08/25/2014 15:02",
          "commentedBy": "xxx"
        }
      ]
    },
    {
      "id": "xxxxx-xxxxx-xxxxx-xxxxx",
      "content": "xxxxx, xxxxxxxxx. xxxxxx, xx, xxxx.",
      "commentedOn": "08/25/2014 14:30",
      "commentedBy": "xxx"
    }
  ]
}

DocumentDB vs. Table Storage DocumentDB and Table Storage are both NoSQL services in Microsoft Azure. One common question is "when should we use DocumentDB rather than Table Storage?" Here are some ideas from me and some MVPs. First of all, they are different kinds of NoSQL database: DocumentDB is a document database while Table Storage is a key-value store. Second, Table Storage is cheaper. DocumentDB supports scaling out from one capacity unit to 5 in the preview period, and each capacity unit provides 10 GB of local SSD storage. The price is $0.73/day, which includes a 50% discount. For the storage service the highest price is $0.061/GB, which is almost 10% of the DocumentDB price. Third, Table Storage provides local replication, geo-replication and read-access geo-replication, while DocumentDB doesn't. Fourth, there is a local emulator for Table Storage but none for DocumentDB; we have to connect to the DocumentDB in the cloud when developing locally. But DocumentDB supports some cool features that Table Storage doesn't have. It supports stored procedures, triggers and user-defined functions. It supports rich indexing, while Table Storage only supports indexing against the partition key and row key. It supports transactions; Table Storage does as well, but restricted to the Entity Group Transaction scope. And last, Table Storage is GA while DocumentDB is still in preview.   Summary In this post I gave a quick demonstration and introduction of the new DocumentDB service in Azure. It's very easy to interact with through .NET, and it also supports a REST API, a Node.js SDK and a Python SDK. I then explained the concept and benefits of using a document database and compared it with Table Storage.   Hope this helps, Shaun All documents and related graphics, codes are provided "AS IS" without warranty of any kind. Copyright © Shaun Ziyan Xu. This work is licensed under the Creative Commons License.
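As a small addendum to the query example above: CreateDocumentQuery as shown pulls back every document in the collection. The preview SDK also accepts a SQL-like filter expression; the sketch below is based on the preview-era grammar, so treat the exact syntax as an assumption.

// Hedged sketch: filter on the server instead of fetching all documents.
static void SelectByFirstName(DocumentClient client, string collectionLink)
{
    var docs = client.CreateDocumentQuery(
        collectionLink + "docs/",
        "SELECT * FROM root r WHERE r.firstName = 'Shaun'").ToList();

    foreach (var doc in docs)
    {
        Console.WriteLine(doc);
    }
}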

    Read the article

  • The fastest way to resize images from ASP.NET. And it’s (more) supported-ish.

    - by Bertrand Le Roy
I’ve shown before how to resize images using GDI, which is fairly common but is explicitly unsupported because we know of very real problems that this can cause. Still, many sites use that method because those problems are fairly rare, and because most people assume it’s the only way to get the job done. Plus, it works in medium trust. More recently, I’ve shown how you can use WPF APIs to do the same thing and get JPEG thumbnails, only 2.5 times faster than GDI (even now that GDI really ultimately uses WIC to read and write images). The boost in performance is great, but it comes at a cost that you may or may not care about: it won’t work in medium trust. It’s also just as unsupported as the GDI option. What I want to show today is how to use the Windows Imaging Components from ASP.NET directly, without going through WPF. The approach has the great advantage that it’s been tested and proven to scale very well. The WIC team tells me you should be able to call support and get answers if you hit problems. Caveats exist though. First, this is using interop, so until a signed wrapper sits in the GAC, it will require full trust. Second, the APIs have a very strong smell of native code and are definitely not .NET-friendly. And finally, the most serious problem is that older versions of Windows don’t offer MTA support for image decoding. MTA support is only available on Windows 7, Vista and Windows Server 2008; on 2003 and XP, you’ll only get STA support. That means the thread safety we so badly need for server applications is not guaranteed on those operating systems. To make it work there, you’d have to spin up specialized threads yourself and manage the lifetime of your objects, which is outside the scope of this article. We’ll assume that we’re fine with all this and that we’re running on 7 or 2008 under full trust. Be warned that the code that follows is not simple or very readable. This is definitely not the easiest way to resize an image in .NET. Wrapping native APIs such as WIC in a managed wrapper is never easy, but fortunately we won’t have to: the WIC team already did it for us and released the results under MS-PL. The InteropServices folder, which contains the wrappers we need, is in the WicCop project, but I’ve also included it in the sample that you can download from the link at the end of the article. In order to produce a thumbnail, we first have to obtain a decoding frame object that WIC can use. Like with WPF, that object will contain the command to decode a frame from the source image but won’t do the actual decoding until necessary. Getting the frame is done by reading the image bytes through a special WIC stream that you can obtain from a factory object that we’re going to reuse for lots of other tasks:

var photo = File.ReadAllBytes(photoPath);
var factory = (IWICComponentFactory)new WICImagingFactory();
var inputStream = factory.CreateStream();
inputStream.InitializeFromMemory(photo, (uint)photo.Length);
var decoder = factory.CreateDecoderFromStream(
    inputStream, null,
    WICDecodeOptions.WICDecodeMetadataCacheOnLoad);
var frame = decoder.GetFrame(0);

We can read the dimensions of the frame using the following (somewhat ugly) code:

uint width, height;
frame.GetSize(out width, out height);

This enables us to compute the dimensions of the thumbnail, as I’ve shown in previous articles. We now need to prepare the output stream for the thumbnail.
WIC requires a special kind of stream, IStream (not implemented by System.IO.Stream), and doesn’t directly understand .NET streams. It does provide a number of implementations, but not exactly what we need here. We need to output to memory because we’ll want to persist the same bytes to the response stream and to a local file for caching. The memory-bound version of IStream requires a fixed-length buffer, but we won’t know the length of the buffer before we resize. To solve that problem, I’ve built a class derived from MemoryStream that also implements IStream. The implementation is not very complicated, it just delegates the IStream methods to the base class, but it involves some native pointer manipulation. Once we have a stream, we need to build the encoder for the output format, which could be anything that WIC supports. For web thumbnails, our only reasonable options are PNG and JPEG. I explored PNG because it’s a lossless format, and because WIC does support PNG compression. That compression is not very efficient though, and JPEG offers good quality with much smaller file sizes. On the web, it matters. I found the best PNG compression option (adaptive) to give files that are about twice as big as 100%-quality JPEG (an absurd setting), 4.5 times bigger than 95%-quality JPEG and 7 times larger than 85%-quality JPEG, which is more than acceptable quality. As a consequence, we’ll use JPEG. The JPEG encoder can be prepared as follows:

var encoder = factory.CreateEncoder(
    Consts.GUID_ContainerFormatJpeg, null);
encoder.Initialize(outputStream,
    WICBitmapEncoderCacheOption.WICBitmapEncoderNoCache);

The next operation is to create the output frame:

IWICBitmapFrameEncode outputFrame;
var arg = new IPropertyBag2[1];
encoder.CreateNewFrame(out outputFrame, arg);

Notice that we are passing in a property bag. This is where we’re going to specify our only encoding parameter, the JPEG quality setting:

var propBag = arg[0];
var propertyBagOption = new PROPBAG2[1];
propertyBagOption[0].pstrName = "ImageQuality";
propBag.Write(1, propertyBagOption, new object[] { 0.85F });
outputFrame.Initialize(propBag);

We can then set the resolution of the thumbnail to 96, something we weren’t able to do with WPF and had to hack around:

outputFrame.SetResolution(96, 96);

Next, we set the size of the output frame and create a scaler from the input frame and the computed dimensions of the target thumbnail:

outputFrame.SetSize(thumbWidth, thumbHeight);
var scaler = factory.CreateBitmapScaler();
scaler.Initialize(frame, thumbWidth, thumbHeight,
    WICBitmapInterpolationMode.WICBitmapInterpolationModeFant);

The scaler is using the Fant method, which I think is the best-looking one, even if it seems a little softer than cubic. (The original article compares zoomed samples of the Cubic, Fant, Linear and Nearest neighbor interpolation modes here.) We can write the source image to the output frame through the scaler:

outputFrame.WriteSource(scaler, new WICRect
{
    X = 0, Y = 0,
    Width = (int)thumbWidth,
    Height = (int)thumbHeight
});

And finally we commit the pipeline that we built and get the byte array for the thumbnail out of our memory stream:

outputFrame.Commit();
encoder.Commit();
var outputArray = outputStream.ToArray();
outputStream.Close();

That byte array can then be sent to the output stream and to the cache file. Once we’ve gone through this exercise, it’s only natural to wonder whether it was worth the trouble.
I ran this method, as well as GDI and WPF resizing, over thirty twelve-megapixel images for JPEG qualities between 70% and 100%, and measured the file size and the time to resize. (Two charts from the original article, "Size of resized images" and "Time to resize thirty 12 megapixel images", are omitted here.) Not much to see on the size graph: sizes from WPF and WIC are equivalent, which is hardly surprising as WPF calls into WIC. There is just an anomaly at 75% for WPF that I noted in my previous article and that disappears when using WIC directly. But overall, using WPF or WIC over GDI represents a slight win in file size. The time to resize is more interesting. WPF and WIC get similar times, although WIC seems to always be a little faster. Not surprising, considering WPF is using WIC. The margin of error on these results is probably fairly close to the time difference. As we already knew, the time to resize does not depend on the quality level; only the size does. This means that the only decision you have to make here is size versus visual quality. This third approach to server-side image resizing on ASP.NET seems to converge on the fastest possible one. We have marginally better performance than WPF, but with the additional peace of mind that this approach is sanctioned for server-side usage by the Windows Imaging team. It still doesn’t work in medium trust. That is a problem and shows the way for future server-friendly managed wrappers around WIC. The sample code for this article can be downloaded from: http://weblogs.asp.net/blogs/bleroy/Samples/WicResize.zip The benchmark code can be found here (you’ll need to add your own images to the Images directory and then add those to the project, with content and copy-if-newer in the properties of the files in the solution explorer): http://weblogs.asp.net/blogs/bleroy/Samples/WicWpfGdiImageResizeBenchmark.zip WIC tools can be downloaded from: http://code.msdn.microsoft.com/wictools The original article concludes with some of the resized thumbnails at 85% Fant.
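For reference, here is a minimal sketch of what the MemoryStream-derived IStream wrapper described in this article might look like. This is not the author's actual implementation (that ships with the sample download); the delegation to the base class and the raw pointer writes are what the article describes, everything else is an assumption.

using System;
using System.IO;
using System.Runtime.InteropServices;
using System.Runtime.InteropServices.ComTypes;

// Hedged sketch: a MemoryStream that WIC encoders can write to via IStream.
class ComMemoryStream : MemoryStream, IStream
{
    // Read/Write/Seek delegate to MemoryStream and report counts through
    // raw pointers - the "native pointer manipulation" the article mentions.
    void IStream.Read(byte[] pv, int cb, IntPtr pcbRead)
    {
        int read = Read(pv, 0, cb);
        if (pcbRead != IntPtr.Zero) Marshal.WriteInt32(pcbRead, read);
    }

    void IStream.Write(byte[] pv, int cb, IntPtr pcbWritten)
    {
        Write(pv, 0, cb);
        if (pcbWritten != IntPtr.Zero) Marshal.WriteInt32(pcbWritten, cb);
    }

    void IStream.Seek(long dlibMove, int dwOrigin, IntPtr plibNewPosition)
    {
        long pos = Seek(dlibMove, (SeekOrigin)dwOrigin); // STREAM_SEEK maps onto SeekOrigin
        if (plibNewPosition != IntPtr.Zero) Marshal.WriteInt64(plibNewPosition, pos);
    }

    void IStream.SetSize(long libNewSize) { SetLength(libNewSize); }
    void IStream.Commit(int grfCommitFlags) { Flush(); }

    void IStream.Stat(out STATSTG pstatstg, int grfStatFlag)
    {
        pstatstg = new STATSTG { cbSize = Length };
    }

    // Members a memory-backed encoder target does not need.
    void IStream.Clone(out IStream ppstm) { throw new NotSupportedException(); }
    void IStream.CopyTo(IStream pstm, long cb, IntPtr pcbRead, IntPtr pcbWritten) { throw new NotSupportedException(); }
    void IStream.LockRegion(long libOffset, long cb, int dwLockType) { throw new NotSupportedException(); }
    void IStream.UnlockRegion(long libOffset, long cb, int dwLockType) { throw new NotSupportedException(); }
    void IStream.Revert() { throw new NotSupportedException(); }
}

With a class like this, outputStream in the encoder code above can be both the IStream that WIC writes into and the MemoryStream whose ToArray feeds the response and the cache file.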

    Read the article

  • Silverlight Cream for June 17, 2010 -- #885

    - by Dave Campbell
    In this Issue: Zoltan Arvai, Antoni Dol, Jeff Prosise, David Anson, and John Papa. Shoutouts: Rob Davis has a World Cup Football Stadium tour in Silverlight, Azure, and Bing Maps up: The World Cup Map... cruise around this... tons of features. The Silverlight Team Blog reports that NBC sports is streaming the US Open in Silverlight Adam Kinney announced Expression Studio 4 Launch keynote videos are available From SilverlightCream.com: Data Driven Applications with MVVM Part III: Validation, Bringing the UI Closer Zoltan Arvai's 3rd (and final) part of the Data-Driven MVVM apps is up at SilverlightShow. In this final section he is focusing on validation, and discussion of closer integration to the view. Focus on FocusVisualElement in Silverlight buttons Antoni Dol has a cool post up about the FocusVisualElement, and uses a button to demonstrate how it can be used. Dynamic XAP Discovery with Silverlight MEF Jeff Prosise is discussing Silverlight and MEF ... but better than the normal loading XAP files ... he's doing dynamic discovery of XAP files ... and makes it look easy! Updated analysis of two ways to create a full-size Popup in Silverlight David Anson revisits a prior post with an eye toward Silverlight 4. The feature he's discussing is that you can now hook the Resized event without having browser zoom disabled... and he demonstrates it's use in the code from the old post. Silverlight as a Transmedia Platform (Silverlight TV #33) Jesse Liberty joins John Papa this week with Silverlight TV #33, discussing Transmedia and Silverlight as a Transmedia Platform. Stay in the 'Light! Twitter SilverlightNews | Twitter WynApse | WynApse.com | Tagged Posts | SilverlightCream Join me @ SilverlightCream | Phoenix Silverlight User Group Technorati Tags: Silverlight    Silverlight 3    Silverlight 4    Windows Phone MIX10

    Read the article

  • Autoscaling in a modern world&hellip;. Part 1

    - by Steve Loethen
It has been a while since I have had time to sit down and blog. I need to make sure I take the time. It helps me focus on technology and not let the administrivia keep me from doing the things I love. I have been focusing on the cloud for the last couple of years, specifically the PaaS platform from Microsoft called Azure. Time to dig in. I wanted to explore autoscaling. Autoscaling is not a native part of Azure, but the platform has the needed connection points: you can write code that looks at the health and performance of your application components and reacts with the needed scaling changes. That means, however, that you have to write all the code. Luckily, an add-on to the Enterprise Library provides a lot of code that gets you a long way toward being able to autoscale without having to start from scratch. The tool set is primarily composed of an Autoscaler object that you need to host. This object, when hosted and configured, looks at the performance criteria you specify and adjusts your application based on your needs. Sounds perfect. I started with a set of HOLs that gave me a good basis to understand the mechanics. I worked through labs 1 and 2 just to get the feel, but let’s start our saga at the end of lab 3. Lab 3 ends with a web application hosted in Azure and a console app running on premise. The web app has a few buttons on it. One set adds messages to a queue, another removes them. A second set of buttons drives processor utilization to 100%. If you want to guess, a safe bet is that the Autoscaler is configured to react to a queue that has filled up or to high CPU usage. We will continue our saga in the next post…
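For readers who want to peek ahead at the hosting side: with the Enterprise Library Autoscaling Application Block (WASABi), hosting the Autoscaler boils down to something like the sketch below. This follows the documented usage pattern rather than the lab code, and it assumes the rules store and service information store are already wired up in the host's configuration file.

using System;
using Microsoft.Practices.EnterpriseLibrary.Common.Configuration;
using Microsoft.Practices.EnterpriseLibrary.WindowsAzure.Autoscaling;

class AutoscalerHost
{
    static void Main()
    {
        // Resolve the configured Autoscaler; once started, it evaluates the
        // configured rules (queue length, CPU counters, ...) on a timer.
        Autoscaler autoscaler =
            EnterpriseLibraryContainer.Current.GetInstance<Autoscaler>();
        autoscaler.Start();

        Console.WriteLine("Autoscaler running - press Enter to stop.");
        Console.ReadLine();

        autoscaler.Stop();
    }
}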

    Read the article

  • SQL SERVER – 2014 CTP1 Available for Download – SQL SERVER 2014 Community Technology Preview 1

    - by Pinal Dave
Microsoft announced at TechEd Europe that SQL Server 2014 CTP 1 is available to download. You can download SQL Server 2014 CTP1 from here. Additionally, there is in-depth documentation of the product in the Product Guide over here. In this blog post I discuss the salient features I was looking forward to in the new version:
AlwaysOn now supports 8 secondaries instead of 4
Online indexing at the partition level - a good thing, as index rebuilding can now be done at the partition level
Statistics at the partition level - this will be a huge improvement in performance
In-Memory OLTP works by providing in-memory storage for the most often used tables in SQL Server
Columnstore indexes can be updated - I just can't wait for this feature
Resource Governor can control IO along with CPU and memory
Increased performance by extending the SQL Server in-memory buffer pool to SSDs
Backup to Azure Storage
You can read about the new features of SQL Server 2014 in the following links: What's New (Database Engine) What's New in Analysis Services and Business Intelligence What's New (Integration Services) What's New (Replication) What's New (Reporting Services) Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: PostADay, SQL, SQL Authority, SQL Query, SQL Server, SQL Service Pack, SQL Tips and Tricks, T SQL, Technology Tagged: CTP

    Read the article

  • How to Test and Deploy Applications Faster

    - by rickramsey
photo courtesy of mtoleric via Flickr If you want to test and deploy your applications much faster than you could before, take a look at these OTN resources. They won't disappoint. Developer Webinar: How to Test and Deploy Applications Faster - April 10 Our second developer webinar, conducted by engineers Eric Reid and Stephan Schneider, will focus on how the zones and ZFS filesystem in Oracle Solaris 11 can simplify your development environment. This is a cool topic because it will show you how to test and deploy apps in their likely real-world environments much quicker than you could before. April 10 at 9:00 am PT Video Interview: Tips for Developing Faster Applications with Oracle Solaris 11 Express We recorded this a while ago, and it talks about the Express version of Oracle Solaris 11, but most of it applies to the production release. George Drapeau, who manages a group of engineers whose sole mission is to help customers develop better, faster applications for Oracle Solaris, shares some tips and tricks for improving your applications. How ZFS and Zones create the perfect developer sandbox. What's the best way for a developer to use DTrace. How Crossbow's network bandwidth controls can improve an application's performance. To borrow the classic Ed Sullivan accolade, it's a "really good show." White Paper: What's New For Application Developers Excellent in-depth analysis of exactly how the capabilities of Oracle Solaris 11 help you test and deploy applications faster. Covers the tools in Oracle Solaris Studio and what you can do with each of them, plus source code management, scripting, and shells. How to replicate your development, test, and production environments, and how to make sure your application runs as it should in those different environments. How to migrate Oracle Solaris 10 applications to Oracle Solaris 11. How to find and diagnose faults in your application. And lots, lots more. - Rick Website Newsletter Facebook Twitter

    Read the article

  • Java - System design with distributed Queues and Locks

    - by sunny
Looking for input to evaluate a design for a system (Java) which would have a distributed queue serving several (but not too many) nodes. These nodes would process objects present in the distributed queue and on occasion require a distributed lock across the cluster on arbitrary (distributed) data structures. These (distributed) data structures could potentially live in a distributed cache. Eliminating Terracotta (DSO), Hazelcast and Akka, what could be the alternative choices? Currently considering ZooKeeper as the distributed locking mechanism. Since the recommendation is that a znode not exceed 1 MB in size, the understanding is that ZooKeeper should not be used as a distributed queue; see also Netflix Curator tech note 4. So should a distributed cache, say memcached or Redis, be used to emulate a distributed queue? I.e. the distributed queue would be stored in the cache and locked cluster-wide via ZooKeeper. Are there potential pitfalls with this high-level approach? The objects don't need to be taken off the queue; each object passes through a lifecycle which determines its removal from the queue. There would be about 10k+ objects in a queue at a given time, changing states, and any node could service one stage of an object's lifecycle. (Although not strictly necessary - one node could serve the entire lifecycle if that is more efficient.) Any suggestions/alternatives? Side note: new to ZooKeeper, Redis, etc.

    Read the article

  • What are the disadvantages of automated testing?

    - by jkohlhepp
There are a number of questions on this site that give plenty of information about the benefits that can be gained from automated testing. But I didn't see anything that represented the other side of the coin: what are the disadvantages? Everything in life is a tradeoff and there are no silver bullets, so surely there must be some valid reasons not to do automated testing. What are they? Here are a few that I've come up with: Requires more initial developer time for a given feature. Requires a higher skill level of team members. Increases tooling needs (test runners, frameworks, etc.). Complex analysis is required when a failed test is encountered - is this test obsolete due to my change, or is it telling me I made a mistake? Edit: I should say that I am a huge proponent of automated testing, and I'm not looking to be convinced to do it. I'm looking to understand what the disadvantages are so that when I go to my company to make a case for it I don't look like I'm throwing around the next imaginary silver bullet. Also, I'm explicitly not looking for someone to dispute my examples above. I am taking as true that there must be some disadvantages (everything has trade-offs) and I want to understand what those are.

    Read the article

  • CRM@Oracle Series: CRM Analytics

    - by tony.berk
    What is the most important factor that leads to a successful CRM deployment? Is it the overall strategy, strong governance, defined processes or good data quality? Well, it's definitely a combination of all these, but the most important differentiator from our experience is Business Intelligence. Business Intelligence or Analytics is commonly mentioned as a key aspect to successful CRM and other enterprise deployments. The good news is that Oracle provides pre-built analytics dashboards, which provide real-time, actionable insight, and tools to build custom analyses. However, success with analytics, especially in a large enterprise, still requires a strong strategy, clean data for analysis, and performance. Today's CRM@Oracle slidecast covers Oracle's strategy, architecture and key success factors for deploying CRM Analytics internally at Oracle. CRM@Oracle: CRM Analytics Click here to learn more about Oracle CRM products and here to learn about Oracle Business Intelligence Applications. Have you read our other postings in the CRM@Oracle Series? If you have a particular CRM area or function which you'd like to hear how Oracle implemented it internally, post a comment and we'll get it on our list.

    Read the article

  • BI Survey 14

    - by Darren Gosbell
Originally posted on: http://geekswithblogs.net/darrengosbell/archive/2014/05/23/bi-survey-14.aspx It's BI Survey time again :) If you haven't done this before, here is a little background on it from the guys that run it: The BI Survey, published by BARC, is the world's largest and most comprehensive annual survey of the real-world experiences of business intelligence software users. Now in its fourteenth year, The BI Survey regularly attracts around 3000 responses from a global audience. It provides an invaluable resource to companies deciding which software to select and to vendors who want to understand the needs of the market. The Survey is funded by its readers, not by the participant vendors. As with the previous thirteen editions, no vendors have been involved in any way with the formulation of The BI Survey. Unlike most other surveys, it is not commissioned, sponsored or influenced by vendors. Here is a link to the survey: https://digiumenterprise.com/answer/?link=1981-ZYQSEY8B If you take the survey you will get access to a summary of the results. By helping to promote the survey here I'll get access to some more detailed results, including some country-specific analysis, so it will be interesting to see the results.

    Read the article

  • indirect rendering issue on 12.04, using ati driver

    - by lurscher
I have an Ubuntu 12.04 64-bit system. When I run glxinfo I see some strange errors about indirect rendering and about failing to load some lib32/dri/swrast_dri libraries. Any idea what is going on? Please let me know if I can enhance the relevant information provided in this question.

$ LIBGL_DEBUG=verbose glxinfo
name of display: :0
libGL: screen 0 does not appear to be DRI2 capable
libGL: OpenDriver: trying /usr/lib32/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib32/dri/swrast_dri.so
libGL error: dlopen /usr/lib32/dri/swrast_dri.so failed (/usr/lib32/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
libGL: OpenDriver: trying /usr/lib/dri/tls/swrast_dri.so
libGL: OpenDriver: trying /usr/lib/dri/swrast_dri.so
libGL error: dlopen /usr/lib/dri/swrast_dri.so failed (/usr/lib/dri/swrast_dri.so: cannot open shared object file: No such file or directory)
libGL error: unable to load driver: swrast_dri.so
libGL error: reverting to indirect rendering
display: :0  screen: 0
direct rendering: No (If you want to find out why, try setting LIBGL_DEBUG=verbose)
server glx vendor string: ATI
server glx version string: 1.4
server glx extensions: GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_texture_from_pixmap, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group
client glx vendor string: Mesa Project and SGI
client glx version string: 1.4
client glx extensions: GLX_ARB_create_context, GLX_ARB_create_context_profile, GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_EXT_framebuffer_sRGB, GLX_EXT_create_context_es2_profile, GLX_MESA_copy_sub_buffer, GLX_MESA_multithread_makecurrent, GLX_MESA_swap_control, GLX_OML_swap_method, GLX_OML_sync_control, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGI_video_sync, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap, GLX_INTEL_swap_event
GLX version: 1.4
GLX extensions: GLX_ARB_get_proc_address, GLX_ARB_multisample, GLX_EXT_import_context, GLX_EXT_visual_info, GLX_EXT_visual_rating, GLX_MESA_multithread_makecurrent, GLX_OML_swap_method, GLX_SGI_make_current_read, GLX_SGI_swap_control, GLX_SGIS_multisample, GLX_SGIX_fbconfig, GLX_SGIX_pbuffer, GLX_SGIX_visual_select_group, GLX_EXT_texture_from_pixmap
OpenGL vendor string: ATI Technologies Inc.
OpenGL renderer string: AMD Radeon HD 6800 Series
OpenGL version string: 1.4 (2.1 (4.2.11762 Compatibility Profile Context))
OpenGL extensions: GL_ARB_depth_texture, GL_ARB_draw_buffers, GL_ARB_fragment_program, GL_ARB_fragment_program_shadow, GL_ARB_framebuffer_object, GL_ARB_imaging, GL_ARB_multisample, GL_ARB_multitexture, GL_ARB_occlusion_query, GL_ARB_point_parameters, GL_ARB_point_sprite, GL_ARB_shadow, GL_ARB_shadow_ambient, GL_ARB_texture_border_clamp, GL_ARB_texture_compression, GL_ARB_texture_cube_map, GL_ARB_texture_env_add, GL_ARB_texture_env_combine, GL_ARB_texture_env_crossbar, GL_ARB_texture_env_dot3, GL_ARB_texture_mirrored_repeat, GL_ARB_texture_non_power_of_two, GL_ARB_texture_rectangle, GL_ARB_transpose_matrix, GL_ARB_vertex_program, GL_ARB_window_pos, GL_EXT_abgr, GL_EXT_bgra, GL_EXT_blend_color, GL_EXT_blend_equation_separate, GL_EXT_blend_func_separate, GL_EXT_blend_minmax, GL_EXT_blend_subtract, GL_EXT_copy_texture, GL_EXT_draw_range_elements, GL_EXT_fog_coord, GL_EXT_framebuffer_blit, GL_EXT_framebuffer_multisample, GL_EXT_framebuffer_object, GL_EXT_multi_draw_arrays, GL_EXT_packed_pixels, GL_EXT_point_parameters, GL_EXT_rescale_normal, GL_EXT_secondary_color, GL_EXT_separate_specular_color, GL_EXT_shadow_funcs, GL_EXT_stencil_wrap, GL_EXT_subtexture, GL_EXT_texture3D, GL_EXT_texture_compression_s3tc, GL_EXT_texture_edge_clamp, GL_EXT_texture_env_add, GL_EXT_texture_env_combine, GL_EXT_texture_env_dot3, GL_EXT_texture_lod, GL_EXT_texture_lod_bias, GL_EXT_texture_mirror_clamp, GL_EXT_texture_object, GL_EXT_texture_rectangle, GL_EXT_vertex_array, GL_ATI_draw_buffers, GL_ATI_texture_env_combine3, GL_ATI_texture_mirror_once, GL_ATIX_texture_env_combine3, GL_IBM_texture_mirrored_repeat, GL_INGR_blend_func_separate, GL_NV_texture_rectangle, GL_SGIS_generate_mipmap, GL_SGIS_texture_border_clamp, GL_SGIS_texture_edge_clamp, GL_SGIS_texture_lod, GL_SGIX_shadow_ambient, GL_SUN_multi_draw_arrays

    Read the article

  • Do I need path finding to make AI avoid obstacles?

    - by yannicuLar
How do you know when a path-finding algorithm is really needed? There are contexts where you just want to improve AI navigation to avoid an object, like a spaceship that won't crash into a planet, or a car that already knows where to steer but needs small corrections to avoid a road bump. As I've seen in similar posts, the obvious solution is to implement some path-finding algorithm, most likely A*, and let your AI-controlled object navigate through the path. Now, I have the necessary skills to implement a path-finding algorithm, and I'm not being lazy here, but I'm still a bit skeptical about whether this is really needed. I have the impression that path-finding is appropriate for navigating through a maze, or for picking a path when there are many alternatives. But in obstacle avoidance, when you do know the path but need to make slight corrections, is path-finding really necessary? Even when the obstacles are sparse or small? I mean, in real life, when you're driving and notice a bump on the road, you just have to pick between steering a bit to the left (keeping the bump on your right side) or the other way around. You will not consider stopping, or going backwards. Path-finding would be appropriate when you need to pick a route through a city, right? So, are there any other methods to help AI navigation, apart from path-finding? And if there are, how do you know when path-finding is the appropriate algorithm? Thanks for any thoughts
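One family of techniques that fits the "slight corrections" case described here is steering behaviors, obstacle avoidance in particular. A minimal sketch, assuming a 2D world and System.Numerics; all names (Avoid, avoidRadius) are made up for illustration:

using System;
using System.Numerics;

static class Steering
{
    // Hedged sketch: blend the desired heading with a sideways push away
    // from a nearby obstacle, stronger the closer the obstacle is.
    public static Vector2 Avoid(Vector2 position, Vector2 desiredHeading,
                                Vector2 obstacle, float avoidRadius)
    {
        Vector2 toObstacle = obstacle - position;
        float distance = toObstacle.Length();

        // Obstacle is far away or already behind us: no correction needed.
        if (distance > avoidRadius || Vector2.Dot(toObstacle, desiredHeading) <= 0)
            return Vector2.Normalize(desiredHeading);

        // Steer perpendicular to the obstacle direction, picking the side
        // that deviates least from the current course.
        Vector2 lateral = new Vector2(-toObstacle.Y, toObstacle.X);
        if (Vector2.Dot(lateral, desiredHeading) < 0)
            lateral = -lateral;

        float strength = 1f - distance / avoidRadius; // 0 at the edge, 1 on contact
        return Vector2.Normalize(desiredHeading + strength * Vector2.Normalize(lateral));
    }
}

Unlike A*, this needs no graph of the world, only the obstacles near the agent each frame, which matches the road-bump scenario; full path-finding starts to pay off when the detour is topological (mazes, road networks) rather than a local swerve.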

    Read the article

  • CodePlex Daily Summary for Sunday, March 21, 2010

    CodePlex Daily Summary for Sunday, March 21, 2010New ProjectsAdaptCMS: AdaptCMS is an open source CMS that is made for complete control of your website, easiness of use and easily adaptable to any type of website. It's...Aura: Aura is a application that calculates average color of desktop background image or whole screen and sets it as Aero Glass color.Boxee Launcher: Boxee Launcher is a simple Windows Media Center add-in that attempts to launch Boxee and manage the windows as seamlessly as possible.ClothingSMS: ClothingSMSEasySL3ColorPicker: Silverlight 3 color picker user control.Fluent Moq Builder: Gives a fluent interface to help with building complex mock objects with Moq, such as mocks of HttpContextBase for help in unit testing ASP.NET MVC.Folder Bookmarks: A simple folder/file bookmark manager. Simple and easy to use. No need to navigate large folder directories, just double-click the bookmark to open...GeocodeThe.Net: GeoCodeThe.Net seeks to promote geographic tagging for all content on the web. It is our belief that anything on the web can and should be geocoded...GNF: GNF is a open source WPF based GPS library controlsHKGolden Express: HKGolden Express is a web application to generate simplified layout of HKGolden forum. HKGolden Express is written in Java EE, it can be deployed o...Informant: Informant provides people with the desire to send mass SMS to specific groups with the ability to do so using Google Voice. Included with Informant...JSON Object Serializer .Net 1.1: JSON serializer is used to created javascript object notation strings. It was written in the .NET 1.1 framework, and has capabilities of serializ...LightWeight Application Server: LWAS aims to provide the means for non-technical developers using just a web browser to create data-centered applications from user interface desig...MicroHedge: Quant FiNerd Clock: NerdClock is my windows phone 7 test app. A clock for nerds, time reads in binary.PhotoHelper: PhotoHelper makes it easier to organize the photoes, if your photoes are put into different locations, but you think they are the same category, yo...Pylor: An ASP.NET MVC custom attribute that allows the configuration of role based security access rules with similar functionality to the System.Web.Mvc....radiogaga: Access an online data source of internet streaming media and present it using a mixed paradigm of embedded web browser and rich client functionalit...Register WCF LOB Adapter MSBuild Task: If you would like to use MSBuild to register a WCF LOB Adapter in a given server, the custom tasks: RegisterWCFLOBAdapter and UnregisterWCFLOBAdapt...Restart Explorer: Utility to start, stop and restart Windows Explorer.Silverlight 4 Netflix Browser: Demonstrates using a WCF Data Client in Silverlight 4 to browse movie titles with the Netflix OData API announced at MIX 10.trayTwittr: With trayTwittr you can easily update your Twitterstatus right from the Systray. The GUI is designed like the Notificationpanels in Windows 7 (e.g....Warensoft Socket Server: Warensoft Socket Server is a solo server, which never cares about the real logical business. 
While you could process your socket message with IronP...Weka - Message Serialization: Message serialization framework for .net, including Compact Framework.New Releases[Tool] Vczh Visual Studio UnitTest Coverage Analyzer: Coverage Analyzer (beta): Done features: Load Visual Studio Code Coverage XML File (get this file by clicking "Export Results" in "Test->Windows->Code Coverage Results" in V...Aura: Aura Beta 1: Initial releaseBoxee Launcher: BoxeeLauncher Release 1.0.1.0: BoxeeLauncher Release 1.0.1.0 is the initial, barely-tested release of this Windows Media Center add-in. It should work in Vista Media Center and 7...Controlled Vocabulary: 1.0.0.2: System Requirements Outlook 2007 / 2010 .Net Framework 3.5 Installation 1. Close Outlook (Use Task Manager to ensure no running instances in the b...CycleMania Starter Kit EAP - ASP.NET 4 Problem - Design - Solution: Cyclemania 0.08.33: removed ASP.NET Menu from admin module applied security role filtering to Dashboard panels/tabsDDDSample.Net: 0.7: This is the next major release of DDDSample. This time I give you 4 different versions of the application: Classic (vanilla) with synchronous inter...DirectoryInfoEx: DirectoryInfoEx 0.16: 03-14-10 Version 0.13 o Fixed FileSystemWaterEx ignore remove directory event. o Fixed Removed IDisposable ...Employee Scheduler: Employee Scheduler [2.6]: Fixed clear data methods to handle holiday modification Added buttons to allow holiday and add time exceptions Updated drag/drop and resize of holi...Enovatics Foundation Library: Enovatics Foundation Library V1.4: This version provides the following components : Strongly Typed cache management, CSV management Base classes for SQL Server data access laye...Fluent Moq Builder: Version 0.1: Intial release. Contains (incomplete) builders for HttpRequestBase, HttpContextBase and ControllerContext. Mock methods so far focus around request...Folder Bookmarks: Folder Bookmarks 1.4: This is the latest version of Folder Bookmarks (1.4). It has an installer - it will create a directory 'CPascoe' in My Documents. Once you have ex...Folder Bookmarks: Source Code: This has the all the code for Folder Bookmarks in a Text file.Genesis Smart Client Framework: Genesis Smart Client Framework v1.60.1024.1: This release features the first installer for the Genesis Smart Client Framework. The installer creates the database and set's up the Internet Info...HKGolden Express: HKGoldenExpress (Build 201003201725): New features: (None) Bug fix: (None) Improvements: Added <meta> tag to optimize screen layout for different screen size. Added drop-down li...Home Access Plus+: v3.1.5.0: Version 3.1.5.0 Release Change Log: Fixed CSS for My Computer in List View Ability to remember which view mode you have selected Added HA+ home...IT Tycoon: IT Tycoon 0.2.0: Started refactoring toward more formatted and documented code and XAML.JSON Object Serializer .Net 1.1: jsonSerializer: Basic jsonSerializer binary. Now only handles an object model using reflection. There's no optimization added to the codebase handling .Net Refle...LightWeight Application Server: 0.4.0: 2010-03-20 lwas 0.4.0 This release is intended for c# developers audience only. 
Developed with MS vWD 2008 Express with .NET 3.5 and writen in c#....Microsoft Dynamics CRM 4.0 Marketing List Member Importer: Nocelab ExcelAddin - Release 2.0: Release note: - new installation procedure - fix some bugs related with the import procedure - errors during the import are displayed in red bold ...MSBuild Mercurial Tasks: 0.2.1 Stable: This release realises the Scenario 2 and provides two MSBuild tasks: HgCommit and HgPush. This task allows to create a new changeset in the current...NetSockets: NetBox (Example): Example application using the NetSockets library.NetSockets: NetSockets: The NetSockets library (DLL)Open Dotnet CMS: Open Dotnet CMS 1.6.2: This release update Open Dotnet CMS Console which now uses the modulare client application framework provided by Viking.Windows.Form library. The ...Open Portal Foundation: Open Portal Foundation V1.4: This release updates templates and theming library, and templates are now thematizable. This release also provides a better sample site and online ...PHPWord: PHPWord 0.6.0 Beta: Changelog: Added support for hyperlinks (inserting and formatting) Added support for GD images (created by PHP) Added support for Full-Table-St...Plurk.NET API and Client Applications: Plurk API Component: Plurk API Component is a wrapper of Plurk HTTP API.Register WCF LOB Adapter MSBuild Task: Register WCF LOB Adapter MSBuild Task 1.0: Register WCF LOB Adapter MSBuild Task Version 1.0 For more information visit: http://whiteboardworks.com/2010/02/installing-wcf-lob-adapters-wit...SCSI Interface for Multimedia and Block Devices: Release 11 - Complete User-Friendly Burning App!: I made numerous internal and external changes in this release of the program, the most important ones of which are: An intermediate buffer to make ...SharePoint LogViewer: SharePoint LogViewer 1.5.2: This release has following improvements: Scroll position is maintained when log is refreshed Filtering/Sorting performance has been significantly ...ShellLight: ShellLight 0.2.0.0: This is the first feature complete and full functional version of ShellLight. It is still a very simple framework with a limited set of features, b...Silverlight Media Player (3.0): Silverlight Media Player v.02: Silverlight Media Player (2.0/3.0/4.0) major upgrade: initial settings and media elements are stored in external XML filesStardust: Stardust Binaries: Stardust BinariesToolkit.forasp.net Multipurpose Tools for Asp.net written in C#: Beta 1: Beta 1 of csToolkit.dllToolkitVB.net is a set of Multipurpose Tools for Asp.net written in VB: Beta 1: Beta 1 of ToolKitVB.dllTransparent Persistence.Net: TP.Net 0.1.1: This is a minor update that produces separate 2.0 and 3.5 builds. Additionally type to persistence store naming has been enhanced.VCC: Latest build, v2.1.30320.0: Automatic drop of latest buildVisual Studio DSite: Screen Capture Program (Visual C++ 2008): This screen capture program can capture the whole screen of your computer and save it in any picture format you want including gif, bmp, jpg and pn...WPF Dialogs: Version 0.1.3 for .Net 3.5: This is a special version of the "Version 0.1.3" for the .Net-framework 3.5. You can use these library like the .Net 4.0 version. 
The changes are o...Most Popular ProjectsMetaSharpSavvy DateTimeRawrWBFS ManagerSilverlight ToolkitASP.NET Ajax LibraryMicrosoft SQL Server Product Samples: DatabaseAJAX Control ToolkitLiveUpload to FacebookWindows Presentation Foundation (WPF)Most Active ProjectsLINQ to TwitterRawrOData SDK for PHPjQuery Library for SharePoint Web ServicesDirectQPHPExcelFarseer Physics Enginepatterns & practices – Enterprise LibraryBlogEngine.NETNB_Store - Free DotNetNuke Ecommerce Catalog Module

    Read the article

< Previous Page | 511 512 513 514 515 516 517 518 519 520 521 522  | Next Page >