Search Results

Search found 18665 results on 747 pages for 'inside red gate'.


  • New Validation Attributes in ASP.NET MVC 3 Future

    - by imran_ku07
    Introduction:

    Validating user input is a very important step in collecting information from users, because it helps you prevent errors while processing the data. Incomplete or improperly formatted user input will create a lot of problems for your application. Fortunately, ASP.NET MVC 3 makes it very easy to handle the most common input validations. ASP.NET MVC 3 includes the Required, StringLength, Range, RegularExpression, Compare and Remote validation attributes for common scenarios. These attributes cover most user input, but validation for email addresses, file extensions, credit cards, URLs, etc. is still missing. Fortunately, some of these validation attributes are available in ASP.NET MVC 3 Future. In this article, I will show you how to leverage the Email, Url, CreditCard and FileExtensions validation attributes (which are available in ASP.NET MVC 3 Future) in an ASP.NET MVC 3 application.

    Description:

    First of all, download the ASP.NET MVC 3 RTM source code from here, then extract all the files into a folder. Open the MvcFutures project from the mvc3-rtm-sources\mvc3\src\MvcFutures folder and build it. If you get compile-time errors, simply remove the references to the System.Web.WebPages and System.Web.Mvc assemblies, add them again from the .NET tab, and build the project once more. This will create a Microsoft.Web.Mvc assembly inside the mvc3-rtm-sources\mvc3\src\MvcFutures\obj\Debug folder. Now we can use the Microsoft.Web.Mvc assembly inside our application.

    Create a new ASP.NET MVC 3 application. For demonstration purposes, I will create a dummy model, UserInformation. Create a new class file UserInformation.cs inside the Model folder and add the following code:

        public class UserInformation
        {
            [Required]
            public string Name { get; set; }

            [Required]
            [EmailAddress]
            public string Email { get; set; }

            [Required]
            [Url]
            public string Website { get; set; }

            [Required]
            [CreditCard]
            public string CreditCard { get; set; }

            [Required]
            [FileExtensions(Extensions = "jpg,jpeg")]
            public string Image { get; set; }
        }

    Inside the UserInformation class, I am using the Email, Url, CreditCard and FileExtensions validation attributes, which are defined in the Microsoft.Web.Mvc assembly. By default, FileExtensionsAttribute allows the png, jpg, jpeg and gif extensions; you can override this through the Extensions property of the FileExtensionsAttribute class.
    Then open (or create) the HomeController.cs file and add the following code (the demo POST action simply returns the view so that the validation messages redisplay; see the sketch after the summary for how a real action would branch on ModelState.IsValid):

        public class HomeController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }

            [HttpPost]
            public ActionResult Index(UserInformation u)
            {
                return View();
            }
        }

    Next, open (or create) the Index view for the Home controller and add the following code:

        @model NewValidationAttributesinASPNETMVC3Future.Model.UserInformation
        @{
            ViewBag.Title = "Index";
            Layout = "~/Views/Shared/_Layout.cshtml";
        }
        <h2>Index</h2>
        <script src="@Url.Content("~/Scripts/jquery.validate.min.js")" type="text/javascript"></script>
        <script src="@Url.Content("~/Scripts/jquery.validate.unobtrusive.min.js")" type="text/javascript"></script>
        @using (Html.BeginForm()) {
            @Html.ValidationSummary(true)
            <fieldset>
                <legend>UserInformation</legend>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Name)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Name)
                    @Html.ValidationMessageFor(model => model.Name)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Email)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Email)
                    @Html.ValidationMessageFor(model => model.Email)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Website)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Website)
                    @Html.ValidationMessageFor(model => model.Website)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.CreditCard)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.CreditCard)
                    @Html.ValidationMessageFor(model => model.CreditCard)
                </div>
                <div class="editor-label">
                    @Html.LabelFor(model => model.Image)
                </div>
                <div class="editor-field">
                    @Html.EditorFor(model => model.Image)
                    @Html.ValidationMessageFor(model => model.Image)
                </div>
                <p>
                    <input type="submit" value="Save" />
                </p>
            </fieldset>
        }
        <div>
            @Html.ActionLink("Back to List", "Index")
        </div>

    Now run your application. You will find that both client-side and server-side validation for the above validation attributes work smoothly.

    Summary:

    Email, URL, credit card and file extension validations are very common. In this article, I showed you how to add these validations to your application, and I explained this with an example. I am also attaching a sample application which includes Microsoft.Web.Mvc.dll, so you can add a reference to the Microsoft.Web.Mvc assembly directly instead of doing the manual work. Hope you will enjoy this article too.
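    A footnote on the controller above: a production POST action would typically branch on ModelState.IsValid before using the input. A minimal sketch, assuming the same UserInformation model (the redirect target is illustrative only):

        [HttpPost]
        public ActionResult Index(UserInformation u)
        {
            if (!ModelState.IsValid)
            {
                // Redisplay the form; the validation attributes have already
                // populated ModelState with the error messages.
                return View(u);
            }

            // Process the valid input here, then redirect (Post-Redirect-Get).
            return RedirectToAction("Index");
        }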

    Read the article

  • TechEd 2010 Day One – How I Travel

    - by BuckWoody
    Normally when I blog on the first day of a conference, well, there hasn’t been a first day yet. So I talk about the value of a conference or some other facet. And normally in my (non-conference) blogs, I show you how I have learned to be a data professional – things I’ve learned how to do over the years. But in all that time, I don’t think I’ve ever talked about a big part of my job – traveling. I’ve traveled a lot throughout the years, when I’ve taught, gone to conferences, consulted, and in my current role assisting Microsoft customers with large-scale database system designs. So I’ll share a few thoughts about what I do. Keep in mind that I travel for short durations, just a day or so, and sometimes I travel internationally. For those trips I prepare differently – what I’m talking about here is what I do for a multi-day, same-country trip. Hopefully you find it useful. I’ll tag a few other travelers I know to add their thoughts.

    Preparing for Travel

    When I’m notified of a trip, I begin researching the location. I find the flights, hotel and (if I have to) a car to use while I’m away. We have an in-house system we use to book the travel, but when I travel not-for-Microsoft I use Expedia and Kayak to find what I need. Traveling on Sunday and Friday is the worst. I have to do it sometimes (like this week) and it’s always a bad idea. But you can blunt the impact by booking as early as you can stand it. That means I have to be up super-early, but the flights are normally on time. I stay flexible, and always have a backup plan in case the flights are delayed or canceled.

    For the hotel, I tend to go on the cheaper side, and I look for older hotels that have been renovated, or quirky ones. For instance, in Boise, ID recently I stayed at a 60’s-themed (think Mad Men) hotel that was very cool. I always go on the less expensive side – I find the “luxury” hotels nail me for Internet, food, everything. The cheaper places include all kinds of extras – breakfast, shuttles and the like – that would otherwise start to add up. I even call ahead to make sure there’s an iron and ironing board available, since I’ll need those when I get there.

    I find any way I can not to get a car. I use mass transit wherever possible, and try to make friends and pay their gas to take me places. In a pinch, I’ll use a taxi. It ends up being cheaper, faster, and less stressful all around.

    Packing

    Over the years I’ve learned never to check luggage whenever I can avoid it. To do that, I lay out everything I want to take with me on the bed, and then try to make sure I’m really going to use it. I wear a dark wool set of pants, which I can clean and wear in hot and cold climates. I bring undies and socks of course, and for most places I have to wear “dress up” shirts. I bring at least two print T-shirts in case I want to dress down for something while I’m gone, but I only bring one pair of shoes. All the clothes are rolled as tightly as possible, as I learned in the military. Then I use those to cushion the electronics I take.

    For toiletries I bring a shaver, toothpaste and toothbrush, D/O and a small brush. Everything else the hotel will provide.

    For entertainment, I take a small Zune, a full PC headset (so I can make IP calls on the road) and my laptop. I don’t take books or anything else – everything is electronic. I use e-books (downloaded from our library), audio books (on the Zune) and I also bring along a Kaossilator (more here) to play music in the hotel room or even on the plane without being heard.
    If I can, I pack into one roll-on bag. There’s not a lot better than this one, but I also have a bag I was given as a prize for something or other here at Microsoft. Either way, I like something with fewer pockets and more big, open compartments. Everything gets rolled up and packed in, with all of the wires and chargers in small bags my wife made for me. The laptop (and anything I don’t want gate-checked) goes on top or in an outside pouch so I can grab it quickly if I have to gate-check the bag. As much as I can, I try to go in one bag. When I can’t (like this week) I use this bag since it can expand, roll up, crush and even be put away later. It’s super-heavy canvas and worth the price. This allows me to not check a bag.

    Journey Logistics

    The day of the trip, I have everything ready since I’m getting up early. I pack a few small snacks inside a large-mouth plastic water bottle, which protects the snacks and lets me get water in the terminal. I bring along those little powdered drink mixes to add to the water.

    At the airport, I make a beeline for the power outlets. I charge up my laptop and phone, and download all my e-mails so I can work on them off-line in the air. I don’t travel as often as I used to – just every month or so now – so I don’t have a membership to an airline club. If I travel much more, I’ll invest in one again; they are WELL worth the money, for the wifi, food and quiet if for nothing else.

    I print out my logistics on paper and put that in my pocket – flight numbers, hotel addresses and phones for everything. That way if I have to make a change, I don’t have to boot up anything or even have power to be able to roll with the punches if things change.

    Working While Away

    While I’m away I realize I’m going to be swamped with things at the conference or with my clients. So I turn on Out-Of-Office notifications to let people know I won’t be as responsive, and I keep my Outlook calendar up to date so my co-workers know what I’m up to. I even update it with hotel and phone info in case they really need to reach me. I share my calendar with my wife so my family knows what I’m doing as well.

    I check my e-mail during breaks, but I only respond in the evening or early morning at the hotel. I tweet during conferences. The point is to be as present as possible during the event or when I’m at the clients. Both deserve it.

    So those are my initial thoughts. I’ll tag Brent Ozar, Brad McGeHee and Paul Randal, and they can tag whomever they wish.

    Read the article

  • LLBLGen Pro feature highlights: grouping model elements

    - by FransBouma
    (This post is part of a series of posts about features of the LLBLGen Pro system.)

    When working with an entity model which has more than a few entities, it's often convenient to be able to group entities together if they belong to a semantic sub-model. For example, if your entity model has several entities which are about 'security', it would be practical to group them together under the 'security' moniker. This way, you can easily find them again, yet they stay inside the complete entity model so their relationships with entities outside the group are kept. In other situations your domain consists of semi-separate entity models which all target tables/views located in the same database. It might then be convenient to have a single project to manage the complete target database, yet keep the entity models separate from each other and have them result in separate code bases. LLBLGen Pro can do both for you. This blog post will illustrate both situations. The feature is called group usage and is controllable through the project settings. This setting is supported on all supported O/R mapper frameworks.

    Situation one: grouping entities in a single model.

    This situation is common for entity models which are dense, with many relationships between all sub-models: you can't split them up easily into separate models (nor do you likely want to), however it's convenient to have them grouped into groups inside the entity model at the project level. A typical example for this is the AdventureWorks example database for SQL Server. This database, which is a single catalog, has a schema for each sub-group, however most of these schemas are tightly connected with each other: adding all schemas together will give a model with entities which indirectly are related to all other entities. LLBLGen Pro's default setting for group usage is AsVisualGroupingMechanism, which is what this situation is all about: we group the elements for visual purposes; it has no real meaning for the model nor for the code generated.

    Let's reverse engineer AdventureWorks to an entity model. By default, LLBLGen Pro uses the schema a reverse-engineered element is in as the group it will be placed in. This is convenient if you have already categorized tables/views in schemas, as is the case in AdventureWorks. Of course this can be switched off, or corrected on the fly. When reverse engineering, we'll walk through a wizard which will guide us with the selection of the elements for which relational model data should be retrieved, which we can later use to reverse engineer to an entity model. The first step, after specifying which database server to connect to, is to select these elements. Below we can see the AdventureWorks catalog as well as the different schemas it contains. We'll include all of them.

    After the wizard completes, we have all relational model data nicely in our catalog data, with schemas. So let's reverse engineer entities from the tables in these schemas. In the catalog explorer we select the schemas 'HumanResources', 'Person', 'Production', 'Purchasing' and 'Sales', then right-click one of them and from the context menu select Reverse engineer Tables to Entity Definitions.... This will bring up the dialog below. We check all checkboxes in one go by checking the checkbox at the top, to mark them all to be added to the project.

    As you can see, LLBLGen Pro has already filled in the group name based on the schema name, as this is the default and we didn't change the setting. If you want, you can select multiple rows at once and set the group name to something else using the controls on the dialog. We're fine with the group names chosen, so we'll simply click Add to Project. This gives the following result (I collapsed the other groups to keep the picture small ;)). As you can see, the entities are now grouped.

    Just to see how dense this model is, I've expanded the relationships of Employee: as you can see, it has relationships with entities from three other groups than HumanResources. It's not doable to cut up this project into sub-models without duplicating the Employee entity in all those groups, so this model is better suited to be used as a single model resulting in a single code base. However, it benefits greatly from having its entities grouped into separate groups at the project level, to make work done on the model easier. Now let's look at another situation, namely where we work with a single database while we want to have multiple models, and for each model a separate code base.

    Situation two: grouping entities in separate models within the same project.

    To get rid of the entities and see the second situation in action, simply undo the reverse engineering action in the project. We still have the AdventureWorks relational model data in the catalog. To make LLBLGen Pro see each group in the project as a separate project, open the Project Settings, navigate to General and set Group usage to AsSeparateProjects. In the catalog explorer, select Person and Production, right-click them and select again Reverse engineer Tables to Entities.... Again check the checkbox at the top to mark all entities to be added, and click Add to Project. We get two groups, as expected, however this time the groups are seen as separate projects. This means that the validation logic inside LLBLGen Pro will flag it as an error if there's e.g. a relationship or an inheritance edge linking two groups together, as that would lead to a cyclic reference in the code bases.

    To see this variant of the grouping feature, seeing the groups as separate projects, in action, we'll generate code from the project with the two groups we just created: select from the main menu Project -> Generate Source-code... (or press F7 ;)). In the dialog popping up, select the target .NET framework you want to use, the template preset, fill in a destination folder and click Start Generator (normal). This will start the code generator process. As expected, the code generator has simply generated two code bases, one for Person and one for Production.

    The group name is used inside the namespace for the different elements. This allows you to add both code bases to a single solution and use them together in a different project without problems. Below is a snippet from the code file of a generated entity class.

        //...
        using System.Xml.Serialization;
        using AdventureWorks.Person;
        using AdventureWorks.Person.HelperClasses;
        using AdventureWorks.Person.FactoryClasses;
        using AdventureWorks.Person.RelationClasses;
        using SD.LLBLGen.Pro.ORMSupportClasses;

        namespace AdventureWorks.Person.EntityClasses
        {
            //...
            /// <summary>Entity class which represents the entity 'Address'.<br/><br/></summary>
            [Serializable]
            public partial class AddressEntity : CommonEntityBase
            //...
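    Since each group compiles into its own namespace, a consuming project can reference both generated code bases side by side. A hypothetical sketch (the entity names are assumed from the AdventureWorks model shown above, and the usual ORM plumbing is omitted):

        using AdventureWorks.Person.EntityClasses;
        using AdventureWorks.Production.EntityClasses;

        public class GroupedModelsDemo
        {
            public static void Main()
            {
                // Entities from the two separately generated code bases
                // coexist in one consuming project without name clashes.
                AddressEntity address = new AddressEntity();
                ProductEntity product = new ProductEntity();
            }
        }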
    The advantage of this is that you can have two code bases and work with them separately, yet have a single target database and maintain everything in a single location. If you decide to move to a single code base, you can do so with a change of one setting. It's also useful if you want to keep the groups as separate models (and code bases) yet want to add relationships to elements from another group using a copy of the entity: you can simply reverse engineer the target table to a new entity into a different group, effectively making a copy of the entity. As there's a single target database, changes made to that database are reflected in both models, which makes maintenance easier than when you'd have a separate project for each group, with its own relational model data.

    Conclusion

    LLBLGen Pro offers a flexible way to work with entities in sub-models and control how the sub-models end up in the generated code.

    Read the article

  • Anatomy of a .NET Assembly - CLR metadata 1

    - by Simon Cooper
    Before we look at the bytes comprising the CLR-specific data inside an assembly, we first need to understand the logical format of the metadata. (For this post I'll only be looking at simple pure-IL assemblies; mixed-mode assemblies and other things complicate matters quite a bit.)

    Metadata streams

    Most of the CLR-specific data inside an assembly is inside one of 5 streams, which are analogous to the sections in a PE file. The name of each section in a PE file starts with a ., and the name of each stream in the CLR metadata starts with a #. All but one of the streams are heaps, which store unstructured binary data. The predefined streams are:

    #~ Also called the metadata stream, this stream stores all the information on the types, methods, fields, properties and events in the assembly. Unlike the other streams, the metadata stream has predefined contents and structure.
    #Strings This heap is where all the namespace, type and member names are stored. It is referenced extensively from the #~ stream, as we'll be seeing later.
    #US Also known as the user string heap, this stream stores all the strings used in code directly. All the strings you embed in your source code end up in here. This stream is only referenced from method bodies.
    #GUID This heap exclusively stores GUIDs used throughout the assembly.
    #Blob This heap is for storing pure binary data - method signatures, generic instantiations, that sort of thing.

    Items inside the heaps (#Strings, #US, #GUID and #Blob) are indexed using a simple binary offset from the start of the heap. At that offset is a coded integer giving the length of the item, and the item's bytes immediately follow. The #GUID stream is slightly different, in that GUIDs are all 16 bytes long, so a length isn't required.

    Metadata tables

    The #~ stream contains all the assembly metadata. The metadata is organised into 45 tables, which are binary arrays of predefined structures containing information on various aspects of the metadata. Each entry in a table is called a row, and the rows are simply concatenated together in the file on disk. For example, each row in the TypeRef table contains, in this order:

    1. A reference to where the type is defined (most of the time, a row in the AssemblyRef table).
    2. An offset into the #Strings heap with the name of the type.
    3. An offset into the #Strings heap with the namespace of the type.

    The important tables are (with their table number in hex):

    0x2: TypeDef, 0x4: FieldDef, 0x6: MethodDef, 0x14: EventDef, 0x17: PropertyDef - Contain basic information on all the types, fields, methods, events and properties defined in the assembly.
    0x1: TypeRef - The details of all the referenced types defined in other assemblies.
    0xa: MemberRef - The details of all the referenced members of types defined in other assemblies.
    0x9: InterfaceImpl - Links the types defined in the assembly with the interfaces each type implements.
    0xc: CustomAttribute - Contains information on all the attributes applied to elements in this assembly, from method parameters to the assembly itself.
    0x18: MethodSemantics - Links properties and events with the methods that comprise the get/set or add/remove methods of the property or event.
    0x1b: TypeSpec, 0x2b: MethodSpec - These tables provide instantiations of generic types and methods for each usage within the assembly.

    There are several ways to reference a single row within a table. The simplest is to specify the 1-based row index (RID). The indexes are 1-based so a value of 0 can represent 'null'.
    In this case, which table the row index refers to is inferred from the context. If the table can't be determined from the context, then a particular row is specified using a token. This is a 4-byte value with the most significant byte specifying the table, and the other 3 specifying the 1-based RID within that table. This is generally how a metadata table row is referenced from the instruction stream in method bodies. (A short sketch of decoding a token follows at the end of this post.) The third way is to use a coded token, which we will look at in the next post.

    So, back to the bytes

    Now that we've got a rough idea of how the metadata is logically arranged, we can look at the bytes comprising the start of the CLR data within an assembly. The first 8 bytes of the .text section are used by the CLR loader stub. After that, the CLR-specific data starts with the CLI header. I've highlighted the important bytes in the diagram. In order, they are:

    1. The size of the header. As the header is a fixed size, this is always 0x48.
    2. The CLR major version. This is always 2, even for .NET 4 assemblies.
    3. The CLR minor version. This is always 5, even for .NET 4 assemblies, and seems to be ignored by the runtime.
    4. The RVA and size of the metadata header. In the diagram, the RVA 0x20e4 corresponds to the file offset 0x2e4.
    5. Various flags specifying if this assembly is pure-IL, whether it is strong name signed, and whether it should be run as 32-bit (this is how the CLR differentiates between x86 and AnyCPU assemblies).
    6. A token pointing to the entrypoint of the assembly. In this case, 06 (the last byte) refers to the MethodDef table, and 01 00 00 refers to the first row in that table.
    7. (After a gap) the RVA of the strong name signature hash, which comes straight after the CLI header. The RVA 0x2050 corresponds to file offset 0x250.

    The rest of the CLI header is mainly used in mixed-mode assemblies, and so is zeroed in this pure-IL assembly. After the CLI header comes the strong name hash, which is a SHA-1 hash of the assembly using the strong name key. After that come the bodies of all the methods in the assembly, concatenated together. Each method body starts off with a header, which I'll be looking at later. As you can see, this is a very small assembly with only 2 methods (an instance constructor and a Main method). After that, near the end of the .text section, comes the metadata, containing a metadata header and the 5 streams discussed above. We'll be looking at this in the next post.

    Conclusion

    The CLI header data doesn't have much to it, but we've covered some concepts that will be important in later posts - the logical structure of the CLR metadata and the overall layout of CLR data within the .text section. Next, I'll have a look at the contents of the #~ stream, and how the table data is arranged on disk.
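    To make the token layout described above concrete, here is a minimal C# sketch (my illustration, not from the original post) that splits a metadata token into its table and RID parts, using the entrypoint token from the CLI header as the example:

        using System;

        class TokenDemo
        {
            static void Main()
            {
                // The entrypoint token from the post (bytes 01 00 00 06 in the
                // file, read as a little-endian 4-byte value) = 0x06000001.
                uint token = 0x06000001;

                uint table = token >> 24;        // 0x06 = the MethodDef table
                uint rid = token & 0x00FFFFFF;   // 1-based row index; 0 would mean 'null'

                Console.WriteLine("Table 0x{0:X2}, RID {1}", table, rid);
                // Prints: Table 0x06, RID 1
            }
        }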

    Read the article

  • Automating deployments with the SQL Compare command line

    - by Jonathan Hickford
    In my previous article, “Five Tips to Get Your Organisation Releasing Software Frequently”, I looked at how teams can automate processes to speed up release frequency. In this post, I’m looking specifically at automating deployments using the SQL Compare command line.

    SQL Compare compares SQL Server schemas and deploys the differences. It works very effectively in scenarios where only one deployment target is required – source and target databases are specified, compared, and a change script is automatically generated and applied. But if multiple targets exist, and pressure to increase the frequency of releases builds, this solution quickly becomes unwieldy. This is where SQL Compare’s command line comes into its own.

    I’ve put together a PowerShell script that loops through the Servers table and pulls out the server and database names; these are then passed to sqlcompare.exe as target parameters. In the example, the source is a scripts folder – a folder structure of scripted-out database objects used by both SQL Source Control and SQL Compare. The script can easily be adapted to use schema snapshots.

        -- Create a DeploymentTargets database and a Servers table
        CREATE DATABASE DeploymentTargets
        GO
        USE DeploymentTargets
        GO
        CREATE TABLE [dbo].[Servers](
            [id] [int] IDENTITY(1,1) NOT NULL,
            [serverName] [nvarchar](50) NULL,
            [environment] [nvarchar](50) NULL,
            [databaseName] [nvarchar](50) NULL,
            CONSTRAINT [PK_Servers] PRIMARY KEY CLUSTERED ([id] ASC)
        )
        GO
        -- Now insert your target server and database details
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment1', N'mydb1')
        INSERT INTO dbo.Servers (serverName, environment, databaseName)
        VALUES (N'myserverinstance', N'myenvironment2', N'mydb2')

    Here’s the PowerShell script, which you can adapt for yourself as well.

        # We're holding the server names and database names that we want to deploy to in a database table.
        # We need to connect to that server to read these details.
        $serverName = ""
        $databaseName = "DeploymentTargets"
        $authentication = "Integrated Security=SSPI"
        #$authentication = "User Id=xxx;PWD=xxx" # If you are using database authentication instead of Windows authentication.

        # Path to the scripts folder we want to deploy to the databases
        $scriptsPath = "SimpleTalk"

        # Path to SQLCompare.exe
        $SQLComparePath = "C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe"

        # Create SQL connection string, and connection
        $ServerConnectionString = "Data Source=$serverName;Initial Catalog=$databaseName;$authentication"
        $ServerConnection = new-object system.data.SqlClient.SqlConnection($ServerConnectionString);

        # Create a Dataset to hold the DataTable
        $dataSet = new-object "System.Data.DataSet" "ServerList"

        # Create a query
        $query = "SET NOCOUNT ON;"
        $query += "SELECT serverName, environment, databaseName "
        $query += "FROM dbo.Servers; "

        # Create a DataAdapter to populate the DataSet with the results
        $dataAdapter = new-object "System.Data.SqlClient.SqlDataAdapter" ($query, $ServerConnection)
        $dataAdapter.Fill($dataSet) | Out-Null

        # Close the connection
        $ServerConnection.Close()

        # Populate the DataTable
        $dataTable = new-object "System.Data.DataTable" "Servers"
        $dataTable = $dataSet.Tables[0]

        # For every row in the DataTable
        $dataTable | FOREACH-OBJECT {
            "Server Name: $($_.serverName)"
            "Database Name: $($_.databaseName)"
            "Environment: $($_.environment)"

            # Compare the scripts folder to the database and synchronize the database to match.
            # NB. SQL Compare is set to abort on medium level warnings.
            $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/AbortOnWarnings:Medium") # + @("/sync" ) # Commented out the 'sync' parameter for safety.
            write-host $arguments
            & $SQLComparePath $arguments
            "Exit Code: $LASTEXITCODE"

            # Some interesting variations:

            # Check that every database matches a folder.
            # For example this might be a pre-deployment step to validate everything is at the same baseline state,
            # or a post-deployment script to validate the deployment worked.
            # An exit code of 0 means the databases are identical.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/Assertidentical")

            # Generate a report of the difference between the folder and each database, and generate a SQL update script for each database.
            # For example, use this after the above to generate upgrade scripts for each database.
            # Examine the warnings and the HTML diff report to understand how the script will change objects.
            #
            # $arguments = @("/scripts1:$($scriptsPath)", "/server2:$($_.serverName)", "/database2:$($_.databaseName)", "/ScriptFile:update_$($_.environment+"_"+$_.databaseName).sql", "/report:update_$($_.environment+"_"+$_.databaseName).html", "/reportType:Interactive", "/showWarnings", "/include:Identical")
        }

    It’s worth noting that the above example generates the deployment scripts dynamically. This approach should be problem-free for the vast majority of changes, but it is still good practice to review and test a pre-generated deployment script prior to deployment. An alternative approach would be to pre-generate a single deployment script using SQL Compare, and run this en masse against multiple targets programmatically using sqlcmd, or using a tool like SQL Multi Script. You can use the /ScriptFile, /report, and /showWarnings flags to generate change scripts, difference reports and any warnings; see the second commented-out example in the PowerShell above.

    There is a drawback to running a pre-generated deployment script: it assumes that a given database target hasn’t drifted from its expected state. Often there are (rightly or wrongly) many individuals within an organization who have permissions to alter the production database, and changes can therefore be made outside of the prescribed development processes. The consequence is that at deployment time, the applied script has been validated against a target that no longer represents reality. The solution here would be to add a check for drift prior to running the deployment script. This is achieved by using sqlcompare.exe to compare the target against the expected schema snapshot using the /Assertidentical flag. Should this return any differences (sqlcompare.exe exit code 79), a drift report is output instead of executing the deployment script – see the first commented-out example in the script above (and the .NET sketch at the end of this post).

    Any checks and processes that should be undertaken prior to a manual deployment should also happen during an automated deployment.
You might think about triggering backups prior to deployment – even better, automate the verification of the backup too.   You can use SQL Compare’s command line interface along with PowerShell to automate multiple actions and checks that you need in your deployment process. Automation is a practical solution where multiple targets and a higher release cadence come into play. As we know, with great power comes great responsibility – responsibility to ensure that the necessary checks are made so deployments remain trouble-free.  (The code sample supplied in this post automates the simple dynamic deployment case – if you are considering more advanced automation, e.g. the drift checks, script generation, deploying to large numbers of targets and backup/verification, please email me at [email protected] for further script samples or if you have further questions)
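    If you would rather orchestrate the same drift check from .NET instead of PowerShell, the exit-code contract translates directly. A hypothetical C# sketch (the paths, scripts folder and target names are placeholders; the exit codes 0 and 79 are as described above):

        using System;
        using System.Diagnostics;

        class DriftCheck
        {
            static void Main()
            {
                var psi = new ProcessStartInfo
                {
                    FileName = @"C:\Program Files (x86)\Red Gate\SQL Compare 10\sqlcompare.exe",
                    Arguments = "/scripts1:SimpleTalk /server2:myserverinstance /database2:mydb1 /Assertidentical",
                    UseShellExecute = false
                };
                using (var p = Process.Start(psi))
                {
                    p.WaitForExit();
                    // Exit code 0 = schemas identical; 79 = differences found (drift).
                    if (p.ExitCode == 79)
                        Console.WriteLine("Drift detected - aborting deployment.");
                }
            }
        }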

    Read the article

  • mac, netbeans 6.8, c++, sdl, opengl: compilation problems

    - by ufk
    Hiya. I'm trying to properly compile a C++ OpenGL+SDL application using NetBeans 6.8 under Snow Leopard 64-bit. I have libSDL 1.2.14 installed using MacPorts. The code I'm trying to compile is the following:

        #ifdef WIN32
        #define WIN32_LEAN_AND_MEAN
        #include <windows.h>
        #endif

        #if defined(__APPLE__) && defined(__MACH__)
        #include <OpenGL/gl.h>   // Header File For The OpenGL32 Library
        #include <OpenGL/glu.h>  // Header File For The GLu32 Library
        #else
        #include <GL/gl.h>       // Header File For The OpenGL32 Library
        #include <GL/glu.h>      // Header File For The GLu32 Library
        #endif

        #include "sdl/SDL.h"
        #include <stdio.h>
        #include <unistd.h>
        #include "SDL/SDL_main.h"

        SDL_Surface *screen = NULL;
        GLfloat rtri;   // Angle For The Triangle ( NEW )
        GLfloat rquad;  // Angle For The Quad ( NEW )

        // We call this right after our OpenGL window is created.
        void InitGL(int Width, int Height)
        {
            glViewport(0, 0, Width, Height);
            glClearColor(0.0f, 0.0f, 0.0f, 0.0f);  // This Will Clear The Background Color To Black
            glClearDepth(1.0);                     // Enables Clearing Of The Depth Buffer
            glDepthFunc(GL_LESS);                  // The Type Of Depth Test To Do
            glEnable(GL_DEPTH_TEST);               // Enables Depth Testing
            glShadeModel(GL_SMOOTH);               // Enables Smooth Color Shading
            glMatrixMode(GL_PROJECTION);
            glLoadIdentity();                      // Reset The Projection Matrix
            gluPerspective(45.0f, (GLfloat)Width/(GLfloat)Height, 0.1f, 100.0f); // Calculate The Aspect Ratio Of The Window
            glMatrixMode(GL_MODELVIEW);
        }

        /* The main drawing function. */
        int DrawGLScene()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT); // Clear The Screen And The Depth Buffer
            glLoadIdentity();                  // Reset The View
            glTranslatef(-1.5f, 0.0f, -6.0f);  // Move Left 1.5 Units And Into The Screen 6.0
            glRotatef(rtri, 0.0f, 1.0f, 0.0f); // Rotate The Triangle On The Y axis ( NEW )

            // draw a triangle
            glBegin(GL_TRIANGLES);             // Begin Drawing Triangles
            glColor3f(1.0f, 0.0f, 0.0f);       // Red
            glVertex3f( 0.0f,  1.0f,  0.0f);   // Top Of Triangle (Front)
            glColor3f(0.0f, 1.0f, 0.0f);       // Green
            glVertex3f(-1.0f, -1.0f,  1.0f);   // Left Of Triangle (Front)
            glColor3f(0.0f, 0.0f, 1.0f);       // Blue
            glVertex3f( 1.0f, -1.0f,  1.0f);   // Right Of Triangle (Front)
            glColor3f(1.0f, 0.0f, 0.0f);       // Red
            glVertex3f( 0.0f,  1.0f,  0.0f);   // Top Of Triangle (Right)
            glColor3f(0.0f, 0.0f, 1.0f);       // Blue
            glVertex3f( 1.0f, -1.0f,  1.0f);   // Left Of Triangle (Right)
            glColor3f(0.0f, 1.0f, 0.0f);       // Green
            glVertex3f( 1.0f, -1.0f, -1.0f);   // Right Of Triangle (Right)
            glColor3f(1.0f, 0.0f, 0.0f);       // Red
            glVertex3f( 0.0f,  1.0f,  0.0f);   // Top Of Triangle (Back)
            glColor3f(0.0f, 1.0f, 0.0f);       // Green
            glVertex3f( 1.0f, -1.0f, -1.0f);   // Left Of Triangle (Back)
            glColor3f(0.0f, 0.0f, 1.0f);       // Blue
            glVertex3f(-1.0f, -1.0f, -1.0f);   // Right Of Triangle (Back)
            glColor3f(1.0f, 0.0f, 0.0f);       // Red
            glVertex3f( 0.0f,  1.0f,  0.0f);   // Top Of Triangle (Left)
            glColor3f(0.0f, 0.0f, 1.0f);       // Blue
            glVertex3f(-1.0f, -1.0f, -1.0f);   // Left Of Triangle (Left)
            glColor3f(0.0f, 1.0f, 0.0f);       // Green
            glVertex3f(-1.0f, -1.0f,  1.0f);   // Right Of Triangle (Left)
            glEnd();

            glLoadIdentity();                   // Reset The Current Modelview Matrix
            glTranslatef(1.5f, 0.0f, -7.0f);    // Move Right 1.5 Units And Into The Screen 6.0
            glRotatef(rquad, 1.0f, 0.0f, 0.0f); // Rotate The Quad On The X axis ( NEW )

            glBegin(GL_QUADS);                  // Start Drawing Quads
            glColor3f(0.0f, 1.0f, 0.0f);        // Set The Color To Green
            glVertex3f( 1.0f,  1.0f, -1.0f);    // Top Right Of The Quad (Top)
            glVertex3f(-1.0f,  1.0f, -1.0f);    // Top Left Of The Quad (Top)
            glVertex3f(-1.0f,  1.0f,  1.0f);    // Bottom Left Of The Quad (Top)
            glVertex3f( 1.0f,  1.0f,  1.0f);    // Bottom Right Of The Quad (Top)
            glColor3f(1.0f, 0.5f, 0.0f);        // Set The Color To Orange
            glVertex3f( 1.0f, -1.0f,  1.0f);    // Top Right Of The Quad (Bottom)
            glVertex3f(-1.0f, -1.0f,  1.0f);    // Top Left Of The Quad (Bottom)
            glVertex3f(-1.0f, -1.0f, -1.0f);    // Bottom Left Of The Quad (Bottom)
            glVertex3f( 1.0f, -1.0f, -1.0f);    // Bottom Right Of The Quad (Bottom)
            glColor3f(1.0f, 0.0f, 0.0f);        // Set The Color To Red
            glVertex3f( 1.0f,  1.0f,  1.0f);    // Top Right Of The Quad (Front)
            glVertex3f(-1.0f,  1.0f,  1.0f);    // Top Left Of The Quad (Front)
            glVertex3f(-1.0f, -1.0f,  1.0f);    // Bottom Left Of The Quad (Front)
            glVertex3f( 1.0f, -1.0f,  1.0f);    // Bottom Right Of The Quad (Front)
            glColor3f(1.0f, 1.0f, 0.0f);        // Set The Color To Yellow
            glVertex3f( 1.0f, -1.0f, -1.0f);    // Bottom Left Of The Quad (Back)
            glVertex3f(-1.0f, -1.0f, -1.0f);    // Bottom Right Of The Quad (Back)
            glVertex3f(-1.0f,  1.0f, -1.0f);    // Top Right Of The Quad (Back)
            glVertex3f( 1.0f,  1.0f, -1.0f);    // Top Left Of The Quad (Back)
            glColor3f(0.0f, 0.0f, 1.0f);        // Set The Color To Blue
            glVertex3f(-1.0f,  1.0f,  1.0f);    // Top Right Of The Quad (Left)
            glVertex3f(-1.0f,  1.0f, -1.0f);    // Top Left Of The Quad (Left)
            glVertex3f(-1.0f, -1.0f, -1.0f);    // Bottom Left Of The Quad (Left)
            glVertex3f(-1.0f, -1.0f,  1.0f);    // Bottom Right Of The Quad (Left)
            glColor3f(1.0f, 0.0f, 1.0f);        // Set The Color To Violet
            glVertex3f( 1.0f,  1.0f, -1.0f);    // Top Right Of The Quad (Right)
            glVertex3f( 1.0f,  1.0f,  1.0f);    // Top Left Of The Quad (Right)
            glVertex3f( 1.0f, -1.0f,  1.0f);    // Bottom Left Of The Quad (Right)
            glVertex3f( 1.0f, -1.0f, -1.0f);    // Bottom Right Of The Quad (Right)
            glEnd();                            // Done Drawing A Quad

            rtri  += 0.02f;   // Increase The Rotation Variable For The Triangle ( NEW )
            rquad -= 0.015f;  // Decrease The Rotation Variable For The Quad ( NEW )

            // swap buffers to display, since we're double buffered.
            SDL_GL_SwapBuffers();
            return true;
        }

        int main(int argc, char* argv[])
        {
            int done;
            /* variable to hold the file name of the image to be loaded
               (in the real world, error handling code would precede this) */

            /* Initialize SDL for video output */
            if ( SDL_Init(SDL_INIT_VIDEO) < 0 ) {
                fprintf(stderr, "Unable to initialize SDL: %s\n", SDL_GetError());
                exit(1);
            }
            atexit(SDL_Quit);

            /* Create a 640x480 OpenGL screen */
            if ( SDL_SetVideoMode(640, 480, 0, SDL_OPENGL) == NULL ) {
                fprintf(stderr, "Unable to create OpenGL screen: %s\n", SDL_GetError());
                SDL_Quit();
                exit(2);
            }

            SDL_WM_SetCaption("another example", NULL);
            InitGL(640, 480);

            done = 0;
            while (!done) {
                DrawGLScene();
                SDL_Event event;
                while ( SDL_PollEvent(&event) ) {
                    if ( event.type == SDL_QUIT ) {
                        done = 1;
                    }
                    if ( event.type == SDL_KEYDOWN ) {
                        if ( event.key.keysym.sym == SDLK_ESCAPE ) {
                            done = 1;
                        }
                    }
                }
            }
        }

    Under the NetBeans project properties I configured the following:

    C++ Compiler: added /usr/X11/include and /opt/local/include to the include directories.

    Linker: I added the following libraries:

    /usr/X11/lib/libGL.dylib
    /usr/X11/lib/libGLU.dylib
    /opt/local/lib/libSDL.dylib
    /opt/local/lib/libSDLmain.a

    Now... before I included SDL_main.h and libSDLmain.a in the project, I got an "undefined reference to _main" error. Then I read here: http://www.libsdl.org/faq.php?action=listentries&category=7#55 that I need to include SDL_main.h and link libSDLmain into my project. After doing so, the project still won't compile.
    This is the NetBeans output:

        /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .clean-conf
        rm -f -r build/Debug
        rm -f dist/Debug/GNU-MacOSX/opengl2
        CLEAN SUCCESSFUL (total time: 79ms)
        /usr/bin/make -f nbproject/Makefile-Debug.mk SUBPROJECTS= .build-conf
        /usr/bin/make -f nbproject/Makefile-Debug.mk dist/Debug/GNU-MacOSX/opengl2
        mkdir -p build/Debug/GNU-MacOSX
        rm -f build/Debug/GNU-MacOSX/main.o.d
        g++ -c -g -I/usr/X11/include -I/opt/local/include -MMD -MP -MF build/Debug/GNU-MacOSX/main.o.d -o build/Debug/GNU-MacOSX/main.o main.cpp
        mkdir -p dist/Debug/GNU-MacOSX
        g++ -o dist/Debug/GNU-MacOSX/opengl2 build/Debug/GNU-MacOSX/main.o /opt/local/lib/libIL.dylib /opt/local/lib/libILU.dylib /opt/local/lib/libILUT.dylib /usr/X11/lib/libGL.dylib /usr/X11/lib/libGLU.dylib /opt/local/lib/libSDL.dylib /opt/local/lib/libSDLmain.a
        Undefined symbols:
          "_OBJC_CLASS_$_NSMenu", referenced from: __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "__objc_empty_cache", referenced from: _OBJC_METACLASS_$_SDLMain in libSDLmain.a(SDLMain.o) _OBJC_CLASS_$_SDLMain in libSDLmain.a(SDLMain.o)
          "_CFBundleGetMainBundle", referenced from: -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o)
          "_CFURLGetFileSystemRepresentation", referenced from: -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o)
          "_NSApp", referenced from: _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSProcessInfo", referenced from: __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "_CFURLCreateCopyDeletingLastPathComponent", referenced from: -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o)
          "_NSAllocateMemoryPages", referenced from: -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o)
          "___CFConstantStringClassReference", referenced from: cfstring=CFBundleName in libSDLmain.a(SDLMain.o) cfstring= in libSDLmain.a(SDLMain.o) cfstring=About in libSDLmain.a(SDLMain.o) cfstring=Hide in libSDLmain.a(SDLMain.o) cfstring=h in libSDLmain.a(SDLMain.o) cfstring=Hide Others in libSDLmain.a(SDLMain.o) cfstring=Show All in libSDLmain.a(SDLMain.o) cfstring=Quit in libSDLmain.a(SDLMain.o) cfstring=q in libSDLmain.a(SDLMain.o) cfstring=Window in libSDLmain.a(SDLMain.o) cfstring=m in libSDLmain.a(SDLMain.o) cfstring=Minimize in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSAutoreleasePool", referenced from: __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "_CPSEnableForegroundOperation", referenced from: _main in libSDLmain.a(SDLMain.o)
          "_CPSGetCurrentProcess", referenced from: _main in libSDLmain.a(SDLMain.o)
          "_CFBundleCopyBundleURL", referenced from: -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o)
          "_NSDeallocateMemoryPages", referenced from: -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSApplication", referenced from: l_OBJC_$_CATEGORY_NSApplication_$_SDLApplication in libSDLmain.a(SDLMain.o) __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "_CPSSetFrontProcess", referenced from: _main in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSString", referenced from: l_OBJC_$_CATEGORY_NSString_$_ReplaceSubString in libSDLmain.a(SDLMain.o) __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSObject", referenced from: _OBJC_CLASS_$_SDLMain in libSDLmain.a(SDLMain.o)
          "_CFBundleGetInfoDictionary", referenced from: _main in libSDLmain.a(SDLMain.o)
          "_CFRelease", referenced from: -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o) -[SDLMain setupWorkingDirectory:] in libSDLmain.a(SDLMain.o)
          "__objc_empty_vtable", referenced from: _OBJC_METACLASS_$_SDLMain in libSDLmain.a(SDLMain.o) _OBJC_CLASS_$_SDLMain in libSDLmain.a(SDLMain.o)
          "_OBJC_CLASS_$_NSMenuItem", referenced from: __objc_classrefs__DATA@0 in libSDLmain.a(SDLMain.o)
          "_objc_msgSend", referenced from: -[SDLMain application:openFile:] in libSDLmain.a(SDLMain.o) -[SDLMain applicationDidFinishLaunching:] in libSDLmain.a(SDLMain.o) -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o) -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o) -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o) -[NSString(ReplaceSubString) stringByReplacingRange:with:] in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o) _main in libSDLmain.a(SDLMain.o)
          "_OBJC_METACLASS_$_NSObject", referenced from: _OBJC_METACLASS_$_SDLMain in libSDLmain.a(SDLMain.o) _OBJC_METACLASS_$_SDLMain in libSDLmain.a(SDLMain.o)
          "_objc_msgSend_fixup", referenced from: l_objc_msgSend_fixup_objectForKey_ in libSDLmain.a(SDLMain.o) l_objc_msgSend_fixup_length in libSDLmain.a(SDLMain.o) l_objc_msgSend_fixup_alloc in libSDLmain.a(SDLMain.o) l_objc_msgSend_fixup_release in libSDLmain.a(SDLMain.o)
        ld: symbol(s) not found
        collect2: ld returned 1 exit status
        make[2]: *** [dist/Debug/GNU-MacOSX/opengl2] Error 1
        make[1]: *** [.build-conf] Error 2
        make: *** [.build-impl] Error 2
        BUILD FAILED (exit value 2, total time: 263ms)

    Any ideas? Thanks a lot!

    Read the article

  • ASP.NET MVC 3 Hosting :: Error Handling and CustomErrors in ASP.NET MVC 3 Framework

    - by C. Miller
    So, what else is new in MVC 3? MVC 3 now has a GlobalFilterCollection that is automatically populated with a HandleErrorAttribute. This default filter attribute brings with it a new way of handling errors in your web applications. In short, you can now handle errors inside of the MVC pipeline.

    What does that mean? This gives you direct programmatic control over handling your 500 errors, in the same way that ASP.NET and CustomErrors give you configurable control over handling your HTTP error codes. How does that work out? Think of it as a routing table specifically for your exceptions - it's pretty sweet!

    Global Filters

    The new Global.asax file now has a RegisterGlobalFilters method that is used to add filters to the new GlobalFilterCollection, statically located at System.Web.Mvc.GlobalFilter.Filters. By default this method adds one filter, the HandleErrorAttribute.

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute());
            }

    HandleErrorAttributes

    The HandleErrorAttribute is pretty simple in concept: MVC has already gotten us used to filter attributes through AcceptVerbs and RequiresAuthorization; now we are going to use them for (as the name implies) error handling, and we are going to do so on a (also as the name implies) global scale. The HandleErrorAttribute has properties for ExceptionType, View, and Master. ExceptionType allows you to specify which exception the attribute should handle. View allows you to specify which error view (page) you want it to redirect to. Last but not least, Master allows you to control which master page (or, as Razor refers to them, Layout) you want to render with, even if that means overriding the default layout specified in the view itself.

        public class MvcApplication : System.Web.HttpApplication
        {
            public static void RegisterGlobalFilters(GlobalFilterCollection filters)
            {
                filters.Add(new HandleErrorAttribute
                {
                    ExceptionType = typeof(DbException),
                    // DbError.cshtml is a view in the Shared folder.
                    View = "DbError",
                    Order = 2
                });
                filters.Add(new HandleErrorAttribute());
            }

    Error Views

    All of your views still work like they did in the previous version of MVC (except of course that they can now use the Razor engine). However, a view that is used to render an error cannot have a specified model! This is because it already has a model, and that is System.Web.Mvc.HandleErrorInfo.

        @model System.Web.Mvc.HandleErrorInfo
        @{
            ViewBag.Title = "DbError";
        }
        <h2>A Database Error Has Occurred</h2>
        @if (Model != null)
        {
            <p>@Model.Exception.GetType().Name<br />
            thrown in @Model.ControllerName @Model.ActionName</p>
        }

    Errors Outside of the MVC Pipeline

    The HandleErrorAttribute will only handle errors that happen inside of the MVC pipeline, better known as 500 errors. Errors outside of the MVC pipeline are still handled the way they have always been with ASP.NET: you turn on custom errors, specify error codes and paths to error pages, and so on. It is important to remember that these will apply for anything and everything outside of what the HandleErrorAttribute handles. Also, they will apply whenever an error is not handled with the HandleErrorAttribute from inside of the pipeline.
        <system.web>
            <customErrors mode="On" defaultRedirect="~/error">
                <error statusCode="404" redirect="~/error/notfound"></error>
            </customErrors>

    Sample Controllers

        public class ExampleController : Controller
        {
            public ActionResult Exception()
            {
                throw new ArgumentNullException();
            }
            public ActionResult Db()
            {
                // Inherits from DbException
                throw new MyDbException();
            }
        }

        public class ErrorController : Controller
        {
            public ActionResult Index()
            {
                return View();
            }
            public ActionResult NotFound()
            {
                return View();
            }
        }

    Putting It All Together

    If we have all the code above included in our MVC 3 project, here is how the following scenarios will play out:

    1. A controller action throws an Exception. You will remain on the current page and the global HandleErrorAttributes will render the Error view.
    2. A controller action throws any type of DbException. You will remain on the current page and the global HandleErrorAttributes will render the DbError view.
    3. Go to a non-existent page. You will be redirected to the Error controller's NotFound action by the CustomErrors configuration for HTTP status code 404.

    But don't take my word for it - download the sample project and try it yourself.

    Three Important Lessons Learned

    For the most part this is all pretty straightforward, but there are a few gotchas you should remember to watch out for:

    1) Error views have models, but they must be of type HandleErrorInfo. It is confusing at first to think that you can't control the M in an MVC page, but it's for a good reason. Errors can come from any action in any controller, and no redirect is taking place, so the view engine is just going to render an error view with the only data it has: the HandleErrorInfo model. Do not try to set the model on your error page or pass in a different object through a controller action - it will just blow up and cause a second exception after your first exception!

    2) When the HandleErrorAttribute renders a page, it does not pass through a controller or an action. The standard web.config CustomErrors literally redirect a failed request to a new page. The HandleErrorAttribute is just rendering a view, so it is not going to pass through a controller action. But that's OK! Remember, a controller's job is to get the model for a view, but an error already has a model ready to give to the view, thus there is no need to pass through a controller. That being said, the normal ASP.NET custom errors still need to route through controllers. So if you want to share an error page between the HandleErrorAttribute and your web.config redirects, you will need to create a controller action and route for it (see the sketch below). But then, when you render that error view from your action, you can only use the HandleErrorInfo model or the ViewData dictionary to populate your page.

    3) The HandleErrorAttribute obeys whether CustomErrors are on or off, but does not use their redirects. If you turn CustomErrors off in your web.config, the HandleErrorAttributes will stop handling errors. However, that is the only configuration these two mechanisms share. The HandleErrorAttribute will not use your defaultRedirect property, or any other errors registered with custom errors.

    In Summary

    The HandleErrorAttribute is for displaying 500 errors that were caused by exceptions inside of the MVC pipeline. CustomErrors are for redirecting to error pages when other HTTP error codes occur.
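    To flesh out lesson 2, here is a hypothetical sketch (my addition, not from the original article) of a controller action that serves the same Error view the HandleErrorAttribute renders, so it can also be the target of a web.config redirect:

        public class SharedErrorController : Controller
        {
            // Target this action from customErrors' defaultRedirect (route assumed).
            // The HandleErrorAttribute renders the "Error" view directly and never hits this action.
            public ActionResult Index()
            {
                var model = new HandleErrorInfo(
                    new Exception("An unspecified error occurred."), // placeholder: the original exception is lost after a redirect
                    "Unknown",   // controller name is unknown at this point
                    "Unknown");  // action name is unknown at this point
                return View("Error", model);
            }
        }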

    Read the article

  • JSP Precompilation for ADF Applications

    - by Duncan Mills
    A question that comes up from time to time, particularly in relation to build automation, is how best to pre-compile the .jspx and .jsff files in an ADF application, thus ensuring that the app is ready to run as soon as it's installed into WebLogic. In the normal run of things, the first poor soul to hit a page pays the price and has to wait a little whilst the JSP is compiled into a servlet. Everyone else subsequently gets a free lunch. So it's a reasonable thing to want to do...

    Let Me List the Ways

    So forth to Google (other search engines are available)... which led me to a fairly old article on WLDJ - Removing Performance Bottlenecks Through JSP Precompilation. Technology-wise it's somewhat out of date, but the one good point it made is that it's really not very useful to try and use the precompile option in the weblogic.xml file. That's a really good observation - particularly if you're trying to integrate a pre-compile step into a Hudson Continuous Integration process. That same article mentioned an alternative approach for programmatic pre-compilation using weblogic.jspc. This seemed like a much more useful approach for a CI environment. However, weblogic.jspc has since been superseded by weblogic.appc, so we'll use that instead. Thanks to Steve for the pointer there.

    And So To APPC

    APPC has documentation - always a great place to start - and supports usage both from Ant, via the wlappc task, and from the command line using the weblogic.appc command. In my testing I took the latter approach. Usage, as the documentation will show you, is superficially pretty simple. The nice thing here is that you can pass an existing EAR file (generated, of course, using OJDeploy) and that EAR will be updated in place with the freshly compiled servlet classes created from the JSPs. Appc takes care of all the unpacking, compiling and re-packing of the EAR for you. Neat. So we're done, right...? Not quite.

    The Devil is in the Detail

    OK, so I'm being overly dramatic, but it's not all plain sailing, so here's a short guide to using weblogic.appc to compile a simple ADF application without pain.

    Information You'll Need

    The following is based on the assumption that you have a stand-alone WLS install with the Application Development Runtime installed and a suitable ADF-enabled domain created. This could of course all be run off of a JDeveloper install as well.

    1. Your WebLogic home directory. Everything you need is relative to this, so make a note. In my case it's c:\builds\wls_ps4.

    2. Next deploy your EAR as normal and have a peek inside it using your favourite zip management tool. First of all look at the weblogic-application.xml inside the EAR /META-INF directory. Have a look for any library references. Something like this:

        <library-ref>
           <library-name>adf.oracle.domain</library-name>
        </library-ref>

    Make a note of the library ref (adf.oracle.domain in this case), you'll need that in a second.

    3. Next open the nested WAR file within the EAR and then have a peek inside the weblogic.xml file in the /WEB-INF directory. Again make a note of the library references.

    4. Now start WebLogic as per normal and run the WebLogic console app (e.g. http://localhost:7001/console). In the Domain Structure navigator, select Deployments.

    5. For each of the libraries you noted down, drill into the library definition and make a note of the .war, .ear or .jar that defines the library. For example, in my case adf.oracle.domain maps to "C:\ builds\ WLS_PS4\ oracle_common\ modules\ oracle. adf. model_11. 1. 1\ adf. oracle. domain. ear". Note the extra spaces that are salted throughout this string as it is displayed in the console - just to make it annoying, you'll have to strip these out.

    6. Finally you'll need the location of adfsharembean.jar. We need to pass this on the classpath for APPC so that the ADFConfigLifeCycleCallBack listener can be found. In a more complex app of your own you may need additional classpath entries as well.

    Now we're ready to go, and it's a simple matter of applying the information we have gathered into the relevant command line arguments for the utility.

    A Simple CMD File to Run APPC

    Here's the stub .cmd file I'm using on Windows to run this.

        @echo off
        REM Stub weblogic.appc Runner
        setlocal
        set WLS_HOME=C:\builds\WLS_PS4
        set ADF_LIB_ROOT=%WLS_HOME%\oracle_common\modules
        set COMMON_LIB_ROOT=%WLS_HOME%\wlserver_10.3\common\deployable-libraries
        set ADF_WEBAPP=%ADF_LIB_ROOT%\oracle.adf.view_11.1.1\adf.oracle.domain.webapp.war
        set ADF_DOMAIN=%ADF_LIB_ROOT%\oracle.adf.model_11.1.1\adf.oracle.domain.ear
        set JSTL=%COMMON_LIB_ROOT%\jstl-1.2.war
        set JSF=%COMMON_LIB_ROOT%\jsf-1.2.war
        set ADF_SHARE=%ADF_LIB_ROOT%\oracle.adf.share_11.1.1\adfsharembean.jar
        REM Set up the WebLogic Environment so appc can be found
        call %WLS_HOME%\wlserver_10.3\server\bin\setWLSEnv.cmd
        CLS
        REM Now compile away!
        java weblogic.appc -verbose -library %ADF_WEBAPP%,%ADF_DOMAIN%,%JSTL%,%JSF% -classpath %ADF_SHARE% %1
        endlocal

    Running the above on a target ADF .ear file will zip through and create all of the relevant compiled classes inside your nested .war file in the \WEB-INF\classes\jsp_servlet\ directory (but don't take my word for it, run it and take a look!)

    And So...

    In the immortal words of the Pet Shop Boys, Was It Worth It? Well, here's where you'll have to do your own testing. In my case, with a simple ADF application, pre-compilation shaved a non-scientific "3 Elephants" off of the initial page load time for the first access of each page. That's a pretty significant payback for such a simple step to add into your CI process, so why not give it a go?
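    Since APPC is also exposed as the wlappc Ant task mentioned above, here is a hedged sketch of the build-file equivalent for CI jobs that already run Ant. The taskdef class name is the documented one, but treat the attribute names as assumptions to double-check against the APPC documentation for your WLS version; MyApp.ear and the property names are placeholders:

        <!-- Sketch only: run APPC from Ant instead of the .cmd stub above -->
        <taskdef name="wlappc" classname="weblogic.ant.taskdefs.j2ee.Appc"
                 classpath="${wls.home}/wlserver_10.3/server/lib/weblogic.jar"/>

        <target name="precompile">
          <!-- the EAR is updated in place, as described above -->
          <wlappc source="${dist.dir}/MyApp.ear" verbose="true"/>
        </target>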

    Read the article

  • Python - Converting CSV to Objects - Code Design

    - by victorhooi
    Hi, I have a small script we're using to read in a CSV file containing employees, and perform some basic manipulations on that data. We read in the data (import_gd_dump), and create an Employees object, containing a list of Employee objects (maybe I should think of a better naming convention...lol). We then call clean_all_phone_numbers() on Employees, which calls clean_phone_number() on each Employee, as well as lookup_all_supervisors(), on Employees.

        import csv
        import re
        import sys

        #class CSVLoader:
        #    """Virtual class to assist with loading in CSV files."""
        #    def import_gd_dump(self, input_file='Gp Directory 20100331 original.csv'):
        #        gd_extract = csv.DictReader(open(input_file), dialect='excel')
        #        employees = []
        #        for row in gd_extract:
        #            curr_employee = Employee(row)
        #            employees.append(curr_employee)
        #        return employees
        #        #self.employees = {row['dbdirid']:row for row in gd_extract}

        # Previously, this was inside a (virtual) class called "CSVLoader".
        # However, according to here (http://tomayko.com/writings/the-static-method-thing) - the idiomatic
        # way of doing this in Python is not with a class function but with a module-level function
        def import_gd_dump(input_file='Gp Directory 20100331 original.csv'):
            """Return a list ('employee') of dict objects, taken from a Group Directory CSV file."""
            gd_extract = csv.DictReader(open(input_file), dialect='excel')
            employees = []
            for row in gd_extract:
                employees.append(row)
            return employees

        def write_gd_formatted(employees_dict, output_file="gd_formatted.csv"):
            """Read in an Employees() object, and write out each Employee() inside this to a CSV file"""
            gd_output_fieldnames = ('hrid', 'mail', 'givenName', 'sn', 'dbcostcenter', 'dbdirid',
                                    'hrreportsto', 'PHFull', 'PHFull_message', 'SupervisorEmail',
                                    'SupervisorFirstName', 'SupervisorSurname')
            try:
                gd_formatted = csv.DictWriter(open(output_file, 'w', newline=''),
                                              fieldnames=gd_output_fieldnames,
                                              extrasaction='ignore', dialect='excel')
            except IOError:
                print('Unable to open file, IO error (Is it locked?)')
                sys.exit(1)
            headers = {n:n for n in gd_output_fieldnames}
            gd_formatted.writerow(headers)
            for employee in employees_dict.employee_list:
                # We're using the employee object's inbuilt __dict__ attribute - hmm, is this good practice?
                gd_formatted.writerow(employee.__dict__)

        class Employee:
            """An Employee in the system, with employee attributes (name, email, cost-centre etc.)"""
            def __init__(self, employee_attributes):
                """We use the Employee constructor to convert a dictionary into instance attributes."""
                for k, v in employee_attributes.items():
                    setattr(self, k, v)

            def clean_phone_number(self):
                """Perform some rudimentary checks and corrections, to make sure numbers are in the right format.
                Numbers should be in the form 0XYYYYYYYY, where X is the area code, and Y is the local number."""
                if self.telephoneNumber is None or self.telephoneNumber == '':
                    return '', 'Missing phone number.'
                else:
                    standard_format = re.compile(r'^\+(?P<intl_prefix>\d{2})\((?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    extra_zero = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})-(?P<local_second_half>\d{4})')
                    missing_hyphen = re.compile(r'^\+(?P<intl_prefix>\d{2})\(0(?P<area_code>\d)\)(?P<local_first_half>\d{4})(?P<local_second_half>\d{4})')
                    if standard_format.search(self.telephoneNumber):
                        result = standard_format.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), ''
                    elif extra_zero.search(self.telephoneNumber):
                        result = extra_zero.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Extra zero in area code - ask user to remediate.'
                    elif missing_hyphen.search(self.telephoneNumber):
                        result = missing_hyphen.search(self.telephoneNumber)
                        return '0' + result.group('area_code') + result.group('local_first_half') + result.group('local_second_half'), 'Missing hyphen in local component - ask user to remediate.'
                    else:
                        return '', "Number didn't match recognised format. Original text is: " + self.telephoneNumber

        class Employees:
            def __init__(self, import_list):
                self.employee_list = []
                for employee in import_list:
                    self.employee_list.append(Employee(employee))

            def clean_all_phone_numbers(self):
                for employee in self.employee_list:
                    # Should we just set this directly in Employee.clean_phone_number() instead?
                    employee.PHFull, employee.PHFull_message = employee.clean_phone_number()

            # Hmm, the search is O(n^2) - there's probably a better way of doing this search?
            def lookup_all_supervisors(self):
                for employee in self.employee_list:
                    if employee.hrreportsto is not None and employee.hrreportsto != '':
                        for supervisor in self.employee_list:
                            if supervisor.hrid == employee.hrreportsto:
                                (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = supervisor.mail, supervisor.givenName, supervisor.sn
                                break
                        else:
                            (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not found.', 'Supervisor not found.', 'Supervisor not found.')
                    else:
                        (employee.SupervisorEmail, employee.SupervisorFirstName, employee.SupervisorSurname) = ('Supervisor not set.', 'Supervisor not set.', 'Supervisor not set.')

            # Is there a more pythonic way of doing this?
            def print_employees(self):
                for employee in self.employee_list:
                    print(employee.__dict__)

        if __name__ == '__main__':
            db_employees = Employees(import_gd_dump())
            db_employees.clean_all_phone_numbers()
            db_employees.lookup_all_supervisors()
            #db_employees.print_employees()
            write_gd_formatted(db_employees)

    Firstly, my preamble question is: can you see anything inherently wrong with the above, from either a class design or Python point of view? Is the logic/design sound? Anyhow, to the specifics: The Employees object has a method, clean_all_phone_numbers(), which calls clean_phone_number() on each Employee object inside it. Is this bad design? If so, why? Also, is the way I'm calling lookup_all_supervisors() bad? Originally, I wrapped the clean_phone_number() and lookup_supervisor() methods in a single function, with a single for-loop inside it. clean_phone_number is O(n), I believe, and lookup_supervisor is O(n^2) - is it ok splitting it into two loops like this? In clean_all_phone_numbers(), I'm looping over the Employee objects and setting their values using return/assignment - should I be setting this inside clean_phone_number() itself? There are also a few things that I sort of hacked out, not sure if they're bad practice - e.g. print_employees() and write_gd_formatted() both use __dict__, and the constructor for Employee uses setattr() to convert a dictionary into instance attributes. I'd value any thoughts at all. If you think the questions are too broad, let me know and I can repost as several split-up questions (I just didn't want to pollute the boards with multiple similar questions, and the three questions are more or less fairly tightly related). Cheers, Victor
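    On the O(n^2) worry raised above, one common alternative - offered here as a hedged sketch, untested against this exact script - is to index the employees by hrid once in a dict, so each supervisor lookup drops to O(1) and the whole pass becomes O(n):

        # Sketch only: build the hrid index once, then look supervisors up in O(1)
        def lookup_all_supervisors(self):
            by_hrid = {e.hrid: e for e in self.employee_list}
            for employee in self.employee_list:
                if employee.hrreportsto:  # covers both None and ''
                    supervisor = by_hrid.get(employee.hrreportsto)
                    if supervisor is not None:
                        employee.SupervisorEmail = supervisor.mail
                        employee.SupervisorFirstName = supervisor.givenName
                        employee.SupervisorSurname = supervisor.sn
                    else:
                        employee.SupervisorEmail = employee.SupervisorFirstName = \
                            employee.SupervisorSurname = 'Supervisor not found.'
                else:
                    employee.SupervisorEmail = employee.SupervisorFirstName = \
                        employee.SupervisorSurname = 'Supervisor not set.'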

    Read the article

  • HTML5 Form: Page Is Reloading Instantly After Restyling (And It Shouldn't Be)

    - by user3689753
    I have a form. I have made it so that if your name is not put in, a red border is put on the name field. That works, BUT... it only appears for a split second, and then the page reloads. I want the red border to appear and then stay there. Can someone help me make it so the page doesn't reload after displaying the red border? Here's the script.

        window.onload = function() {
            document.getElementById("Hogwarts").onsubmit = function () {
                window.alert("Form submitted. Owl being sent...");
                var fname = document.getElementById("fName");
                if (!fName.value.match("^[A-Z][A-Za-z]+( [A-Z][A-Za-z]*)*$")) {
                    window.alert("You must enter your name.");
                    addClass(fName, "errorDisp");
                    document.getElementById("fName").focus();
                } else return true;
            }
        }

        function addClass(element, classToAdd) {
            var currentClassValue = element.className;
            if (currentClassValue.indexOf(classToAdd) == -1) {
                if ((currentClassValue == null) || (currentClassValue === "")) {
                    element.className = classToAdd;
                } else {
                    element.className += " " + classToAdd;
                }
            }
        }

        function removeClass(element, classToRemove) {
            var currentClassValue = element.className;
            if (currentClassValue == classToRemove) {
                element.className = "";
                return;
            }
            var classValues = currentClassValue.split(" ");
            var filteredList = [];
            for (var i = 0; i < classValues.length; i++) {
                if (classToRemove != classValues[i]) {
                    filteredList.push(classValues[i]);
                }
            }
            element.className = filteredList.join(" ");
        }

    Here's the HTML.

        <!DOCTYPE HTML>
        <html lang="en">
        <head>
            <meta charset="UTF-8" />
            <meta name="viewport" content="width=device-width, initial-scale=1, maximum-scale=1">
            <title>Hogwarts School of Witchcraft And Wizardry Application Form</title>
            <link rel="stylesheet" type="text/css" href="main.css" media="screen"/>
            <script src="script.js" type="text/javascript"></script>
        </head>
        <body>
        <section>
            <header>
                <h1>Hogwarts School of Witchcraft And Wizardry</h1>
                <nav></nav>
            </header>
            <main>
                <form method="post" id="Hogwarts">
                <!--<form action="showform.php" method="post" id="Hogwarts">--!>
                    <fieldset id="aboutMe">
                        <legend id="aboutMeLeg">About Me</legend>
                        <div class="fieldleading">
                            <label for="fName" class="labelstyle">First name:</label>
                            <input type="text" id="fName" name="fName" autofocus maxlength="50" value="" placeholder="First Name" size="30">
                            <label for="lName" class="labelstyle">Last name:</label>
                            <input type="text" id="lName" name="lName" required maxlength="50" value="" placeholder="Last Name" pattern="^[A-Za-z ]{3,}$" size="30">
                            <label for="age" class="labelstyle">Age:</label>
                            <input type="number" id="age" name="age" required min="17" step="1" max="59" value="" placeholder="Age">
                        </div>
                        <div class="fieldleading">
                            <label for="date" class="labelstyle">Date Of Birth:</label>
                            <input type="date" name="date1" id="date" required autofocus value="">
                        </div>
                        <div id="whitegender">
                            <div class="fieldleading">
                                <label class="labelstyle">Gender:</label>
                            </div>
                            <input type="radio" name="sex" value="male" class="gender" required="required">Male<br/>
                            <input type="radio" name="sex" value="female" class="gender" required="required">Female<br/>
                            <input type="radio" name="sex" value="other" class="gender" required="required">Other
                        </div>
                    </fieldset>
                    <fieldset id="contactInfo">
                        <legend id="contactInfoLeg">Contact Information</legend>
                        <div class="fieldleading">
                            <label for="street" class="labelstyle">Street Address:</label>
                            <input type="text" id="street" name="street" required autofocus maxlength="50" value="" placeholder="Street Address" pattern="^[0-9A-Za-z\. ]+{5,}$" size="35">
                            <label for="city" class="labelstyle">City:</label>
                            <input type="text" id="city" name="city" required autofocus maxlength="30" value="" placeholder="City" pattern="^[A-Za-z ]{3,}$" size="35">
                            <label for="State" class="labelstyle">State:</label>
                            <select required id="State" name="State">
                                <option value="Select Your State">Select Your State</option>
                                <option value="Delaware">Delaware</option>
                                <option value="Pennsylvania">Pennsylvania</option>
                                <option value="New Jersey">New Jersey</option>
                                <option value="Georgia">Georgia</option>
                                <option value="Connecticut">Connecticut</option>
                                <option value="Massachusetts">Massachusetts</option>
                                <option value="Maryland">Maryland</option>
                                <option value="New Hampshire">New Hampshire</option>
                                <option value="New York">New York</option>
                                <option value="Virginia">Virginia</option>
                            </select>
                        </div>
                        <div class="fieldleading">
                            <label for="zip" class="labelstyle">5-Digit Zip Code:</label>
                            <input id="zip" name="zip" required autofocus maxlength="5" value="" placeholder="Your Zip Code" pattern="^\d{5}$">
                            <label for="usrtel" class="labelstyle">10-Digit Telephone Number:</label>
                            <input type="tel" name="usrtel" id="usrtel" required autofocus value="" placeholder="123-456-7890" pattern="^\d{3}[\-]\d{3}[\-]\d{4}$">
                        </div>
                        <div class="fieldleading">
                            <label for="email1" class="labelstyle">Email:</label>
                            <input type="email" name="email1" id="email1" required autofocus value="" placeholder="[email protected]" pattern="^[a-z0-9._%+-]+@[a-z0-9.-]+\.[a-z]{2,4}$" size="35">
                            <label for="homepage1" class="labelstyle">Home Page:</label>
                            <input type="url" name="homepage1" id="homepage1" required autofocus value="" placeholder="http://www.hp.com" pattern="https?://.+" size="35">
                        </div>
                    </fieldset>
                    <fieldset id="yourInterests">
                        <legend id="yourInterestsLeg">Your Interests</legend>
                        <label for="Major" class="labelstyle">Major/Program Choice:</label>
                        <select required id="Major" name="Major">
                            <option value="">Select Your Major</option>
                            <option value="Magic1">Magic Horticulture</option>
                            <option value="Magic2">Black Magic</option>
                            <option value="White">White Magic</option>
                            <option value="Blue">Blue Magic</option>
                            <option value="Non">Non-Wizardry Studies</option>
                        </select>
                    </fieldset>
                    <button type="submit" value="Submit" class="submitreset">Submit</button>
                    <button type="reset" value="Reset" class="submitreset">Reset</button>
                </form>
            </main>
            <footer>
                &copy; 2014 Bennett Nestok
            </footer>
        </section>
        </body>
        </html>

    Here's the CSS.

        a:link { text-decoration: none !important; color:black !important; }
        a:visited { text-decoration: none !important; color:red !important; }
        a:active { text-decoration: none !important; color:green !important; }
        a:hover { text-decoration: none !important; color:blue !important; background-color:white !important; }
        ::-webkit-input-placeholder { color: #ffffff; } /* gray80 */
        :-moz-placeholder { color: #ffffff; } /* Firefox 18- (one color) */
        ::-moz-placeholder { color: #ffffff; } /* Firefox 19+ (double colons) */
        :-ms-input-placeholder { color: #ffffff; }
        body { margin: 0px auto; text-align:center; background-color:grey; font-weight:normal; font-size:12px; font-family: verdana; color:black; background-image:url('bgtexture.jpg'); background-repeat:repeat; }
        footer { text-align:center; margin: 0px auto; bottom:0px; position:absolute; width:100%; color:white; background-color:black; height:20px; padding-top:4px; }
        h1 { color:white; text-align:center; margin: 0px auto; margin-bottom:50px; width:100%; background-color:black; padding-top: 13px; padding-bottom: 14px; -moz-box-shadow: 0 1px 3px rgba(0,0,0,0.5); -webkit-box-shadow: 0 1px 3px rgba(0,0,0,0.5); text-shadow: 0 -1px 1px rgba(0,0,0,0.25); }
        button.submitreset { -moz-border-radius: 400px; -webkit-border-radius: 400px; border-radius: 400px; -moz-box-shadow: 0 1px 3px rgba(0,0,0,0.5); -webkit-box-shadow: 0 1px 3px rgba(0,0,0,0.5); text-shadow: 0 -1px 1px rgba(0,0,0,0.25); }
        .labelstyle { background-color:#a7a7a7; color:black; -moz-border-radius: 400px; -webkit-border-radius: 400px; border-radius: 400px; padding:3px 3px 3px 3px; }
        #aboutMe, #contactInfo, #yourInterests { margin-bottom:30px; text-align:left !important; padding: 10px 10px 10px 10px; }
        #Hogwarts { text-align:center; margin:0px auto; width:780px; padding-top: 20px !important; padding-bottom: 20px !important; background: -webkit-linear-gradient(#474747, grey); /* For Safari 5.1 to 6.0 */ background: -o-linear-gradient(#474747, grey); /* For Opera 11.1 to 12.0 */ background: -moz-linear-gradient(#474747, grey); /* For Firefox 3.6 to 15 */ background: linear-gradient(#474747, grey); /* Standard syntax */ border-color:black; border-style: solid; border-width: 2px; -moz-box-shadow: 0 1px 3px rgba(0,0,0,0.5); -webkit-box-shadow: 0 1px 3px rgba(0,0,0,0.5); text-shadow: 0 -1px 1px rgba(0,0,0,0.25); }
        @media (max-width: 800px) {
            .labelstyle { display: none; }
            #Hogwarts { width:300px; }
            h1 { width:304px; margin-bottom:0px; }
            .fieldleading { margin-bottom:0px !important; }
            ::-webkit-input-label { /* WebKit browsers */ color: transparent; }
            :-moz-label { /* Mozilla Firefox 4 to 18 */ color: transparent; }
            ::-moz-label { /* Mozilla Firefox 19+ */ color: transparent; }
            :-ms-input-label { /* Internet Explorer 10+ */ color: transparent; }
            ::-webkit-input-placeholder { /* WebKit browsers */ color: grey !important; }
            :-moz-placeholder { /* Mozilla Firefox 4 to 18 */ color: grey !important; }
            ::-moz-placeholder { /* Mozilla Firefox 19+ */ color: grey !important; }
            :-ms-input-placeholder { /* Internet Explorer 10+ */ color: grey !important; }
            #aboutMe, #contactInfo, #yourInterests { margin-bottom:10px; text-align:left !important; padding: 5px 5px 5px 5px; }
        }
        br { display: block; line-height: 10px; }
        .fieldleading { margin-bottom:10px; }
        legend { color:white; }
        #whitegender { color:white; }
        #moreleading { margin-bottom:10px; }
        /* opera only hack attempt */
        @media not all and (-webkit-min-device-pixel-ratio:0) {
            .fieldleading { margin-bottom:30px !important; }
        }
        .errorDisp { border-color: red; border-style: solid; border-width: 2px; }
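    A hedged sketch of one likely fix for the question above: the onsubmit handler only returns true on the success path, so when validation fails it returns undefined, the browser goes ahead and submits the form, and the reload wipes out the red border. Returning false from the failure branch cancels the submission and keeps the page (and the border) in place:

        document.getElementById("Hogwarts").onsubmit = function () {
            var fName = document.getElementById("fName");
            if (!fName.value.match("^[A-Z][A-Za-z]+( [A-Z][A-Za-z]*)*$")) {
                window.alert("You must enter your name.");
                addClass(fName, "errorDisp");
                fName.focus();
                return false; // cancel submission so the border stays visible
            }
            return true;
        };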

    Read the article

  • Online ALTER TABLE in MySQL 5.6

    - by Marko Mäkelä
    This is the low-level view of data definition language (DDL) operations in the InnoDB storage engine in MySQL 5.6. John Russell gave a more high-level view in his blog post April 2012 Labs Release – Online DDL Improvements.

    MySQL before the InnoDB Plugin

    Traditionally, the MySQL storage engine interface has taken a minimalistic approach to data definition language. The only natively supported operations were CREATE TABLE, DROP TABLE and RENAME TABLE. Consider the following example:

        CREATE TABLE t(a INT);
        INSERT INTO t VALUES (1),(2),(3);
        CREATE INDEX a ON t(a);
        DROP TABLE t;

    The CREATE INDEX statement would be executed roughly as follows:

        CREATE TABLE temp(a INT, INDEX(a));
        INSERT INTO temp SELECT * FROM t;
        RENAME TABLE t TO temp2;
        RENAME TABLE temp TO t;
        DROP TABLE temp2;

    You could imagine that the database could crash when copying all rows from the original table to the new one. For example, it could run out of file space. Then, on restart, InnoDB would roll back the huge INSERT transaction. To fix things a little, a hack was added to ha_innobase::write_row for committing the transaction every 10,000 rows. Still, it was frustrating that even a simple DROP INDEX would make the table unavailable for modifications for a long time.

    Fast Index Creation in the InnoDB Plugin of MySQL 5.1

    MySQL 5.1 introduced a new interface for CREATE INDEX and DROP INDEX. The old table-copying approach can still be forced by SET old_alter_table=1. This interface is used in MySQL 5.5 and in the InnoDB Plugin for MySQL 5.1. Apart from the ability to do a quick DROP INDEX, the main advantage is that InnoDB will execute a merge-sort algorithm before inserting the index records into each index that is being created. This should speed up the insert into the secondary index B-trees and potentially result in a better B-tree fill factor.

    The 5.1 ALTER TABLE interface was not perfect. For example, DROP FOREIGN KEY still invoked the table copy. Renaming columns could conflict with InnoDB foreign key constraints. Combining ADD KEY and DROP KEY in ALTER TABLE was problematic and not atomic inside the storage engine.

    The ALTER TABLE interface in MySQL 5.6

    The ALTER TABLE storage engine interface was completely rewritten in MySQL 5.6. Instead of introducing a method call for every conceivable operation, MySQL 5.6 introduced a handful of methods, and data structures that keep track of the requested changes. In MySQL 5.6, an online ALTER TABLE operation can be requested by specifying LOCK=NONE. Also LOCK=SHARED and LOCK=EXCLUSIVE are available. The old-style table copying can be requested by ALGORITHM=COPY. That one will require at least LOCK=SHARED. From the InnoDB point of view, anything that is possible with LOCK=EXCLUSIVE is also possible with LOCK=SHARED. Most ALGORITHM=INPLACE operations inside InnoDB can be executed online (LOCK=NONE). InnoDB will always require an exclusive table lock in two phases of the operation. The execution phases are tied to a number of methods:

    handler::check_if_supported_inplace_alter - checks if the storage engine can perform all requested operations, and if so, what kind of locking is needed.

    handler::prepare_inplace_alter_table - InnoDB uses this method to set up the data dictionary cache for the upcoming CREATE INDEX operation. We need stubs for the new indexes, so that we can keep track of changes to the table during online index creation. Also, crash recovery would drop any indexes that were incomplete at the time of the crash.

    handler::inplace_alter_table - in InnoDB, this method is used for creating secondary indexes or for rebuilding the table. This is the 'main' phase that can be executed online (with concurrent writes to the table).

    handler::commit_inplace_alter_table - this is where the operation is committed or rolled back. Here, InnoDB would drop any indexes, rename any columns, drop or add foreign keys, and finalize a table rebuild or index creation. It would also discard any logs that were set up for online index creation or table rebuild.

    The prepare and commit phases require an exclusive lock, blocking all access to the table. If MySQL times out while upgrading the table meta-data lock for the commit phase, it will roll back the ALTER TABLE operation. In MySQL 5.6, data definition language operations are still not fully atomic, because the data dictionary is split. Part of it is inside InnoDB data dictionary tables. Part of the information is only available in the *.frm file, which is not covered by any crash recovery log. But, there is a single commit phase inside the storage engine.

    Online Secondary Index Creation

    It may occur that an index needs to be created on a new column to speed up queries. But, it may be unacceptable to block modifications on the table while creating the index. It turns out that it is conceptually not so hard to support online index creation. All we need is some more execution phases:

    1. Set up a stub for the index, for logging changes.
    2. Scan the table for index records.
    3. Sort the index records.
    4. Bulk load the index records.
    5. Apply the logged changes.
    6. Replace the stub with the actual index.

    Threads that modify the table will log the operations to the logs of each index that is being created. Errors, such as log overflow or uniqueness violations, will only be flagged by the ALTER TABLE thread. The log is conceptually similar to the InnoDB change buffer. The bulk load of index records will bypass record locking. We still generate redo log for writing the index pages. It would suffice to log page allocations only, and to flush the index pages from the buffer pool to the file system upon completion.

    Native ALTER TABLE

    Starting with MySQL 5.6, InnoDB supports most ALTER TABLE operations natively. The notable exceptions are changes to the column type, ADD FOREIGN KEY except when foreign_key_checks=0, and changes to tables that contain FULLTEXT indexes. The keyword ALGORITHM=INPLACE is somewhat misleading, because certain operations cannot be performed in-place. For example, changing the ROW_FORMAT of a table requires a rebuild. Online operation (LOCK=NONE) is not allowed in the following cases: when adding an AUTO_INCREMENT column, when the table contains FULLTEXT indexes or a hidden FTS_DOC_ID column, or when there are FOREIGN KEY constraints referring to the table, with ON…CASCADE or ON…SET NULL option. The FOREIGN KEY limitations are needed, because MySQL does not acquire meta-data locks on the child or parent tables when executing SQL statements. Theoretically, InnoDB could support operations like ADD COLUMN and DROP COLUMN in-place, by lazily converting the table to a newer format. This would require that the data dictionary keep multiple versions of the table definition. For simplicity, we will copy the entire table, even for DROP COLUMN. The bulk copying of the table will bypass record locking and undo logging. For facilitating online operation, a temporary log will be associated with the clustered index of the table. Threads that modify the table will also write the changes to the log. When altering the table, we skip all records that have been marked for deletion. In this way, we can simply discard any undo log records that were not yet purged from the original table. Off-page columns, or BLOBs, are an important consideration. We suspend the purge of delete-marked records if it would free any off-page columns from the old table. This is because the BLOBs can be needed when applying changes from the log. We have special logging for handling the ROLLBACK of an INSERT that inserted new off-page columns. This is because the columns will be freed at rollback.
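    To make the new clauses concrete, here is a small hedged sketch of the 5.6 syntax discussed above, using throwaway names (t, idx_a). With LOCK=NONE, MySQL refuses the statement outright rather than silently blocking writers if the operation cannot run online:

        -- Build a secondary index online: readers and writers stay unblocked
        ALTER TABLE t ADD INDEX idx_a (a), ALGORITHM=INPLACE, LOCK=NONE;

        -- Or explicitly request the old copying behaviour (needs at least a shared lock)
        ALTER TABLE t DROP INDEX idx_a, ALGORITHM=COPY, LOCK=SHARED;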

    Read the article

  • SpringMvc java.lang.NullPointerException When Posting Form To Server [closed]

    - by dev_darin
    I have a form with a user name field on it. When I tab out of the field, I use a RESTful Web Service that makes a call to a handler method in the controller. The method calls a DAO class that checks the database for whether the user name exists. This works fine. However, when the form is posted to the server, I call the exact same function I would call in the handler method, yet I get a java.lang.NullPointerException when it accesses the class that makes the call to the DAO object. So it does not even reach the DAO object the second time. I have exception handlers around the calls in all my classes that make calls. Any ideas as to what's happening here - why would I get the java.lang.NullPointerException the second time the function is called? Does this have anything to do with Spring instantiating DAO classes as singletons, or something to that effect? What can be done to resolve this?

    This is what happens the first time the method is called using the Web Service (this is supposed to happen):

        13011 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - Inside jdbcOfficersDAO
        13031 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL query
        13034 [http-8084-2] DEBUG org.springframework.jdbc.core.JdbcTemplate - Executing prepared SQL statement [SELECT userName FROM crimetrack.tblofficers WHERE userName = ?]
        13071 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Fetching JDBC Connection from DataSource
        13496 [http-8084-2] DEBUG org.springframework.jdbc.core.StatementCreatorUtils - Setting SQL statement parameter value: column index 1, parameter value [adminz], value class [java.lang.String], SQL type unknown
        13534 [http-8084-2] DEBUG org.springframework.jdbc.datasource.DataSourceUtils - Returning JDBC Connection to DataSource
        13537 [http-8084-2] INFO com.crimetrack.jdbc.JdbcOfficersDAO - No username was found in exception
        13537 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz does NOT exist

    The second time, when the form is posted and a validation method handles the form and calls the same method the web service would call:

        17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
        17199 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
        17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - Inside Do UserNameExist about to validate with username : adminz
        17199 [http-8084-2] INFO com.crimetrack.service.ValidateUserNameManager - UserName :adminz EXCEPTION OCCURED java.lang.NullPointerException

    ValidateUserNameManager.java

        public class ValidateUserNameManager implements ValidateUserNameIFace {

            private OfficersDAO officerDao;
            private final Logger logger = Logger.getLogger(getClass());

            public boolean DoesUserNameExist(String userName) throws Exception {
                logger.info("Inside Do UserNameExist about to validate with username : " + userName);
                try {
                    if (officerDao.OfficerExist(userName) == true) {
                        logger.info("UserName :" + userName + " does exist");
                        return true;
                    } else {
                        logger.info("UserName :" + userName + " does NOT exist");
                        return false;
                    }
                } catch (Exception e) {
                    logger.info("UserName :" + userName + " EXCEPTION OCCURED " + e.toString());
                    return false;
                }
            }

            /**
             * @return the officerDao
             */
            public OfficersDAO getOfficerDao() {
                return officerDao;
            }

            /**
             * @param officerDao the officerDao to set
             */
            public void setOfficerDao(OfficersDAO officerDao) {
                this.officerDao = officerDao;
            }
        }

    JdbcOfficersDAO.java

        public boolean OfficerExist(String userName) {
            String dbUserName;
            try {
                logger.info("Inside jdbcOfficersDAO");
                String sql = "SELECT userName FROM crimetrack.tblofficers WHERE userName = ?";
                try {
                    dbUserName = (String) getJdbcTemplate().queryForObject(sql, new Object[]{userName}, String.class);
                    logger.info("Just Returned from database");
                } catch (Exception e) {
                    logger.info("No username was found in exception");
                    return false;
                }
                if (dbUserName == null) {
                    logger.info("Database did not find any matching records");
                }
                logger.info("after JdbcTemplate");
                if (dbUserName.equals(userName)) {
                    logger.info("User Name Exists");
                    return true;
                } else {
                    logger.info("User Name Does NOT Exists");
                    return false;
                }
            } catch (Exception e) {
                logger.info("Exception Message in JdbcOfficersDAO is " + e.getMessage());
                return false;
            }
        }

    OfficerRegistrationValidation.java

        public class OfficerRegistrationValidation implements Validator {

            private final Logger logger = Logger.getLogger(getClass());
            private ValidateUserNameManager validateUserNameManager;

            public boolean supports(Class<?> clazz) {
                return Officers.class.equals(clazz);
            }

            public void validate(Object target, Errors errors) {
                Officers officer = (Officers) target;

                if (officer.getUserName() == null) {
                    errors.rejectValue("userName", "userName.required");
                } else {
                    String userName = officer.getUserName();
                    logger.info("UserName is not null so going to check if its valid for :" + userName);
                    try {
                        logger.info("User Name in try.....catch block is " + userName);
                        if (validateUserNameManager.DoesUserNameExist(userName) == true) {
                            errors.rejectValue("userName", "userName.exist");
                        }
                    } catch (Exception e) {
                        logger.info("Error Occured When validating UserName");
                        errors.rejectValue("userName", "userName.error");
                    }
                }
                if (officer.getPassword() == null) {
                    errors.rejectValue("password", "password.required");
                }
                if (officer.getPassword2() == null) {
                    errors.rejectValue("password2", "password2.required");
                }
                if (officer.getfName() == null) {
                    errors.rejectValue("fName", "fName.required");
                }
                if (officer.getlName() == null) {
                    errors.rejectValue("lName", "lName.required");
                }
                if (officer.getoName() == null) {
                    errors.rejectValue("oName", "oName.required");
                }
                if (officer.getEmailAdd() == null) {
                    errors.rejectValue("emailAdd", "emailAdd.required");
                }
                if (officer.getDob() == null) {
                    errors.rejectValue("dob", "dob.required");
                }
                if (officer.getGenderId().equals("A")) {
                    errors.rejectValue("genderId", "genderId.required");
                }
                if (officer.getDivisionNo() == 1) {
                    errors.rejectValue("divisionNo", "divisionNo.required");
                }
                if (officer.getPositionId() == 1) {
                    errors.rejectValue("positionId", "positionId.required");
                }
                if (officer.getStartDate() == null) {
                    errors.rejectValue("startDate", "startDate.required");
                }
                if (officer.getEndDate() == null) {
                    errors.rejectValue("endDate", "endDate.required");
                }
                logger.info("The Gender ID is " + officer.getGenderId().toString());
                if (officer.getPhoneNo() == null) {
                    errors.rejectValue("phoneNo", "phoneNo.required");
                }
            }

            /**
             * @return the validateUserNameManager
             */
            public ValidateUserNameManager getValidateUserNameManager() {
                return validateUserNameManager;
            }

            /**
             * @param validateUserNameManager the validateUserNameManager to set
             */
            public void setValidateUserNameManager(ValidateUserNameManager validateUserNameManager) {
                this.validateUserNameManager = validateUserNameManager;
            }
        }

    Update Error Log using Logger.Error("Message", e):

        39024 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - UserName is not null so going to check if its valid for :adminz
        39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - User Name in try.....catch block is adminz
        39025 [http-8084-2] ERROR com.crimetrack.service.OfficerRegistrationValidation - Message
        java.lang.NullPointerException
            at com.crimetrack.service.OfficerRegistrationValidation.validate(OfficerRegistrationValidation.java:47)
            at org.springframework.validation.DataBinder.validate(DataBinder.java:725)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.doBind(HandlerMethodInvoker.java:815)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.resolveHandlerArguments(HandlerMethodInvoker.java:367)
            at org.springframework.web.bind.annotation.support.HandlerMethodInvoker.invokeHandlerMethod(HandlerMethodInvoker.java:171)
            at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.invokeHandlerMethod(AnnotationMethodHandlerAdapter.java:436)
            at org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter.handle(AnnotationMethodHandlerAdapter.java:424)
            at org.springframework.web.servlet.DispatcherServlet.doDispatch(DispatcherServlet.java:923)
            at org.springframework.web.servlet.DispatcherServlet.doService(DispatcherServlet.java:852)
            at org.springframework.web.servlet.FrameworkServlet.processRequest(FrameworkServlet.java:882)
            at org.springframework.web.servlet.FrameworkServlet.doPost(FrameworkServlet.java:789)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:637)
            at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
            at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
            at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
            at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
            at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
            at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
            at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
            at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
            at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:293)
            at org.apache.coyote.http11.Http11Processor.process(Http11Processor.java:859)
            at org.apache.coyote.http11.Http11Protocol$Http11ConnectionHandler.process(Http11Protocol.java:602)
            at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:489)
            at java.lang.Thread.run(Unknown Source)
        39025 [http-8084-2] INFO com.crimetrack.service.OfficerRegistrationValidation - Error Occured When validating UserName
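    A hedged reading of the symptoms above: the validator instance that runs on the POST path was probably created with new (or registered without its dependency), so its validateUserNameManager field is still null on that path, while the REST handler's copy was properly injected - Spring only wires beans it manages. A sketch of XML wiring that keeps everything container-managed follows; the bean ids and the DAO's data-source setup are assumptions, while the class names come from the stack trace:

        <!-- Sketch only: all three collaborators as Spring-managed beans -->
        <bean id="officersDao" class="com.crimetrack.jdbc.JdbcOfficersDAO">
            <!-- dataSource/jdbcTemplate configuration assumed to exist elsewhere -->
        </bean>

        <bean id="validateUserNameManager" class="com.crimetrack.service.ValidateUserNameManager">
            <property name="officerDao" ref="officersDao"/>
        </bean>

        <bean id="officerRegistrationValidation" class="com.crimetrack.service.OfficerRegistrationValidation">
            <property name="validateUserNameManager" ref="validateUserNameManager"/>
        </bean>

    With this in place, the controller should obtain the validator from the context (e.g. via injection) rather than instantiating it directly, so both the REST path and the POST path share the same fully wired instance.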

    Read the article

  • Implementing Release Notes in TFS Team Build 2010

    - by Jakob Ehn
    In TFS Team Build (all versions), each build is associated with changesets and work items. To determine which changesets should be associated with the current build, Team Build finds the label of the "Last Good Build" and then aggregates all changesets up until the label for the current build. Basically this means that if your build is failing, every changeset that is checked in will be accumulated in this list until the build is successful. All well and good, but there is a dimension missing here, relating to releases. Often you run several release builds before you actually deploy the result of a build to a test or production system. When you do this, wouldn't it be nice to be able to send the customer a nice release note that contains all work items and changesets since the previously deployed version?

    At our company, we have developed a Release Repository, which basically is a simple web site with a SQL database as storage. Every time we run a Release Build, the resulting installers, zip-files, sql scripts etc. get pushed into the release repository together with the relevant build information. This information contains things such as start time, who triggered the build etc. Also, it contains the associated changesets and work items. When deploying the MSIs for a new version, we mark the build as Deployed in the release repository. The deployed status is stored in the release repository database, but it could also have been implemented by setting the Build Quality for that build to Deployed. When generating the release notes, the web site simply runs through each release build back to the previous build that was marked as Deployed, and aggregates the work items and changesets. Here is a sample screenshot of how this looks for a sample build/application. The web site is available both for us and for the customers and testers, which means that they can easily get the latest version of a particular application and at the same time see what changes are included in this version.

    There is a lot going on in the Release Build Process that drives this in our TFS 2010 server, but in this post I will show how you can access and read the changeset and work item information in a custom activity. Since Team Build associates changesets and work items for each build, this information is (partially) available inside the build process template. The Associate Changesets and Work Items for non-Shelveset Builds activity (located inside the Try Compile, Test, and Associate Changesets and Work Items activity) defines and populates a variable called associatedChangesets. You can see that this variable is an IList containing instances of the Changeset class (from the Microsoft.TeamFoundation.VersionControl.Client namespace). Now, if you want to access this variable later on in the build process template, you need to declare a new variable in the corresponding scope and then assign the value to this variable. In this sample, I declared a variable called assocChangesets in the RunAgent sequence, which basically covers the whole compile, test and drop part of the build process.

    Now, you need to assign the value from AssociatedChangesets to this variable. This is done using the Assign workflow activity. Then you can add a custom activity anywhere inside the RunAgent sequence and use this variable. NB: Of course your activity must be placed somewhere after the variable has been populated.

    To finish off, here is a code snippet that shows how you can read the changeset and work item information from the variable. First you add an InArgument on your activity where you can pass in the variable that we defined:

        [RequiredArgument]
        public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

    Then you can traverse all the changesets in the list, and for each changeset use the WorkItems property to get the work items that were associated in that changeset:

        foreach (Changeset ch in associatedChangesets)
        {
            // Add change
            theChangesets.Add(
                new AssociatedChangeset(ch.ChangesetId, ch.ArtifactUri,
                                        ch.Committer, ch.Comment, ch.ChangesetId));

            foreach (var wi in ch.WorkItems)
            {
                theWorkItems.Add(
                    new AssociatedWorkItem(wi["System.AssignedTo"].ToString(),
                                           wi.Id,
                                           wi["System.State"].ToString(),
                                           wi.Title,
                                           wi.Type.Name,
                                           wi.Id,
                                           wi.Uri));
            }
        }

    NB: AssociatedChangeset and AssociatedWorkItem are custom classes that we use internally for storing this information that is eventually pushed to the release repository.
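    For completeness, here is a hedged sketch of the receiving end - the custom activity skeleton that the snippet above would live in. The class name and loop body are illustrative only; the CodeActivity/InArgument/context.GetValue pattern is standard WF4:

        using System;
        using System.Activities;
        using System.Collections.Generic;
        using Microsoft.TeamFoundation.VersionControl.Client;

        // Illustrative skeleton: plug your own release-repository logic into Execute
        public sealed class PublishReleaseNotes : CodeActivity
        {
            [RequiredArgument]
            public InArgument<IList<Changeset>> AssociatedChangesets { get; set; }

            protected override void Execute(CodeActivityContext context)
            {
                // Resolve the workflow variable that was assigned in the template
                IList<Changeset> changesets = context.GetValue(this.AssociatedChangesets);
                foreach (Changeset ch in changesets)
                {
                    // ch.ChangesetId, ch.Committer, ch.Comment and ch.WorkItems
                    // are available here, as in the snippet above
                }
            }
        }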

    Read the article

  • and the winner is Google Chrome

    - by anirudha
    The browser war is still far from settled, but here is why Google Chrome comes out on top.

    1. Easy to install: unlike IE 9, Google Chrome does not force the user to purchase a new OS. Chrome installs in minutes, faster than the rest - much like Firefox, another competitor, and unlike IE 9.

    2. Easy to test: if you want to try the beta, that's no problem, just as with Firefox. Users of the Firefox 4 beta, though, found they couldn't use many good plugins, the Web Developer toolbar and a long list of others among them. In the Chrome beta you get more than in the last official Chrome release.

    3. Google Chrome Sync: I have used sync in Firefox myself, but for a long time I found nothing compelling about it. Chrome's sync system is much better. When a user signs in to sync in Chrome, it installs everything and restores every setting from last time: apps, autofill, bookmarks, extensions, preferences and theme. If you want your bookmarks from another browser, you can use Google Docs, because Google keeps a backup of your bookmarks in your Docs account.

    Performance: after testing a website, I found that a site which opened in 36 seconds in Firefox opened in 10 seconds in Google Chrome. Interestingly, when I tested offline, IE 8 showed the page in one or two seconds. After puzzling over this for a while, I concluded it is because IE is integrated software from Microsoft - both Visual Studio and IE are integrated with Windows, which is why JavaScript errors in IE show up in Visual Studio rather than in IE, unlike in other browsers such as Chrome. On the other hand, Chrome does not have as vast a range of plugins as Firefox, so developers spend less time on Chrome - that could be a problem for Chrome's future.

    Interface comparison: Chrome has a plain but user-friendly interface, so users can pick it up easily. Have you seen the menu in Firefox 4? They have made it complex, as has the whole of IE 9. The IE developer team seems to think it can win the public over just by putting an "HTML5 inside IE" slogan on it. If you open a page in IE 9, it appears only after several seconds; sometimes it shows "page not found" even when the site is fine. And when anyone tries the IE 9 developer tools, they wonder, "is this really a developer tool?" - it was not made for humans the way the Firebug team made Firebug for Firefox. Notice, too, that if you install Visual Studio it forces you to install SQL Server even if you use another database system; a big stupidity of their tool can be found here. Today we hear that Microsoft launched Silverlight 5 - and do you know how Microsoft made Silverlight? By copying the idea behind Adobe Flash; the difference is that we can use .NET languages instead of ActionScript, Lingo or Shockwave.

    Read the article

  • Subterranean IL: Compiling C# exception handlers

    - by Simon Cooper
    An exception handler in C# combines the IL catch and finally exception handling clauses into a single try statement:

        try {
            Console.WriteLine("Try block");
            // ...
        } catch (IOException) {
            Console.WriteLine("IOException catch");
            // ...
        } catch (Exception e) {
            Console.WriteLine("Exception catch");
            // ...
        } finally {
            Console.WriteLine("Finally block");
            // ...
        }

    How does this get compiled into IL?

    Initial implementation

    If you remember from my earlier post, finally clauses must be specified with their own .try clause. So, for the initial implementation, we take the try/catch/finally, and simply split it up into two .try clauses (I have to use label syntax for this):

        StartTry:
            ldstr "Try block"
            call void [mscorlib]System.Console::WriteLine(string)
            // ...
            leave.s End
        EndTry:
        StartIOECatch:
            ldstr "IOException catch"
            call void [mscorlib]System.Console::WriteLine(string)
            // ...
            leave.s End
        EndIOECatch:
        StartECatch:
            ldstr "Exception catch"
            call void [mscorlib]System.Console::WriteLine(string)
            // ...
            leave.s End
        EndECatch:
        StartFinally:
            ldstr "Finally block"
            call void [mscorlib]System.Console::WriteLine(string)
            // ...
            endfinally
        EndFinally:
        End:
            // ...

        .try StartTry to EndTry
            catch [mscorlib]System.IO.IOException handler StartIOECatch to EndIOECatch
            catch [mscorlib]System.Exception handler StartECatch to EndECatch
        .try StartTry to EndTry
            finally handler StartFinally to EndFinally

    However, the resulting program isn't verifiable, and doesn't run:

        [IL]: Error: Shared try has finally or fault handler.

    Nested try blocks

    What's with the verification error? Well, it's a condition of IL verification that all exception handling regions (try, catch, filter, finally, fault) of a single .try clause have to be completely contained within any outer exception region, and they can't overlap with any other exception handling clause. In other words, IL exception handling clauses must be representable in the scoped syntax, and in this example, we're overlapping catch and finally clauses. Not only is this example not verifiable, it isn't semantically correct. The finally handler is specified round the .try. What happens if you were able to run this code, and an exception was thrown?

    1. Program execution enters the top of the try block, and an exception is thrown within it.
    2. The CLR searches for an exception handler, and finds the catch.
    3. Because control flow is leaving the .try, the finally block is run.
    4. The catch block is run.
    5. leave.s End inside the catch handler branches to the End label.

    We're actually running the finally before the catch!

    What we do about it

    What we actually need to do is put the catch clauses inside the finally clause, as this will ensure the finally gets executed at the correct time (this time using scoped syntax):

        .try {
            .try {
                ldstr "Try block"
                call void [mscorlib]System.Console::WriteLine(string)
                // ...
                leave.s End
            }
            catch [mscorlib]System.IO.IOException {
                ldstr "IOException catch"
                call void [mscorlib]System.Console::WriteLine(string)
                // ...
                leave.s End
            }
            catch [mscorlib]System.Exception {
                ldstr "Exception catch"
                call void [mscorlib]System.Console::WriteLine(string)
                // ...
                leave.s End
            }
        }
        finally {
            ldstr "Finally block"
            call void [mscorlib]System.Console::WriteLine(string)
            // ...
            endfinally
        }
        End:
            ret

    Returning from methods

    There is a further semantic mismatch that the C# compiler has to deal with; in C#, you are allowed to return from within an exception handling block:

        public int HandleMethod() {
            try {
                // ...
                return 0;
            } catch (Exception) {
                // ...
                return -1;
            }
        }

    However, you can't ret inside an exception handling block in IL. So the C# compiler does a leave.s to a ret outside the exception handling area, loading/storing any return value to a local variable along the way (as leave.s clears the stack):

        .method public instance int32 HandleMethod() {
            .locals init (
                int32 retVal
            )
            .try {
                // ...
                ldc.i4.0
                stloc.0
                leave.s End
            }
            catch [mscorlib]System.Exception {
                // ...
                ldc.i4.m1
                stloc.0
                leave.s End
            }
        End:
            ldloc.0
            ret
        }

    Conclusion

    As you can see, the C# compiler has quite a few hoops to jump through to translate C# code into semantically-correct IL, and hides the numerous conditions on IL exception handling blocks from the C# programmer. Next up: catch-all blocks, and how the runtime deals with non-Exception exceptions.

    Read the article

  • Sneak Peek: Social Developer Program at JavaOne

    - by Mike Stiles
    By guest blogger Roland Smart We're just days away from what is gunning to be the most exciting installment of OpenWorld to date, so how about an exciting sneak peek at the very first Social Developer Program? If your first thought is, "What's a social developer?" you're not alone. It’s an emerging term and one we think will gain prominence as social experiences become more prevalent in enterprise applications. For those who keep an eye on the ever-evolving Facebook platform, you'll recall that they recently rebranded their PDC (preferred developer consultant) group as the PMD (preferred marketing developer), signaling the importance of development resources inside the marketing organization to unlock the potential of social. The marketing developer they're referring to could be considered a social developer in a broader context. While it's true social has really blossomed in the marketing context and CMOs are winning more and more technical resources, social is starting to work its way more deeply into the enterprise with the help of developers that work outside marketing. Developers, like the rest of us, have fallen in "like" with social functionality and are starting to imagine how social can transform enterprise applications in the way it has consumer-facing experiences. The thesis of my presentation is that social developers will take many pages from the marketing playbook as they apply social inside the enterprise. To support this argument, let's walk through a range of enterprise applications and explore how consumer-facing social experiences might be interpreted in this context. Here's one example of how a social experience could be integrated into a sales enablement application. As a marketer, I spend a great deal of time collaborating with my sales colleagues, so I have good insight into their working process. While at Involver, we grew our sales team quickly, and it became evident some of our processes broke with scale. For example, we used to have weekly team meetings at which we'd discuss what was working and what wasn't from a messaging perspective. One aspect of these sessions focused on "objections" and "responses," where the salespeople would walk through common objections to purchasing and share appropriate responses. We tried to map each context to best answers and we'd capture these on a wiki page. As our team grew, however, participation at scale just wasn't tenable, and our wiki pages quickly lost their freshness. Imagine giving salespeople a place where they could submit common objections and responses for their colleagues to see, sort, comment on, and vote on. What you'd get is an up-to-date and relevant repository of information. And, if you supported an application like this with a social graph, it would be possible to make good recommendations to individual sales people about the objections they'd likely hear based on vertical, product, region or other graph data. Taking it even further, you could build in a badging/game element to reward those salespeople who participate the most. Both these examples are based on proven models at work inside consumer-facing applications. If you want to learn about how HR, Operations, Product Development and Customer Support can leverage social experiences, you’re welcome to join us at JavaOne or join our Social Developer Community to find some of the presentations after OpenWorld.

    Read the article

  • Is This Your Idea of Disaster Recovery?

    - by rickramsey
    Don't just make do with less. Protect what you've got. By, for instance, deploying Oracle Solaris 10 inside a zone cluster. "Wait," you say, "what is a zone cluster?" It is a zone deployed across different physical servers. "Who would do that!" you ask in a mild panic. Why, an upstanding sysadmin citizen interested in protecting his or her employer's investment with appropriate high availability and disaster recovery. If one server gets wiped out by Hurricane Sandy along with pretty much the entire East Coast of the USA, your zone continues to run on the other server(s). Provided you set them up in Edinburgh.

    This white paper (pdf) explains what a zone cluster is and how to use it. If a white paper reminds you of having to read War and Peace in school, just use this Oracle RAC and Solaris Cluster Cheat Sheet, instead.

    "But wait!" you exclaim. "I didn't realize Solaris 10 offered zone clusters!" I didn't, either! And in an earlier version of this blog post I said that zone clusters were only available with Oracle Solaris 11. But Karoly Vegh pointed me to the documentation for Oracle Solaris Cluster 3.3, which explains how to manage zone clusters in Oracle Solaris 10. Bite my fist! So, the point I was trying to make is not just that you can run Oracle Solaris 10 zone clusters, but that you can run them in an Oracle Solaris 11 environment. Now let's return to our conversation and pick up where we left off ...

    "Oh no! Whatever shall I do?" Fear not. Remember how Oracle Solaris 11 lets you create a Solaris 10 branded zone inside a system running Oracle Solaris 11? Well, the Solaris Cluster engineers thought that was a bang-up idea, and decided to extend Oracle Solaris Cluster so that you could run your Solaris 10 applications inside the protective cocoon of an Oracle Solaris 11 zone cluster. Take advantage of the installation improvements and network virtualization capabilities of Oracle Solaris 11 while still running your application on Oracle Solaris 10. You Luddite, you. That capability is in the latest release of Oracle Solaris Cluster, version 4.1, which became available last Friday.

    "Last Friday! Is it too late to get a copy?" You can still get a free copy from our download center (see below). And, if you'd like to know what other goodies the 4.1 release of Oracle Solaris Cluster provides, see:

    What's New In Oracle Solaris Cluster 4.1 (pdf)
    Free download: Oracle Solaris Cluster 4.1 (SPARC or x86)
    Tech Article: How to Upgrade to Oracle Solaris Cluster 4.0, by Tim Read

    As always, you can get the latest information about Oracle Solaris Cluster, plus technical how-to articles, documentation, and more from the Oracle Solaris Cluster Resource Page for Sysadmins and Developers. And don't forget about the online launch of Oracle Solaris 11.1 and Oracle Solaris Cluster 4.1, scheduled for Nov 7.

    "I feel so much better, now!" Think nothing of it. That's what we're here for.

    - Rick
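    P.S. If you're wondering what driving a zone cluster actually looks like from the command line, the lifecycle hangs off the clzonecluster command. A hedged sketch only - zc1 is a made-up name, the configure step is interactive, and the exact dialogue for a solaris10-branded zone cluster is in the Oracle Solaris Cluster docs:

        # Sketch: stand up a zone cluster named zc1 (run from the global zone)
        clzonecluster configure zc1   # zonecfg-style shell: create, set zonepath, add node...
        clzonecluster install zc1     # install the zone on each configured node
        clzonecluster boot zc1
        clzonecluster status zc1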

    Read the article

  • The Social Enterprise: Gangnam Style

    - by Mike Stiles
    Are only small and medium businesses able to put social strategies in place, generate consistent, compelling content for customers, and be nimble enough to listen and respond to the social communities they build? Or are enterprise organizations eagerly and effectively adopting social as well? It depends on whom inside the organization you ask. A study from Attensity looked at who “gets” social inside enterprise organizations. The results were unsurprising. Mostly, Generation X and Y employees who came of age with social as part of their lives and as a key communications vehicle understand it. Imagine being a 25-year-old at a company that bans employees from accessing Facebook at work. You may as well tell them they can’t use phones and must do all calculations on an abacus. To them, such a policy is devoid of real-world logic and signals that the organization is destined to be the victim of an up-and-comer. After that, it’s senior management that gets social. You don’t get to be in senior management without reading a few things and paying attention. Most senior managers are well aware of the impact social has had and will have, though they may be unsure of what to do about it. The better ones will utilize those on the inside who do inherently know how to communicate and build virtual relationships using social. The very best will get the past out of the way for these social innovators, so the new communications can be enacted minus counterproductive dictums, double-clutching, meeting-creep, and all the other fading internal practices that water down content and impede change. Organizationally, the Attensity study found 81% of enterprise companies believe failing to embrace social will result in their being left behind. Yet our old friend fear still has many captive in its clutches. 79% feel overwhelmed by the volume of social data available, something a social technology partner with goal-oriented analytics expertise could go a long way toward alleviating. Then there’s the fear of social having a negative impact. This comes from a lack of belief in the product, the customer service, or both. The public uses social not to go out and slay brands. They’re using it to be honest. If the fear is that honesty will reflect badly on the brand, the brand has much bigger, broader problems than what happens on Facebook. Sadly, most enterprise organizations still see social as a megaphone, a one-way channel with which to hit people with ads. They either don’t understand social relationships, or don’t want any. The truly unenlightened manager will always say, “We help them by selling them our stuff.” “Brand affinity” is a term; it’s just not one assigned much value in enterprise organizations. Which brings us to Psy, the Korean performer whose Internet video phenom “Gangnam Style,” as of this writing, has been viewed 438,550,238 times on YouTube. It’s bigger than anything a brand will probably ever publish. Most brands would never have seen the point of making or publishing it. But a funny thing happened on the way to Internet success. The video literally doubled the stock price of Psy’s father’s software firm. NH Investment and Securities said, “The positive sentiment has attracted investors just because of the fact the company is owned by Psy's father and uncle.” The company wasn’t mentioned or seen in the video in any way, yet reaped tangible rewards just for being tangentially associated with it. Imagine your brand being visibly and directly responsible for such a smash and tell me it’s worthless. When enterprise organizations embrace the value of igniting passions, making people happier, solving their problems, informing them, helping them have fun, etc., then they will have fully embraced social, and will reap the brand-affinity rewards of heightened awareness, brand loyalty, and, yes, sales.

    Read the article

  • UIViewController presentModalViewController: animated: doing nothing?

    - by ryyst
    Hi, I recently started a project, using Apple's Utility Application example project. In the example project, there's an info button that shows an instance of FlipSideView. If you know the Weather.app, you know what the button acts like. I then changed the MainWindow.xib to contain a scrollview in the middle of the window and a page-control view at the bottom of the window (again, like the Weather.app). The scrollview gets filled with instances of MainView. When I then clicked the info button, the FlipSideView would show, but only in the area that was previously filled by the MainView instance – this means that the page-control view on the bottom of the page still showed when the FlipSideView instance got loaded. So, I thought that I would simply add a UIViewController for the top-most window, which is the one declared inside the AppDelegate created alongside the project. So, I created a subclass of UIViewController, put an instance of it inside MainWindow.xib and connected its view outlet to the UIWindow declared as window inside the app delegate. I also changed the button's action, so that it now sends a message to the MainWindowController instance. The message does get sent (I checked with NSLog() statements), but the FlipSideView doesn't get shown. Here's the relevant (?) code: FlipsideViewController *controller = [[FlipsideViewController alloc] initWithNibName:@"FlipsideView" bundle:nil]; controller.delegate = self; controller.modalTransitionStyle = UIModalTransitionStyleFlipHorizontal; [self presentModalViewController:controller animated:YES]; [controller release]; Why's this not working? I've uploaded the entire project here for you to be able to see the whole thing. Thanks for the help! -- Ry
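    For what it's worth, one plausible cause, guessed from the description rather than confirmed against the uploaded project: a UIViewController whose view outlet points at the UIWindow itself never joins a real view hierarchy, so modal presentation from it can silently do nothing. A minimal sketch of the more conventional wiring (names assumed from the question):

    // In the app delegate: give MainWindowController its own full-screen view
    // inside the window, instead of wiring its view outlet to the window itself.
    MainWindowController *mainController = [[MainWindowController alloc] init];
    mainController.view.frame = [[UIScreen mainScreen] applicationFrame];
    [window addSubview:mainController.view];
    [window makeKeyAndVisible];

    With the scrollview and page control living inside mainController's view, presentModalViewController: called on mainController should flip the entire screen, page control included.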

    Read the article

  • Flash: How to make an embedded video not autoplay

    - by jb
    I'm embedding a Flash video into an HTML page and would like the user to have to click it to begin playing. According to the Adobe <object> / <embed> element documentation, there are a variety of methods to do this: 1) Add a Flash parameter inside the <object> tag: <param name="play" value="false"> 2) Add the attribute play inside the <embed> tag: <embed ... play="false"> 3) Add the attribute flashvars inside the <embed> tag: <embed ... flashvars="play=false"> Which is awesome. Only ... none of them work for me: http://johnboxall.github.com/test/flash/flash.htm My code looks like this now: <object width="590" height="475"> <param name="movie" value="untitled_skin.swf"> <param name="play" value="false"> <embed src="untitled_skin.swf" width="590" height="475" type="application/x-shockwave-flash" play="false" flashvars="autoplay=false&play=false" menu="false"></embed> </object> Anyone have any ideas? What am I doing wrong?

    Read the article

  • Convert NSMutableArray to string and back

    - by Friendlydeveloper
    Hello, in my current project I'm facing the following problem: the app needs to exchange data with my server; on the iPhone, that data is stored inside an NSMutableArray. The array holds NSString, NSData and CGPoint values. Now, I thought the easiest way to achieve this was to convert the array into a properly formatted string, send it to my server and store it inside some MySQL database. At this point I'd like to request my data from my server, receive the string, which represents the contents of my array, and then actually convert it back into an NSMutableArray. So far, I tried something like this: NSString *myArrayString = [myArray description]; Now I send this string to my server and store it inside my MySQL database. That part works really well. However, when I receive the string from my server, I have trouble converting it back into an NSMutableArray. Is there a method which can easily convert an array description back into an array? Unfortunately I couldn't find anything on that so far. Maybe my way of "serializing" the array is wrong right from the start and there is a smarter way to do this. Any help appreciated. Thanks in advance.
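    For the record, -description is effectively write-only: there is no built-in parser that turns it back into an array. One sketch of a sturdier route is property-list serialization; NSString and NSData are already plist types, while CGPoint is not, so each point would first be boxed as a string with NSStringFromCGPoint() and later unboxed with CGPointFromString():

    // Serialize: array -> XML plist -> string (send this to the server).
    NSString *errorDesc = nil;
    NSData *plistData =
        [NSPropertyListSerialization dataFromPropertyList:myArray
                                                   format:NSPropertyListXMLFormat_v1_0
                                         errorDescription:&errorDesc];
    NSString *plistString = [[[NSString alloc] initWithData:plistData
                                                   encoding:NSUTF8StringEncoding] autorelease];

    // Deserialize: string from the server -> mutable array again.
    NSData *received = [plistString dataUsingEncoding:NSUTF8StringEncoding];
    NSMutableArray *restored =
        [NSPropertyListSerialization propertyListFromData:received
                                         mutabilityOption:NSPropertyListMutableContainers
                                                   format:NULL
                                         errorDescription:&errorDesc];

    The server side doesn't change: the payload is still just a string for the MySQL column, only now it's one that round-trips.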

    Read the article

  • Making flexible C# code in MVC2 for Stored Procedures

    - by cc0
    Thanks to Darin Dimitrov's suggestion I got a big step further in understanding good MVC code, but I'm having some problems making it flexible. I implemented Darin's suggested solution, and it works perfectly for single controllers. However, I'm having some trouble implementing it with some flexibility. What I'm looking for is this: To be able to use dynamic column names in the json. Instead of using "Column1: 'value', ..." and "Column2: 'value', ..." inside the json, I'd like to use for example "id: 'value', ..." and "place: 'value' ..." for one stored procedure, and "animal" and "type" in another (inside the json format). To be able to handle a dynamic number of columns depending on which stored procedure is called. Some stored procedures I'll want to read more than 2 rows from; is there a smart way of accomplishing that? To be able to have numeric values (floats and integers) from the database presented inside the json without quotes, like this (name and age): { Column1: "John", Column2: 53 }. I would be very grateful for any feedback and suggestions / code examples I can get here. Even imperfect ones.
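    Without seeing Darin's original code, here is one hedged sketch (helper name invented) that covers all three wishes: key each row by the reader's own column names, take however many columns the procedure returns, and keep values typed so numerics serialize without quotes:

    using System.Collections.Generic;
    using System.Data;

    public static class ProcedureReader
    {
        // Shapes any result set into name->value rows. Column names and the
        // column count come from the reader, so this works per stored procedure.
        public static List<Dictionary<string, object>> ReadRows(IDataReader reader)
        {
            var rows = new List<Dictionary<string, object>>();
            while (reader.Read())
            {
                var row = new Dictionary<string, object>();
                for (int i = 0; i < reader.FieldCount; i++)
                    row[reader.GetName(i)] = reader.IsDBNull(i) ? null : reader.GetValue(i);
                rows.Add(row);
            }
            return rows;
        }
    }

    In the controller, return Json(ProcedureReader.ReadRows(reader)); the dictionary keys become the json property names (id, place, animal, type, whatever the procedure exposes), and an int or float value serializes as 53, not "53".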

    Read the article

  • Is there any way to get an ExtJS GridPanel to automatically resize its width, but still be contained inside some non-ExtJS-generated HTML?

    - by Micah
    I want to include an ExtJS GridPanel inside a larger layout, which in turn must be rendered inside a particular div in some pre-existing HTML that I don't control. From my experiments, it appears that the GridPanel only resizes itself correctly if it's within a Viewport. For instance, with this code the GridPanel automatically resizes: new Ext.Viewport( { layout: 'anchor', items: [ { xtype: 'panel', title: 'foo', layout: 'fit', items: [ { xtype: 'grid', // define the grid here... but if I replace the first three lines with the lines below, it doesn't: new Ext.Panel( { layout: 'anchor', renderTo: 'RenderUntoThisDiv', The trouble is, Viewport always renders directly to the body of the HTML document, and I need to render within a particular div. If there is a way to get the GridPanel to resize itself correctly, despite not being contained in a ViewPort, that would be ideal. If not, if I could get the Viewport to render the elements within the div, I'd be fine with that. All of my ExtJS objects can be contained within the same div. Does anybody know of a way to get a GridPanel to resize itself correctly, but still be contained inside some non-ExtJS-generated HTML?
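    One workaround sketch, assembled from standard Ext 3 calls rather than taken from the post: render an ordinary Panel into the div and track the div's width by hand on window resize. The div id and the tiny sample store are assumptions for illustration:

    Ext.onReady(function () {
        var panel = new Ext.Panel({
            renderTo: 'RenderUntoThisDiv',
            layout: 'fit',
            height: 400,
            items: [{
                xtype: 'grid',
                store: new Ext.data.ArrayStore({
                    fields: ['name'],
                    data: [['foo'], ['bar']]
                }),
                columns: [{ header: 'Name', dataIndex: 'name' }]
            }]
        });

        // Match the panel (and therefore the grid) to the host div's width.
        function fitToDiv() {
            panel.setWidth(Ext.get('RenderUntoThisDiv').getWidth());
        }
        fitToDiv();
        Ext.EventManager.onWindowResize(fitToDiv);
    });

    It's manual where a Viewport would be automatic, but it keeps everything inside the host page's div.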

    Read the article

  • JavaScript - Setting property on Object in Image load function, property not set once outside function

    - by Sunday Ironfoot
    Sometimes JavaScript doesn't make sense to me; consider the following code that generates a photo mosaic based on x/y tiles. I'm trying to set a .Done property to true once each Mosaic image has been downloaded, but it's always false for some reason. What am I doing wrong? var tileData = []; function generate() { var image = new Image(); image.onload = function() { // Build up the 'tileData' array with tile objects from this Image for (var i = 0; i < tileData.length; i++) { var tile = tileData[i]; var tileImage = new Image(); tileImage.onload = function() { // Do something with this tile Image tile.Done = true; }; tileImage.src = tile.ImageUrl; } }; image.src = 'Penguins.jpg'; tryDisplayMosaic(); } function tryDisplayMosaic() { setTimeout(function() { for (var i = 0; i < tileData.length; i++) { var tile = tileData[i]; if (!tile.Done) { tryDisplayMosaic(); return; } } // If we get here then all the tiles have been downloaded alert('All images downloaded'); }, 2000); } Now for some reason the .Done property on the tile object is always false, even though it is explicitly being set to true inside tileImage.onload = function(). And I can assure you that this function DOES get called because I've placed an alert() call inside it. Right now it's just stuck inside an infinite loop calling tryDisplayMosaic() constantly. Also if I place a loop just before tryDisplayMosaic() is called, and in that loop I set .Done = true, then the .Done property is true and alert('All images downloaded'); will get called.
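    The likely culprit, judging only from the posted code: var tile is function-scoped, so all of the onload handlers close over the same variable, which by the time any image finishes loading points at the last tile. That one tile gets Done = true; the others never do, hence the endless polling. Capturing each tile in its own scope is the classic fix:

    // Each iteration gets its own `tile` binding via an immediately-invoked
    // function, so each onload handler marks the object it belongs to.
    for (var i = 0; i < tileData.length; i++) {
        (function (tile) {
            var tileImage = new Image();
            tileImage.onload = function () {
                tile.Done = true; // this iteration's tile, not the last one
            };
            tileImage.src = tile.ImageUrl;
        })(tileData[i]);
    }

    The same scoping issue explains the counter-test: a plain loop that sets .Done = true runs synchronously over the real array entries, so it works where the async handlers did not.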

    Read the article

  • ORM Persistence by Reachability violates Aggregate Root Boundaries?

    - by Johannes Rudolph
    Most common ORMs implement persistence by reachability, either as the default object-graph change-tracking mechanism or as an optional one. Persistence by reachability means the ORM will check the aggregate root's object graph and determine whether any objects are (even indirectly) reachable that are not stored inside its identity map (Linq2Sql) or don't have their identity column set (NHibernate). In NHibernate this corresponds to cascade="save-update"; for Linq2Sql it is the only supported mechanism. Both, however, only implement it for the "add" side of things: objects removed from the aggregate root's graph must be marked for deletion explicitly. In a DDD context one would use a Repository per Aggregate Root. Objects inside an Aggregate Root may only hold references to other Aggregate Roots. Due to persistence by reachability it is possible this other root will be inserted in the database even though its corresponding repository wasn't invoked at all! Consider the following two Aggregate Roots: Contract and Order. Request is part of the Contract Aggregate. The object graph looks like Contract->Request->Order. Each time a Contractor makes a request, a corresponding order is created. As this involves two different Aggregate Roots, this operation is encapsulated by a Service. //Unit Of Work begins Request r = ...; Contract c = ContractRepository.FindSingleByKey(1); Order o = OrderForRequest(r); // creates a new order aggregate r.Order = o; // associates the aggregates c.Request.Add(r); ContractRepository.SaveOrUpdate(c); // OrderAggregate is reachable and will be inserted Since this operation happens in a Service, I could still invoke the OrderRepository manually, however I wouldn't be forced to! Persistence by reachability is a very useful feature inside Aggregate Roots, however I see no way to enforce my Aggregate Boundaries.
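    One pragmatic containment, sketched against NHibernate's XML mappings (the post doesn't show its mapping, so the file below is assumed): declare cross-aggregate references with cascade="none", so reachability stops at the aggregate boundary and Order can only reach the database through its own repository:

    <!-- Request.hbm.xml (illustrative; names taken from the example) -->
    <class name="Request" table="Request">
      <id name="Id">
        <generator class="native" />
      </id>
      <!-- no cascade across the aggregate boundary -->
      <many-to-one name="Order" column="OrderId" cascade="none" />
    </class>

    With the cascade cut, the service must call OrderRepository.SaveOrUpdate(o) explicitly or the order simply isn't saved, which is exactly the enforcement the post is asking for.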

    Read the article
