Search Results

Search found 5589 results on 224 pages for 'rules and constraints'.


  • Syntax error in INSERT INTO statement in c# oledb?

    - by sameer
    I have a table called SubMaster_Accounts, which contains 9 fields. I want to insert data into some of the fields and store the others as NULL. I tried writing the query as a query string; the SQL works perfectly when I supply data for every field, but when I pass NULL for some of them it throws a syntax error on the INSERT command. The fields I want to leave NULL have no constraints on them. How can I do it? This is my query string: insert into SubMaster_Account ([SMcode], [MSname], [Sname], [Openbalrs], [Openbalrs1], [Openbalmet], [Openbalmet1], [Creditdays], [Sdesc]) values ('" + SMcode + "','" + MSname + "','" + Sname + "'," + Openbalrs + ",'" + Openbalrs1 + "'," + Openbalmet + ",'" + Openbalmet1 + "'," + Creditdays + ",'" + Sdesc + "')
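
    A likely culprit is the string concatenation itself: when a value is null or empty, the generated SQL ends up with two adjacent commas or an unquoted blank, which OLE DB reports as a syntax error. A parameterized command avoids that and lets you pass DBNull.Value explicitly for the NULL columns. A minimal sketch, assuming a trimmed-down column list and a placeholder connection string and method signature:

        // Hedged sketch: parameterized INSERT where optional fields fall back to DBNull.Value.
        using System;
        using System.Data.OleDb;

        class SubMasterAccountInsert
        {
            static void Insert(string connectionString, string smCode, string msName,
                               string sName, decimal? openBalRs, string sDesc)
            {
                using (var conn = new OleDbConnection(connectionString))
                using (var cmd = new OleDbCommand(
                    "INSERT INTO SubMaster_Account ([SMcode], [MSname], [Sname], [Openbalrs], [Sdesc]) " +
                    "VALUES (?, ?, ?, ?, ?)", conn))
                {
                    // OLE DB parameters are positional, so add them in the same order as the placeholders.
                    cmd.Parameters.AddWithValue("@p1", smCode);
                    cmd.Parameters.AddWithValue("@p2", msName);
                    cmd.Parameters.AddWithValue("@p3", sName);
                    cmd.Parameters.AddWithValue("@p4", (object)openBalRs ?? DBNull.Value);
                    cmd.Parameters.AddWithValue("@p5", (object)sDesc ?? DBNull.Value);
                    conn.Open();
                    cmd.ExecuteNonQuery();
                }
            }
        }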

    Read the article

  • Assigning a pointer variable to a const int in C++?

    - by John
    I'm wondering if anyone can explain the following to me: If I write int i = 0; float* pf = i; I get a compile error (gcc 4.2.1): error: invalid conversion from ‘int’ to ‘float*’ Makes sense - they are obviously two completely different types. But if instead I write const int i = 0; float* pf = i; it compiles without error. Why should the 'const' make a difference on the right-hand side of the assignment? Isn't part of the idea of the 'const' keyword to be able to enforce type constraints for constant values? Any explanation I have been able to come up with feels kind of bogus, and none of them explains why const int i = 1; float* pf = i; fails to compile. Can anyone offer an explanation?
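
    The likely explanation, under the C++03 rules that gcc 4.2.1 follows: a constant integral expression that evaluates to zero is a "null pointer constant", so the const-qualified zero converts to any pointer type, while a non-const int or a const int with value 1 does not. A small sketch illustrating the three cases:

        // Sketch of the C++03 null-pointer-constant rule (C++11 tightened this to literal 0 / nullptr).
        int main()
        {
            const int zero = 0;
            const int one  = 1;
            int       n    = 0;

            float* p1 = zero;    // OK in C++03: 'zero' is a constant expression equal to 0,
                                 // i.e. a null pointer constant, so p1 becomes a null pointer
            // float* p2 = one;  // error: constant, but not zero
            // float* p3 = n;    // error: zero, but not a constant expression
            return 0;
        }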

    Read the article

  • How can I treat a sequence value like a generated key?

    - by Jeff Knecht
    Here is my situation and my constraints: I am using Java 5, JDBC, and DB2 9.5 My database table contains a BIGINT value which represents the primary key. For various reasons that are too complicated to go into here, the way I insert records into the table is by executing an insert against a VIEW; an INSTEAD OF trigger retrieves the NEXT_VAL from a SEQUENCE and performs the INSERT into the target table. I can change the triggers, but I cannot change the underlying table or the general approach of inserting through the view. I want to retrieve the sequence value from JDBC as if it were a generated key. Question: How can I get access to the value pulled from the SEQUENCE. Is there some message I can fire within DB2 to float this sequence value back to the JDBC driver?
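
    One hedged workaround, assuming the trigger's NEXT VALUE call runs in the same session as the INSERT: read PREVIOUS VALUE FOR the sequence on the same connection immediately afterwards and treat that as the generated key. A Java 5 style sketch with placeholder view and sequence names (MY_VIEW, MY_SEQ); verify the PREVIOUS VALUE semantics for your setup:

        // Sketch only: insert through the view, then ask DB2 for the last sequence value in this session.
        PreparedStatement insert = conn.prepareStatement(
                "INSERT INTO MY_VIEW (COL_A, COL_B) VALUES (?, ?)");
        insert.setString(1, "a");
        insert.setString(2, "b");
        insert.executeUpdate();
        insert.close();

        Statement stmt = conn.createStatement();
        ResultSet rs = stmt.executeQuery(
                "SELECT PREVIOUS VALUE FOR MY_SEQ FROM SYSIBM.SYSDUMMY1");
        rs.next();
        long newKey = rs.getLong(1);   // use this like a generated key
        rs.close();
        stmt.close();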

    Read the article

  • How to implement collection with covariance when delegating to another collection for storage?

    - by memelet
    I'm trying to implement a type of SortedMap with extended semantics. I'm trying to delegate to SortedMap as the storage but can't get around the variance constraints: class IntervalMap[A, +B](implicit val ordering: Ordering[A]) //extends ... { var underlying = SortedMap.empty[A, List[B]] } Here is the error I get. I understand why I get the error (I understand variance). What I don't get is how to implement this type of delegation. And yes, the covariance on B is required. error: covariant type B occurs in contravariant position in type scala.collection.immutable.SortedMap[A,List[B]] of parameter of setter underlying_=
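
    One way out, sketched below under the assumption that the delegation can stay immutable: keep the backing map object-private (private[this] members are exempt from variance checks, so no public setter with B in contravariant position is generated) and widen B on update the way Map#updated does.

        // Minimal sketch, not the full SortedMap API.
        import scala.collection.immutable.SortedMap

        class IntervalMap[A, +B] private (
            private[this] val underlying: SortedMap[A, List[B]])(implicit val ordering: Ordering[A]) {

          def get(key: A): Option[List[B]] = underlying.get(key)

          // Updates widen the value type instead of mutating, which keeps +B legal.
          def updated[B1 >: B](key: A, value: B1): IntervalMap[A, B1] = {
            val values = value :: underlying.getOrElse(key, Nil)
            new IntervalMap[A, B1](underlying.updated(key, values))
          }
        }

        object IntervalMap {
          def empty[A, B](implicit ordering: Ordering[A]): IntervalMap[A, B] =
            new IntervalMap[A, B](SortedMap.empty[A, List[B]])
        }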

    Read the article

  • Grails searchable plugin with hasMany

    - by user2624442
    I am using grails searchable plugin to search my domain classes. However, I cannot yet search by my hasMany (skills and interests) fields even though they are of the simple type String. This is my domain class: class EmpactUser { static searchable = [except: ['dateCreated','password','enabled','accountExpired','accountLocked','passwordExpired']] String username String password boolean enabled = true boolean accountExpired boolean accountLocked boolean passwordExpired String email String firstName String lastName String address String phoneNumber String description byte[] avatar byte[] resume Date dateCreated static hasMany = [ skills : String, interests : String, // each user has the ability to list many skills and interests so that they can be matched with a project. ] static constraints = { username blank: false, unique: true password blank: false email email: true, blank: false firstName blank: false lastName blank: false description nullable: true address nullable: true avatar nullable: true, maxSize: 1024 * 1024 * 10 resume nullable: true, maxSize: 1024 * 1024 * 10 phoneNumber nullable: true, matches: "/[(][+]d{3}[)]d+/", maxSize: 30 } } This is the code I am using to search: def empactUserList = EmpactUser.search( searchQuery, [reload: false, result: "every", defaultOperator: "or"]) Am I missing something? Thanks, Alan.

    Read the article

  • Correct approach to validate attributes of an instance of class

    - by systempuntoout
    Having a simple Python class like this: class Spam(object): def __init__(self, description, value): self.description = description self.value = value Which is the correct approach to check these constraints: "description cannot be empty", "value must be greater than zero"? Should I: 1. validate the data before creating the Spam object? 2. check the data in the __init__ method? 3. create an is_valid method on the Spam class and call it with spam.is_valid()? 4. create an is_valid static method on the Spam class and call it with Spam.is_valid(description, value)? 5. check the data in the setter declarations? 6. ... Could you recommend a well-designed, Pythonic, not verbose (for a class with many attributes), elegant approach?
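
    A common answer is option 2: validate in __init__ so an invalid Spam can never exist, and reserve property setters (option 5) for attributes that can change after construction. A minimal sketch:

        # Sketch of constructor validation: construction either succeeds with valid state or fails loudly.
        class Spam(object):
            def __init__(self, description, value):
                if not description:
                    raise ValueError("description cannot be empty")
                if value <= 0:
                    raise ValueError("value must be greater than zero")
                self.description = description
                self.value = value

        spam = Spam("tinned meat", 3)   # ok
        # Spam("", 3)                   # raises ValueError: description cannot be empty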

    Read the article

  • Rails Routes Mappings

    - by rdasxy
    I'm a rails newbie, and I have a controller called resource_links that I've mapped to resources: resources :resources, :as => :resource_links, :controller => :resource_links And this works (basically /resources works as /resource_links). However, trying to go to /resources/tags does not work. To get around this, I added more mappings as: match 'resource_links/tag/:tag(.:format)' => 'resource_links#tag', :via => :get, :as => 'resource_links_tagged', :constraints => {:tag => /.*/} match 'resource_links/tags' => 'resource_links#tags', :via => :get, :as => 'resource_links_tags' Is there any way I can get /resources/tags to be mapped to /resource_links/tag?
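
    A hedged sketch, assuming Rails 3 routing: declaring the extra actions as collection routes inside the same resources block keeps them under /resources while still dispatching to ResourceLinksController, so the separate match lines are no longer needed.

        # Sketch only - route names and helpers may need adjusting for your app.
        resources :resources, :as => :resource_links, :controller => :resource_links do
          collection do
            get :tags                                    # GET /resources/tags     -> resource_links#tags
            get 'tag/:tag', :action => :tag,
                :constraints => { :tag => /.*/ }         # GET /resources/tag/:tag -> resource_links#tag
          end
        end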

    Read the article

  • Mysql Constraint problem

    - by Bramjam
    This is my table: /* oefenreeks leerplan */ CREATE TABLE leerplan_oefenreeks ( leerplan_oefenreeks_id INT PRIMARY KEY AUTO_INCREMENT NOT NULL, leerplan_id INT NOT NULL, oefenreeks_id INT NOT NULL, plaats INT NOT NULL ); /* fk */ ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT fk_leerp_oefenr_leerplan FOREIGN KEY(leerplan_id) REFERENCES leerplan (leerplan_id) ON DELETE CASCADE; ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT fk_leerp_oefenr_oefenreeks FOREIGN KEY(oefenreeks_id) REFERENCES oefenreeks (oefenreeks_id) ON DELETE CASCADE; /* unique s *//*when I execute the next line, my fk_leerp_oefenr_leerplan constraint vanishes/disappears*/ ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT un_leerp_oefenr UNIQUE(leerplan_id, oefenreeks_id); ALTER TABLE leerplan_oefenreeks ADD CONSTRAINT un_leerp_oefenr_plaats UNIQUE(leerplan_id, plaats); When I go and check, only 3 constraints exist (fk_leerp_oefenr_leerplan is gone). I don't understand why this happens; please tell me (if you need more info, just ask ;)
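
    A hedged explanation, since the exact behaviour depends on the MySQL/InnoDB version: the foreign key itself usually survives, but the automatically created index that carried the name fk_leerp_oefenr_leerplan is dropped once the composite UNIQUE(leerplan_id, oefenreeks_id) index can serve the foreign key, so tools that list indexes make the constraint look gone. Two checks that show where it went:

        -- The FK should still be listed in the table definition and in information_schema:
        SHOW CREATE TABLE leerplan_oefenreeks;

        SELECT constraint_name, constraint_type
        FROM   information_schema.table_constraints
        WHERE  table_name = 'leerplan_oefenreeks';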

    Read the article

  • Is this syntactically correct?

    - by Borrito
    I have code in one of the source files as follows: #include <stdio.h> #include <math.h> int a ; int b = 256 ; int c = 16 ; int d = 4 ; int main() { if ((d <= (b) && (d == ( c / sizeof(a)))) { printf("%d",sizeof(a) ); } return 0; } I have removed the casts and simplified the data names. The sizeof(a) can be taken as 4. I want to know if the if syntax is valid, and if so, why doesn't it execute? PS: I haven't sat down on this for long due to time constraints. Pardon me if you find a childish error in the code.
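
    As posted, it is not valid syntax: the condition has one more opening parenthesis than closing ones, so the compiler rejects it before anything can run. Balanced out, the test is d <= b && d == c / sizeof(a), which is 4 <= 256 && 4 == 16/4 and therefore true. A corrected sketch (also using %zu, the portable format for sizeof's size_t result):

        #include <stdio.h>

        int a;
        int b = 256;
        int c = 16;
        int d = 4;

        int main(void)
        {
            /* Parentheses balanced; the body now runs and prints 4 on typical platforms. */
            if (d <= b && d == c / sizeof(a)) {
                printf("%zu\n", sizeof(a));
            }
            return 0;
        }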

    Read the article

  • NLog Exception Details Renderer

    - by jtimperley
    Originally posted on: http://geekswithblogs.net/jtimperley/archive/2013/07/28/nlog-exception-details-renderer.aspxI recently switch from Microsoft's Enterprise Library Logging block to NLog.  In my opinion, NLog offers a simpler and much cleaner configuration section with better use of placeholders, complemented by custom variables. Despite this, I found one deficiency in my migration; I had lost the ability to simply render all details of an exception into our logs and notification emails. This is easily remedied by implementing a custom layout renderer. Start by extending 'NLog.LayoutRenderers.LayoutRenderer' and overriding the 'Append' method. using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails";   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { // Todo: Append details to StringBuilder } }   Now that we have a base layout renderer, we simply need to add the formatting logic to add exception details as well as inner exception details. This is done using reflection with some simple filtering for the properties that are already being rendered. I have added an additional 'Register' method, allowing the definition to be registered in code, rather than in configuration files. This complements by 'LogWrapper' class which standardizes writing log entries throughout my applications. using System; using System.Collections.Generic; using System.Linq; using System.Reflection; using System.Text; using NLog; using NLog.Config; using NLog.LayoutRenderers;   [ThreadAgnostic] [LayoutRenderer(Name)] public sealed class ExceptionDetailsRenderer : LayoutRenderer { public const string Name = "exceptiondetails"; private const string _Spacer = "======================================"; private List<string> _FilteredProperties;   private List<string> FilteredProperties { get { if (_FilteredProperties == null) { _FilteredProperties = new List<string> { "StackTrace", "HResult", "InnerException", "Data" }; }   return _FilteredProperties; } }   public bool LogNulls { get; set; }   protected override void Append(StringBuilder builder, LogEventInfo logEvent) { Append(builder, logEvent.Exception, false); }   private void Append(StringBuilder builder, Exception exception, bool isInnerException) { if (exception == null) { return; }   builder.AppendLine();   var type = exception.GetType(); if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception Details:") .AppendLine(_Spacer) .Append("Exception Type: ") .AppendLine(type.ToString());   var bindingFlags = BindingFlags.Instance | BindingFlags.Public; var properties = type.GetProperties(bindingFlags); foreach (var property in properties) { var propertyName = property.Name; var isFiltered = FilteredProperties.Any(filter => String.Equals(propertyName, filter, StringComparison.InvariantCultureIgnoreCase)); if (isFiltered) { continue; }   var propertyValue = property.GetValue(exception, bindingFlags, null, null, null); if (propertyValue == null && !LogNulls) { continue; }   var valueText = propertyValue != null ? 
propertyValue.ToString() : "NULL"; builder.Append(propertyName) .Append(": ") .AppendLine(valueText); }   AppendStackTrace(builder, exception.StackTrace, isInnerException); Append(builder, exception.InnerException, true); }   private void AppendStackTrace(StringBuilder builder, string stackTrace, bool isInnerException) { if (String.IsNullOrEmpty(stackTrace)) { return; }   builder.AppendLine();   if (isInnerException) { builder.Append("Inner "); }   builder.AppendLine("Exception StackTrace:") .AppendLine(_Spacer) .AppendLine(stackTrace); }   public static void Register() { Type definitionType; var layoutRenderers = ConfigurationItemFactory.Default.LayoutRenderers; if (layoutRenderers.TryGetDefinition(Name, out definitionType)) { return; }   layoutRenderers.RegisterDefinition(Name, typeof(ExceptionDetailsRenderer)); LogManager.ReconfigExistingLoggers(); } } For brevity I have removed the Trace, Debug, Warn, and Fatal methods. They are modelled after the Info methods. As mentioned above, note how the log wrapper automatically registers our custom layout renderer reducing the amount of application configuration required. using System; using NLog;   public static class LogWrapper { static LogWrapper() { ExceptionDetailsRenderer.Register(); }   #region Log Methods   public static void Info(object toLog) { Log(toLog, LogLevel.Info); }   public static void Info(string messageFormat, params object[] parameters) { Log(messageFormat, parameters, LogLevel.Info); }   public static void Error(object toLog) { Log(toLog, LogLevel.Error); }   public static void Error(string message, Exception exception) { Log(message, exception, LogLevel.Error); }   private static void Log(string messageFormat, object[] parameters, LogLevel logLevel) { string message = parameters.Length == 0 ? messageFormat : string.Format(messageFormat, parameters); Log(message, (Exception)null, logLevel); }   private static void Log(object toLog, LogLevel logLevel, LogType logType = LogType.General) { if (toLog == null) { throw new ArgumentNullException("toLog"); }   if (toLog is Exception) { var exception = toLog as Exception; Log(exception.Message, exception, logLevel, logType); } else { var message = toLog.ToString(); Log(message, null, logLevel, logType); } }   private static void Log(string message, Exception exception, LogLevel logLevel, LogType logType = LogType.General) { if (exception == null && String.IsNullOrEmpty(message)) { return; }   var logger = GetLogger(logType); // Note: Using the default constructor doesn't set the current date/time var logInfo = new LogEventInfo(logLevel, logger.Name, message); logInfo.Exception = exception; logger.Log(logInfo); }   private static Logger GetLogger(LogType logType) { var loggerName = logType.ToString(); return LogManager.GetLogger(loggerName); }   #endregion   #region LogType private enum LogType { General } #endregion } The following configuration is similar to what is provided for each of my applications. The 'application' variable is all that differentiates the various applications in all of my environments, the rest has been standardized. Depending on your needs to tweak this configuration while developing and debugging, this section could easily be pushed back into code similar to the registering of our custom layout renderer.   
<?xml version="1.0"?>   <configuration> <configSections> <section name="nlog" type="NLog.Config.ConfigSectionHandler, NLog"/> </configSections> <nlog xmlns="http://www.nlog-project.org/schemas/NLog.xsd" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"> <variable name="application" value="Example"/> <targets> <target type="EventLog" name="EventLog" source="${application}" log="${application}" layout="${message}${onexception: ${newline}${exceptiondetails}}"/> <target type="Mail" name="Email" smtpServer="smtp.example.local" from="[email protected]" to="[email protected]" subject="(${machinename}) ${application}: ${level}" body="Machine: ${machinename}${newline}Timestamp: ${longdate}${newline}Level: ${level}${newline}Message: ${message}${onexception: ${newline}${exceptiondetails}}"/> </targets> <rules> <logger name="*" minlevel="Debug" writeTo="EventLog" /> <logger name="*" minlevel="Error" writeTo="Email" /> </rules> </nlog> </configuration>   Now go forward, create your custom exceptions without concern for including their custom properties in your exception logs and notifications.

    Read the article

  • LINQ to SQL and missing Many to Many EntityRefs

    - by Rick Strahl
    Ran into an odd behavior today with a many to many mapping of one of my tables in LINQ to SQL. Many to many mappings aren’t transparent in LINQ to SQL and it maps the link table the same way the SQL schema has it when creating one. In other words LINQ to SQL isn’t smart about many to many mappings and just treats it like the 3 underlying tables that make up the many to many relationship. Iain Galloway has a nice blog entry about Many to Many relationships in LINQ to SQL. I can live with that – it’s not really difficult to deal with this arrangement once mapped, especially when reading data back. Writing is a little more difficult as you do have to insert into two entities for new records, but nothing that can’t be handled in a small business object method with a few lines of code. When I created a database I’ve been using to experiment around with various different OR/Ms recently I found that for some reason LINQ to SQL was completely failing to map even to the linking table. As it turns out there’s a good reason why it fails, can you spot it below? (read on :-}) Here is the original database layout: There’s an items table, a category table and a link table that holds only the foreign keys to the Items and Category tables for a typical M->M relationship. When these three tables are imported into the model the *look* correct – I do get the relationships added (after modifying the entity names to strip the prefix): The relationship looks perfectly fine, both in the designer as well as in the XML document: <Table Name="dbo.wws_Item_Categories" Member="ItemCategories"> <Type Name="ItemCategory"> <Column Name="ItemId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Column Name="CategoryId" Type="System.Guid" DbType="uniqueidentifier NOT NULL" CanBeNull="false" /> <Association Name="ItemCategory_Category" Member="Categories" ThisKey="CategoryId" OtherKey="Id" Type="Category" /> <Association Name="Item_ItemCategory" Member="Item" ThisKey="ItemId" OtherKey="Id" Type="Item" IsForeignKey="true" /> </Type> </Table> <Table Name="dbo.wws_Categories" Member="Categories"> <Type Name="Category"> <Column Name="Id" Type="System.Guid" DbType="UniqueIdentifier NOT NULL" IsPrimaryKey="true" IsDbGenerated="true" CanBeNull="false" /> <Column Name="ParentId" Type="System.Guid" DbType="UniqueIdentifier" CanBeNull="true" /> <Column Name="CategoryName" Type="System.String" DbType="NVarChar(150)" CanBeNull="true" /> <Column Name="CategoryDescription" Type="System.String" DbType="NVarChar(MAX)" CanBeNull="true" /> <Column Name="tstamp" AccessModifier="Internal" Type="System.Data.Linq.Binary" DbType="rowversion" CanBeNull="true" IsVersion="true" /> <Association Name="ItemCategory_Category" Member="ItemCategory" ThisKey="Id" OtherKey="CategoryId" Type="ItemCategory" IsForeignKey="true" /> </Type> </Table> However when looking at the code generated these navigation properties (also on Item) are completely missing: [global::System.Data.Linq.Mapping.TableAttribute(Name="dbo.wws_Item_Categories")] [global::System.Runtime.Serialization.DataContractAttribute()] public partial class ItemCategory : Westwind.BusinessFramework.EntityBase { private System.Guid _ItemId; private System.Guid _CategoryId; public ItemCategory() { } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_ItemId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=1)] public System.Guid ItemId { get { return this._ItemId; } set { if ((this._ItemId != value)) { 
this._ItemId = value; } } } [global::System.Data.Linq.Mapping.ColumnAttribute(Storage="_CategoryId", DbType="uniqueidentifier NOT NULL")] [global::System.Runtime.Serialization.DataMemberAttribute(Order=2)] public System.Guid CategoryId { get { return this._CategoryId; } set { if ((this._CategoryId != value)) { this._CategoryId = value; } } } } Notice that the Item and Category association properties which should be EntityRef properties are completely missing. They’re there in the model, but the generated code – not so much. So what’s the problem here? The problem – it appears – is that LINQ to SQL requires primary keys on all entities it tracks. In order to support tracking – even of the link table entity – the link table requires a primary key. Real obvious ain’t it, especially since the designer happily lets you import the table and even shows the relationship and implicitly the related properties. Adding an Id field as a Pk to the database and then importing results in this model layout: which properly generates the Item and Category properties into the link entity. It’s ironic that LINQ to SQL *requires* the PK in the middle – the Entity Framework requires that a link table have *only* the two foreign key fields in a table in order to recognize a many to many relation. EF actually handles the M->M relation directly without the intermediate link entity unlike LINQ to SQL. [updated from comments – 12/24/2009] Another approach is to set up both ItemId and CategoryId in the database which shows up in LINQ to SQL like this: This also work in creating the Category and Item fields in the ItemCategory entity. Ultimately this is probably the best approach as it also guarantees uniqueness of the keys and so helps in database integrity. It took me a while to figure out WTF was going on here – lulled by the designer to think that the properties should be when they were not. It’s actually a well documented feature of L2S that each entity in the model requires a Pk but of course that’s easy to miss when the model viewer shows it to you and even the underlying XML model shows the Associations properly. This is one of the issue with L2S of course – you have to play by its rules and once you hit one of those rules there’s no way around them – you’re stuck with what it requires which in this case meant changing the database.© Rick Strahl, West Wind Technologies, 2005-2010Posted in ADO.NET  LINQ  
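
    For reference, a hedged sketch of the approach the post settles on - making the two foreign key columns a composite primary key on the link table, which both lets LINQ to SQL track the entity and enforces uniqueness of each Item/Category pair:

        -- Sketch only; the constraint name is illustrative.
        ALTER TABLE dbo.wws_Item_Categories
            ADD CONSTRAINT PK_wws_Item_Categories
            PRIMARY KEY (ItemId, CategoryId);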

    Read the article

  • Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 1

    - by rajbk
    The Open Data Protocol, referred to as OData, is a new data-sharing standard that breaks down silos and fosters an interoperative ecosystem for data consumers (clients) and producers (services) that is far more powerful than currently possible. It enables more applications to make sense of a broader set of data, and helps every data service and client add value to the whole ecosystem. WCF Data Services (previously known as ADO.NET Data Services), then, was the first Microsoft technology to support the Open Data Protocol in Visual Studio 2008 SP1. It provides developers with client libraries for .NET, Silverlight, AJAX, PHP and Java. Microsoft now also supports OData in SQL Server 2008 R2, Windows Azure Storage, Excel 2010 (through PowerPivot), and SharePoint 2010. Many other other applications in the works. * This post walks you through how to create an OData feed, define a shape for the data and pre-filter the data using Visual Studio 2010, WCF Data Services and the Entity Framework. A sample project is attached at the bottom of Part 2 of this post. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Create the Web Application File –› New –› Project, Select “ASP.NET Empty Web Application” Add the Entity Data Model Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “ADO.NET Entity Data Model” under "Data”. Name the Model “Northwind” and click “Add”.   In the “Choose Model Contents”, select “Generate Model From Database” and click “Next”   Define a connection to your database containing the Northwind database in the next screen. We are going to expose the Products table through our OData feed. Select “Products” in the “Choose your Database Object” screen.   Click “Finish”. We are done creating our Entity Data Model. Save the Northwind.edmx file created. Add the WCF Data Service Right click on the Web Application in the Solution Explorer and select “Add New Item..” Select “WCF Data Service” from the list and call the service “DataService” (creative, huh?). Click “Add”.   Enable Access to the Data Service Open the DataService.svc.cs class. The class is well commented and instructs us on the next steps. public class DataService : DataService< /* TODO: put your data source class name here */ > { // This method is called only once to initialize service-wide policies. public static void InitializeService(DataServiceConfiguration config) { // TODO: set rules to indicate which entity sets and service operations are visible, updatable, etc. // Examples: // config.SetEntitySetAccessRule("MyEntityset", EntitySetRights.AllRead); // config.SetServiceOperationAccessRule("MyServiceOperation", ServiceOperationRights.All); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } Replace the comment that starts with “/* TODO:” with “NorthwindEntities” (the entity container name of the Model we created earlier).  WCF Data Services is initially locked down by default, FTW! No data is exposed without you explicitly setting it. You have explicitly specify which Entity sets you wish to expose and what rights are allowed by using the SetEntitySetAccessRule. The SetServiceOperationAccessRule on the other hand sets rules for a specified operation. Let us define an access rule to expose the Products Entity we created earlier. We use the EnititySetRights.AllRead since we want to give read only access. Our modified code is shown below. 
public class DataService : DataService<NorthwindEntities> { public static void InitializeService(DataServiceConfiguration config) { config.SetEntitySetAccessRule("Products", EntitySetRights.AllRead); config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2; } } We are done setting up our ODataFeed! Compile your project. Right click on DataService.svc and select “View in Browser” to see the OData feed. To view the feed in IE, you must make sure that "Feed Reading View" is turned off. You set this under Tools -› Internet Options -› Content tab.   If you navigate to “Products”, you should see the Products feed. Note also that URIs are case sensitive. ie. Products work but products doesn’t.   Filtering our data OData has a set of system query operations you can use to perform common operations against data exposed by the model. For example, to see only Products in CategoryID 2, we can use the following request: /DataService.svc/Products?$filter=CategoryID eq 2 At the time of this writing, supported operations are $orderby, $top, $skip, $filter, $expand, $format†, $select, $inlinecount. Pre-filtering our data using Query Interceptors The Product feed currently returns all Products. We want to change that so that it contains only Products that have not been discontinued. WCF introduces the concept of interceptors which allows us to inject custom validation/policy logic into the request/response pipeline of a WCF data service. We will use a QueryInterceptor to pre-filter the data so that it returns only Products that are not discontinued. To create a QueryInterceptor, write a method that returns an Expression<Func<T, bool>> and mark it with the QueryInterceptor attribute as shown below. [QueryInterceptor("Products")] public Expression<Func<Product, bool>> OnReadProducts() { return o => o.Discontinued == false; } Viewing the feed after compilation will only show products that have not been discontinued. We also confirm this by looking at the WHERE clause in the SQL generated by the entity framework. SELECT [Extent1].[ProductID] AS [ProductID], ... ... [Extent1].[Discontinued] AS [Discontinued] FROM [dbo].[Products] AS [Extent1] WHERE 0 = [Extent1].[Discontinued] Other examples of Query/Change interceptors can be seen here including an example to filter data based on the identity of the authenticated user. We are done pre-filtering our data. In the next part of this post, we will see how to shape our data. Pre-filtering and shaping OData feeds using WCF Data Services and the Entity Framework - Part 2 Foot Notes * http://msdn.microsoft.com/en-us/data/aa937697.aspx † $format did not work for me. The way to get a Json response is to include the following in the  request header “Accept: application/json, text/javascript, */*” when making the request. This is easily done with most JavaScript libraries.

    Read the article

  • The new workflow management of Oracle´s Hyperion Planning: Define more details with Planning Unit Hierarchies and Promotional Paths

    - by Alexandra Georgescu
    After having been almost unchanged for several years, starting with the 11.1.2 release of Oracle´s Hyperion Planning the Process Management has not only got a new name: “Approvals” now is offering the possibility to further split Planning Units (comprised of a unique Scenario-Version-Entity combination) into more detailed combinations along additional secondary dimensions, a so called Planning Unit Hierarchy, and also to pre-define a path of planners, reviewers and approvers, called Promotional Path. I´d like to introduce you to changes and enhancements in this new process management and arouse your curiosity for checking out more details on it. One reason of using the former process management in Planning was to limit data entry rights to one person at a time based on the assignment of a planning unit. So the lowest level of granularity for this assignment was, for a given Scenario-Version combination, the individual entity. Even if in many cases one person wasn´t responsible for all data being entered into that entity, but for only part of it, it was not possible to split the ownership along another additional dimension, for example by assigning ownership to different accounts at the same time. By defining a so called Planning Unit Hierarchy (PUH) in Approvals this gap is now closed. Complementing new Shared Services roles for Planning have been created in order to manage set up and use of Approvals: The Approvals Administrator consisting of the following roles: Approvals Ownership Assigner, who assigns owners and reviewers to planning units for which Write access is assigned (including Planner responsibilities). Approvals Supervisor, who stops and starts planning units and takes any action on planning units for which Write access is assigned. Approvals Process Designer, who can modify planning unit hierarchy secondary dimensions and entity members for which Write access is assigned, can also modify scenarios and versions that are assigned to planning unit hierarchies and can edit validation rules on data forms for which access is assigned. (this includes as well Planner and Ownership Assigner responsibilities) Set up of a Planning Unit Hierarchy is done under the Administration menu, by selecting Approvals, then Planning Unit Hierarchy. Here you create new PUH´s or edit existing ones. The following window displays: After providing a name and an optional description, a pre-selection of entities can be made for which the PUH will be defined. Available options are: All, which pre-selects all entities to be included for the definitions on the subsequent tabs None, manual entity selections will be made subsequently Custom, which offers the selection for an ancestor and the relative generations, that should be included for further definitions. Finally a pattern needs to be selected, which will determine the general flow of ownership: Free-form, uses the flow/assignment of ownerships according to Planning releases prior to 11.1.2 In Bottom-up, data input is done at the leaf member level. Ownership follows the hierarchy of approval along the entity dimension, including refinements using a secondary dimension in the PUH, amended by defined additional reviewers in the promotional path. Distributed, uses data input at the leaf level, while ownership starts at the top level and then is distributed down the organizational hierarchy (entities). After ownership reaches the lower levels, budgets are submitted back to the top through the approval process. 
Proceeding to the next step, now a secondary dimension and the respective members from that dimension might be selected, in order to create more detailed combinations underneath each entity. After selecting the Dimension and a Parent Member, the definition of a Relative Generation below this member assists in populating the field for Selected Members, while the Count column shows the number of selected members. For refining this list, you might click on the icon right beside the selected member field and use the check-boxes in the appearing list for deselecting members. -------------------------------------------------------------------------------------------------------- TIP: In order to reduce maintenance of the PUH due to changes in the dimensions included (members added, moved or removed) you should consider to dynamically link those dimensions in the PUH with the dimension hierarchies in the planning application. For secondary dimensions this is done using the check-boxes in the Auto Include column. For the primary dimension, the respective selection criteria is applied by right-clicking the name of an entity activated as planning unit, then selecting an item of the shown list of include or exclude options (children, descendants, etc.). Anyway in order to apply dimension changes impacting the PUH a synchronization must be run. If this is really necessary or not is shown on the first screen after selecting from the menu Administration, then Approvals, then Planning Unit Hierarchy: under Synchronized you find the statuses Yes, No or Locked, where the last one indicates, that another user is just changing or synchronizing the PUH. Select one of the not synchronized PUH´s (status No) and click the Synchronize option in order to execute. -------------------------------------------------------------------------------------------------------- In the next step owners and reviewers are assigned to the PUH. Using the icons with the magnifying glass right besides the columns for Owner and Reviewer the respective assignments can be made in the ordermthat you want them to review the planning unit. While it is possible to assign only one owner per entity or combination of entity+ member of the secondary dimension, the selection for reviewers might consist of more than one person. The complete Promotional Path, including the defined owners and reviewers for the entity parents, can be shown by clicking the icon. In addition optional users might be defined for being notified about promotions for a planning unit. -------------------------------------------------------------------------------------------------------- TIP: Reviewers cannot change data, but can only review data according to their data access permissions and reject or promote planning units. -------------------------------------------------------------------------------------------------------- In order to complete your PUH definitions click Finish - this saves the PUH and closes the window. As a final step, before starting the approvals process, you need to assign the PUH to the Scenario-Version combination for which it should be used. From the Administration menu select Approvals, then Scenario and Version Assignment. Expand the PUH in order to see already existing assignments. Under Actions click the add icon and select scenarios and versions to be assigned. If needed, click the remove icon in order to delete entries. After these steps, set up is completed for starting the approvals process. 
Start, stop and control of the approvals process is now done under the Tools menu, and then Manage Approvals. The new PUH feature is complemented by various additional settings and features; some of them at least should be mentioned here: Export/Import of PUHs: Out of Office agent: Validation Rules changing promotional/approval path if violated (including the use of User-defined Attributes (UDAs)): And various new and helpful reviewer actions with corresponding approval states. About the Author: Bernhard Kinkel started working for Hyperion Solutions as a Presales Consultant and Consultant in 1998 and moved to Hyperion Education Services in 1999. He joined Oracle University in 2007 where he is a Principal Education Consultant. Based on these many years of working with Hyperion products he has detailed product knowledge across several versions. He delivers both classroom and live virtual courses. His areas of expertise are Oracle/Hyperion Essbase, Oracle Hyperion Planning and Hyperion Web Analysis.

    Read the article

  • A Guided Tour of Complexity

    - by JoshReuben
    I just re-read Complexity – A Guided Tour by Melanie Mitchell , protégé of Douglas Hofstadter ( author of “Gödel, Escher, Bach”) http://www.amazon.com/Complexity-Guided-Tour-Melanie-Mitchell/dp/0199798109/ref=sr_1_1?ie=UTF8&qid=1339744329&sr=8-1 here are some notes and links:   Evolved from Cybernetics, General Systems Theory, Synergetics some interesting transdisciplinary fields to investigate: Chaos Theory - http://en.wikipedia.org/wiki/Chaos_theory – small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes for chaotic systems, rendering long-term prediction impossible. System Dynamics / Cybernetics - http://en.wikipedia.org/wiki/System_Dynamics – study of how feedback changes system behavior Network Theory - http://en.wikipedia.org/wiki/Network_theory – leverage Graph Theory to analyze symmetric  / asymmetric relations between discrete objects Algebraic Topology - http://en.wikipedia.org/wiki/Algebraic_topology – leverage abstract algebra to analyze topological spaces There are limits to deterministic systems & to computation. Chaos Theory definitely applies to training an ANN (artificial neural network) – different weights will emerge depending upon the random selection of the training set. In recursive Non-Linear systems http://en.wikipedia.org/wiki/Nonlinear_system – output is not directly inferable from input. E.g. a Logistic map: Xt+1 = R Xt(1-Xt) Different types of bifurcations, attractor states and oscillations may occur – e.g. a Lorenz Attractor http://en.wikipedia.org/wiki/Lorenz_system Feigenbaum Constants http://en.wikipedia.org/wiki/Feigenbaum_constants express ratios in a bifurcation diagram for a non-linear map – the convergent limit of R (the rate of period-doubling bifurcations) is 4.6692016 Maxwell’s Demon - http://en.wikipedia.org/wiki/Maxwell%27s_demon - the Second Law of Thermodynamics has only a statistical certainty – the universe (and thus information) tends towards entropy. While any computation can theoretically be done without expending energy, with finite memory, the act of erasing memory is permanent and increases entropy. Life & thought is a counter-example to the universe’s tendency towards entropy. Leo Szilard and later Claude Shannon came up with the Information Theory of Entropy - http://en.wikipedia.org/wiki/Entropy_(information_theory) whereby Shannon entropy quantifies the expected value of a message’s information in bits in order to determine channel capacity and leverage Coding Theory (compression analysis). Ludwig Boltzmann came up with Statistical Mechanics - http://en.wikipedia.org/wiki/Statistical_mechanics – whereby our Newtonian perception of continuous reality is a probabilistic and statistical aggregate of many discrete quantum microstates. This is relevant for Quantum Information Theory http://en.wikipedia.org/wiki/Quantum_information and the Physics of Information - http://en.wikipedia.org/wiki/Physical_information. Hilbert’s Problems http://en.wikipedia.org/wiki/Hilbert's_problems pondered whether mathematics is complete, consistent, and decidable (the Decision Problem – http://en.wikipedia.org/wiki/Entscheidungsproblem – is there always an algorithm that can determine whether a statement is true).  Godel’s Incompleteness Theorems http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems  proved that mathematics cannot be both complete and consistent (e.g. “This statement is not provable”). 
Turing through the use of Turing Machines (http://en.wikipedia.org/wiki/Turing_machine symbol processors that can prove mathematical statements) and Universal Turing Machines (http://en.wikipedia.org/wiki/Universal_Turing_machine Turing Machines that can emulate other any Turing Machine via accepting programs as well as data as input symbols) that computation is limited by demonstrating the Halting Problem http://en.wikipedia.org/wiki/Halting_problem (is is not possible to know when a program will complete – you cannot build an infinite loop detector). You may be used to thinking of 1 / 2 / 3 dimensional systems, but Fractal http://en.wikipedia.org/wiki/Fractal systems are defined by self-similarity & have non-integer Hausdorff Dimensions !!!  http://en.wikipedia.org/wiki/List_of_fractals_by_Hausdorff_dimension – the fractal dimension quantifies the number of copies of a self similar object at each level of detail – eg Koch Snowflake - http://en.wikipedia.org/wiki/Koch_snowflake Definitions of complexity: size, Shannon entropy, Algorithmic Information Content (http://en.wikipedia.org/wiki/Algorithmic_information_theory - size of shortest program that can generate a description of an object) Logical depth (amount of info processed), thermodynamic depth (resources required). Complexity is statistical and fractal. John Von Neumann’s other machine was the Self-Reproducing Automaton http://en.wikipedia.org/wiki/Self-replicating_machine  . Cellular Automata http://en.wikipedia.org/wiki/Cellular_automaton are alternative form of Universal Turing machine to traditional Von Neumann machines where grid cells are locally synchronized with their neighbors according to a rule. Conway’s Game of Life http://en.wikipedia.org/wiki/Conway's_Game_of_Life demonstrates various emergent constructs such as “Glider Guns” and “Spaceships”. Cellular Automatons are not practical because logical ops require a large number of cells – wasteful & inefficient. There are no compilers or general program languages available for Cellular Automatons (as far as I am aware). Random Boolean Networks http://en.wikipedia.org/wiki/Boolean_network are extensions of cellular automata where nodes are connected at random (not to spatial neighbors) and each node has its own rule –> they demonstrate the emergence of complex  & self organized behavior. Stephen Wolfram’s (creator of Mathematica, so give him the benefit of the doubt) New Kind of Science http://en.wikipedia.org/wiki/A_New_Kind_of_Science proposes the universe may be a discrete Finite State Automata http://en.wikipedia.org/wiki/Finite-state_machine whereby reality emerges from simple rules. I am 2/3 through this book. It is feasible that the universe is quantum discrete at the plank scale and that it computes itself – Digital Physics: http://en.wikipedia.org/wiki/Digital_physics – a simulated reality? Anyway, all behavior is supposedly derived from simple algorithmic rules & falls into 4 patterns: uniform , nested / cyclical, random (Rule 30 http://en.wikipedia.org/wiki/Rule_30) & mixed (Rule 110 - http://en.wikipedia.org/wiki/Rule_110 localized structures – it is this that is interesting). interaction between colliding propagating signal inputs is then information processing. Wolfram proposes the Principle of Computational Equivalence - http://mathworld.wolfram.com/PrincipleofComputationalEquivalence.html - all processes that are not obviously simple can be viewed as computations of equivalent sophistication. 
Meaning in information may emerge from analogy & conceptual slippages – see the CopyCat program: http://cognitrn.psych.indiana.edu/rgoldsto/courses/concepts/copycat.pdf Scale Free Networks http://en.wikipedia.org/wiki/Scale-free_network have a distribution governed by a Power Law (http://en.wikipedia.org/wiki/Power_law - much more common than Normal Distribution). They are characterized by hubs (resilience to random deletion of nodes), heterogeneity of degree values, self similarity, & small world structure. They grow via preferential attachment http://en.wikipedia.org/wiki/Preferential_attachment – tipping points triggered by positive feedback loops. 2 theories of cascading system failures in complex systems are Self-Organized Criticality http://en.wikipedia.org/wiki/Self-organized_criticality and Highly Optimized Tolerance http://en.wikipedia.org/wiki/Highly_optimized_tolerance. Computational Mechanics http://en.wikipedia.org/wiki/Computational_mechanics – use of computational methods to study phenomena governed by the principles of mechanics. This book is a great intuition pump, but does not cover the more mathematical subject of Computational Complexity Theory – http://en.wikipedia.org/wiki/Computational_complexity_theory I am currently reading this book on this subject: http://www.amazon.com/Computational-Complexity-Christos-H-Papadimitriou/dp/0201530821/ref=pd_sim_b_1   stay tuned for that review!
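
    As a small illustration of the logistic map mentioned above (a sketch, not from the book): below roughly R = 3.57 the orbit settles into a fixed point or a periodic cycle, and past the period-doubling cascade it becomes chaotic and sensitive to the starting value.

        # Iterate x_{t+1} = R * x_t * (1 - x_t) and look at the tail of the orbit.
        def logistic_orbit(r, x0=0.5, steps=200):
            xs = [x0]
            for _ in range(steps):
                xs.append(r * xs[-1] * (1.0 - xs[-1]))
            return xs

        print(logistic_orbit(2.8)[-3:])   # converges to a single fixed point (~0.643)
        print(logistic_orbit(3.2)[-4:])   # settles into a period-2 oscillation
        print(logistic_orbit(3.9)[-4:])   # chaotic regime, no settling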

    Read the article

  • What's up with LDoms: Part 4 - Virtual Networking Explained

    - by Stefan Hinker
    I'm back from my summer break (and some pressing business that kept me away from this), ready to continue with Oracle VM Server for SPARC ;-) In this article, we'll have a closer look at virtual networking.  Basic connectivity as we've seen it in the first, simple example, is easy enough.  But there are numerous options for the virtual switches and virtual network ports, which we will discuss in more detail now.   In this section, we will concentrate on virtual networking - the capabilities of virtual switches and virtual network ports - only.  Other options involving hardware assignment or redundancy will be covered in separate sections later on. There are two basic components involved in virtual networking for LDoms: Virtual switches and virtual network devices.  The virtual switch should be seen just like a real ethernet switch.  It "runs" in the service domain and moves ethernet packets back and forth.  A virtual network device is plumbed in the guest domain.  It corresponds to a physical network device in the real world.  There, you'd be plugging a cable into the network port, and plug the other end of that cable into a switch.  In the virtual world, you do the same:  You create a virtual network device for your guest and connect it to a virtual switch in a service domain.  The result works just like in the physical world, the network device sends and receives ethernet packets, and the switch does all those things ethernet switches tend to do. If you look at the reference manual of Oracle VM Server for SPARC, there are numerous options for virtual switches and network devices.  Don't be confused, it's rather straight forward, really.  Let's start with the simple case, and work our way to some more sophisticated options later on.  In many cases, you'll want to have several guests that communicate with the outside world on the same ethernet segment.  In the real world, you'd connect each of these systems to the same ethernet switch.  So, let's do the same thing in the virtual world: root@sun # ldm add-vsw net-dev=nxge2 admin-vsw primary root@sun # ldm add-vnet admin-net admin-vsw mars root@sun # ldm add-vnet admin-net admin-vsw venus We've just created a virtual switch called "admin-vsw" and connected it to the physical device nxge2.  In the physical world, we'd have powered up our ethernet switch and installed a cable between it and our big enterprise datacenter switch.  We then created a virtual network interface for each one of the two guest systems "mars" and "venus" and connected both to that virtual switch.  They can now communicate with each other and with any system reachable via nxge2.  If primary were running Solaris 10, communication with the guests would not be possible.  This is different with Solaris 11, please see the Admin Guide for details.  Note that I've given both the vswitch and the vnet devices some sensible names, something I always recommend. Unless told otherwise, the LDoms Manager software will automatically assign MAC addresses to all network elements that need one.  It will also make sure that these MAC addresses are unique and reuse MAC addresses to play nice with all those friendly DHCP servers out there.  However, if we want to do this manually, we can also do that.  (One reason might be firewall rules that work on MAC addresses.)  So let's give mars a manually assigned MAC address: root@sun # ldm set-vnet mac-addr=0:14:4f:f9:c4:13 admin-net mars Within the guest, these virtual network devices have their own device driver.  
In Solaris 10, they'd appear as "vnet0".  Solaris 11 would apply it's usual vanity naming scheme.  We can configure these interfaces just like any normal interface, give it an IP-address and configure sophisticated routing rules, just like on bare metal.  In many cases, using Jumbo Frames helps increase throughput performance.  By default, these interfaces will run with the standard ethernet MTU of 1500 bytes.  To change this,  it is usually sufficient to set the desired MTU for the virtual switch.  This will automatically set the same MTU for all vnet devices attached to that switch.  Let's change the MTU size of our admin-vsw from the example above: root@sun # ldm set-vsw mtu=9000 admin-vsw primary Note that that you can set the MTU to any value between 1500 and 16000.  Of course, whatever you set needs to be supported by the physical network, too. Another very common area of network configuration is VLAN tagging. This can be a little confusing - my advise here is to be very clear on what you want, and perhaps draw a little diagram the first few times.  As always, keeping a configuration simple will help avoid errors of all kind.  Nevertheless, VLAN tagging is very usefull to consolidate different networks onto one physical cable.  And as such, this concept needs to be carried over into the virtual world.  Enough of the introduction, here's a little diagram to help in explaining how VLANs work in LDoms: Let's remember that any VLANs not explicitly tagged have the default VLAN ID of 1. In this example, we have a vswitch connected to a physical network that carries untagged traffic (VLAN ID 1) as well as VLANs 11, 22, 33 and 44.  There might also be other VLANs on the wire, but the vswitch will ignore all those packets.  We also have two vnet devices, one for mars and one for venus.  Venus will see traffic from VLANs 33 and 44 only.  For VLAN 44, venus will need to configure a tagged interface "vnet44000".  For VLAN 33, the vswitch will untag all incoming traffic for venus, so that venus will see this as "normal" or untagged ethernet traffic.  This is very useful to simplify guest configuration and also allows venus to perform Jumpstart or AI installations over this network even if the Jumpstart or AI server is connected via VLAN 33.  Mars, on the other hand, has full access to untagged traffic from the outside world, and also to VLANs 11,22 and 33, but not 44.  On the command line, we'd do this like this: root@sun # ldm add-vsw net-dev=nxge2 pvid=1 vid=11,22,33,44 admin-vsw primary root@sun # ldm add-vnet admin-net pvid=1 vid=11,22,33 admin-vsw mars root@sun # ldm add-vnet admin-net pvid=33 vid=44 admin-vsw venus Finally, I'd like to point to a neat little option that will make your live easier in all those cases where configurations tend to change over the live of a guest system.  It's the "id=<somenumber>" option available for both vswitches and vnet devices.  Normally, Solaris in the guest would enumerate network devices sequentially.  However, it has ways of remembering this initial numbering.  This is good in the physical world.  In the virtual world, whenever you unbind (aka power off and disassemble) a guest system, remove and/or add network devices and bind the system again, chances are this numbering will change.  Configuration confusion will follow suit.  
To avoid this, nail down the initial numbering by assigning each vnet device it's device-id explicitly: root@sun # ldm add-vnet admin-net id=1 admin-vsw venus Please consult the Admin Guide for details on this, and how to decipher these network ids from Solaris running in the guest. Thanks for reading this far.  Links for further reading are essentially only the Admin Guide and Reference Manual and can be found above.  I hope this is useful and, as always, I welcome any comments.

    Read the article

  • Enterprise 2.0 Conference: Building Social Business

    - by kellsey.ruppel
    The way we work is changing rapidly, offering an enormous competitive advantage to those who embrace the new tools that enable contextual, agile and simplified information exchange and collaboration to distributed workforces and networks of partners and customers. As many of you are aware, Enterprise 2.0 is the term for the technologies and business practices that liberate the workforce from the constraints of legacy communication and productivity tools like email. It provides business managers with access to the right information at the right time through a web of inter-connected applications, services and devices. Enterprise 2.0 makes accessible the collective intelligence of many, translating to a huge competitive advantage in the form of increased innovation, productivity and agility. The Enterprise 2.0 Conference takes a strategic perspective, emphasizing the bigger picture implications of the technology and the exploration of what is at stake for organizations trying to change not only tools, but also culture and process. Beyond discussion of the "why", there will also be in-depth opportunities for learning the "how" that will help you bring Enterprise 2.0 to your business.You won't want to miss this opportunity to learn and hear from leading experts in the fields of technology for business, collaboration, culture change and collective intelligence. Oracle is a proud Gold sponsor of the Enterprise 2.0 Conference, taking place this week in Boston. Come and learn about Oracle at the following panel sessions and Market Leaders Theater Sessions. Tuesday, June 19, 2012 at 1:30 p.m. Market Theater Presentation Into the Activity Stream, and Beyond! Introducing Oracle Social Network Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Tuesday, June 19, 2012 at 2:30 p.m.  Panel Session Innovation versus Integration Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Wednesday, June 20, 2012 at 1:30 p.m. Business Leadership Roundtable Oracle Panel Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Wednesday, June 20, 2012 at 3:00 p.m. Market Theater Presentation Into the Activity Stream, and Beyond! Introducing Oracle Social Network Oracle Speaker: Christian Finn, Senior Director of Evangelism, Oracle WebCenter Thursday, June 21, 2012 at 8:30 a.m. Panel Session Collecting and Processing Big Data: Architecting Systems that Scale Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development Thursday, June 21, 2012 at 11:00 a.m. Panel Session The Future of Big Data: What's Next Oracle Panel Speaker: Ashok Joshi, Senior Director, Berkeley DB Development Be sure to stop by and visit Oracle booth #501, to see live demonstrations of Oracle Social Network and Oracle WebCenter!

    Read the article

  • Deploying an ADF Secure Application using WLS Console

    - by juan.ruiz
    Last week I worked on a requirement from a customer who wanted to understand how to deploy an application with ADF Security to WLS without using JDeveloper. The main question was what steps were needed in order to set up Enterprise Roles, Security Policies and Application Credentials. In this entry I will explain the steps taken using JDeveloper 11.1.1.2.0. Requirements: Instead of building a sample application from scratch, we can use Andrejus's sample application that contains all the security pieces that we need. Open and migrate the project. Also make sure you adjust the database settings accordingly. Creating the EAR file Review the Security settings of the application by going into the Application -> Secure menu and see that there are two enterprise roles as well as the ADF Policies enforcing security on the main page. Make sure the Application Module uses the Data Source instead of a JDBC URL for its connection type, and also take note of the data source name - in my case I have: java:comp/env/jdbc/HrDS To facilitate access to this application once we deploy it, go to your ViewController project properties, select the Java EE Application category and give meaningful names to the context root as well as to the Application Name. Go to the ADFSecurityWL Application properties -> Deployment and create a new EAR deployment profile. Uncheck the Auto generate and Synchronize weblogic-jdbc.xml Descriptors During Deployment option. Deploy the application as an EAR file. Deploying the Application to WLS using the WLS Console On the WLS console create a JNDI data source. This is the part that I found the trickiest of the whole exercise, given that the name should match the AM's data source name; the naming convention that worked for me was jdbc.HrDS. Now, deploy the application manually by selecting Deployments -> Install, look for the EAR and follow the default steps. If this is the first time you deploy the application, once the deployment finishes you will be asked to Activate Changes on the domain; these changes insert all the security policies and application roles into the WLS instance. Creating Roles and User Groups for the Application To finish the after-deployment set up, we need to create the groups that are the equivalent of the Enterprise Roles of ADF Security. For our sample we have two Enterprise Roles: employeesApplication and managersApplication. After that, we create the application users and assign them into their respective groups. Now we can run the application and test the security constraints.

    Read the article

  • SQL SERVER – Rename Columnname or Tablename – SQL in Sixty Seconds #032 – Video

    - by pinaldave
    We all make mistakes at some point in time, and we all change our opinions. There are quite a lot of people in the world who have changed their name after they have grown up. Some corrected their parents' mistake and some created a new one. Well, databases are not protected from such incidents. There are many reasons why developers may want to change the name of a column or table after it was initially created. The goal of this video is not to dwell on the reasons but to learn how we can rename the column and table. Earlier I have written an article on this subject over here: SQL SERVER – How to Rename a Column Name or Table Name. I have revised the same article over here and created this video. There is one very important point to remember: by changing the column name or table name one creates the possibility of errors in the applications where the columns and tables are used. When any column or table name is changed, the developer should go through every place in the code base, ad-hoc queries, stored procedures, views and any other place where there is a possibility of their usage, and change them to the new name. If this is not followed up religiously, there is a good chance that the application will stop working due to the name change. One also has to remember that changing a column name does not change the name of the indexes, constraints etc. and they will continue to reference the old name. Though this will not stop the show, it will create visual discomfort as well as confusion in many cases. Here is my question back to you – have you ever changed a column name or table name in a production database (after the project went live)? If yes, what was the scenario and the need for doing it? After all, it is just a name. Let me know what you think of this video. Here is the updated script.
    USE tempdb
    GO
    CREATE TABLE TestTable (ID INT, OldName VARCHAR(20))
    GO
    INSERT INTO TestTable VALUES (1, 'First')
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the ColumnName
    sp_RENAME 'TestTable.OldName', 'NewName', 'Column'
    GO
    -- Check the Tabledata
    SELECT * FROM TestTable
    GO
    -- Rename the TableName
    sp_RENAME 'TestTable', 'NewTable'
    GO
    -- Check the Tabledata - Error
    SELECT * FROM TestTable
    GO
    -- Check the Tabledata - New
    SELECT * FROM NewTable
    GO
    -- Cleanup
    DROP TABLE NewTable
    GO
    Related Tips in SQL in Sixty Seconds: SQL SERVER – How to Rename a Column Name or Table Name. What would you like to see in the next SQL in Sixty Seconds video? Reference: Pinal Dave (http://blog.sqlauthority.com) Filed under: Database, Pinal Dave, PostADay, SQL, SQL Authority, SQL in Sixty Seconds, SQL Query, SQL Scripts, SQL Server, SQL Server Management Studio, SQL Tips and Tricks, T SQL, Technology, Video Tagged: Excel

    Read the article

  • On StringComparison Values

    - by Jesse
    When you use the .NET Framework's String.Equals and String.Compare methods, do you use an overload that takes a StringComparison enumeration value? If not, you should, because the value provided for that StringComparison argument can have a big impact on the results of your string comparison. The StringComparison enumeration defines values that fall into three different major categories:
    Culture-sensitive comparison using a specific culture, defaulted to the Thread.CurrentThread.CurrentCulture value (StringComparison.CurrentCulture and StringComparison.CurrentCultureIgnoreCase)
    Invariant culture comparison (StringComparison.InvariantCulture and StringComparison.InvariantCultureIgnoreCase)
    Ordinal (byte-by-byte) comparison (StringComparison.Ordinal and StringComparison.OrdinalIgnoreCase)
    There is a lot of great material available that details the technical ins and outs of these different string comparison approaches. If you're at all interested in the topic, these two MSDN articles are worth a read:
    Best Practices For Using Strings in the .NET Framework: http://msdn.microsoft.com/en-us/library/dd465121.aspx
    How To Compare Strings: http://msdn.microsoft.com/en-us/library/cc165449.aspx
    Those articles cover the technical details of string comparison well enough that I'm not going to reiterate them here, other than to say that the upshot is that you typically want to use the culture-sensitive comparison whenever you're comparing strings that were entered by or will be displayed to users, and the ordinal comparison in nearly all other cases. So where does that leave the invariant culture comparisons? The "Best Practices For Using Strings in the .NET Framework" article has the following to say: "On balance, the invariant culture has very few properties that make it useful for comparison. It does comparison in a linguistically relevant manner, which prevents it from guaranteeing full symbolic equivalence, but it is not the choice for display in any culture. One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display. For example, if a large data file that contains a list of sorted identifiers for display accompanies an application, adding to this list would require an insertion with invariant-style sorting." I don't know about you, but I feel like that paragraph is a bit lacking. Are there really any "real world" reasons to use the invariant culture comparison? I think the answer to this question is "yes", but in order to understand why, we should first think about what the invariant culture comparison really does. The invariant culture comparison is really just a culture-sensitive comparison using a special invariant culture (Michael Kaplan has a great post on the history of the invariant culture on his blog: http://blogs.msdn.com/b/michkap/archive/2004/12/29/344136.aspx). This means that the invariant culture comparison will apply the linguistic customs defined by the invariant culture, which are guaranteed not to differ between different machines or execution contexts. This sort of consistency does prove useful if you need to maintain a list of strings that are sorted in a meaningful and consistent way regardless of the user viewing them or the machine on which they are being viewed. Example: Prototype Names. Let's say that you work for a large multi-national toy company with branch offices in 10 different countries.
Each year the company would work on 15-25 new toy prototypes, each of which is assigned a "code name" while it is under development. Coming up with fun new code names is a big part of the company culture that everyone really enjoys, so to be fair the CEO of the company spent a lot of time coming up with a prototype naming scheme that would be fun for everyone to participate in, fair to all of the different branch locations, and accessible to all members of the organization regardless of the country they were from and the language that they spoke. Each new prototype will get a code name that begins with the letter following the previously created name, using the alphabetical order of the Latin/Roman alphabet. Each new year, prototype names would start back at "A". The country that leads the prototype development effort gets to choose the name in their native language. (An appropriate Romanization system will be used for countries where the primary language is not written in the Latin/Roman alphabet. For example, the Pinyin system could be used for Chinese). To avoid repeating names, a list of all current and past prototype names will be maintained on each branch location's company intranet site. Assuming that maintaining a single pre-sorted list is not feasible among all of the highly distributed intranet implementations, what string comparison method would you use to sort each year's list of prototype names so that the list is both meaningful and consistent regardless of the country within which the list is being viewed? Sorting the list with a culture-sensitive comparison using the default configured culture on each country's intranet server would probably work most of the time, but subtle differences between cultures could mean that two different people would see a list that was sorted slightly differently. The CEO wants the prototype names to be a unifying aspect of company culture and is adamant that everyone see the same list sorted in the same order, and there's no way to guarantee a consistent sort across different cultures using the culture-sensitive string comparison rules. The culture-sensitive sort would produce a meaningful list for the specific user viewing it, but it wouldn't always be consistent between different users. Sorting with the ordinal comparison would certainly be consistent regardless of the user viewing it, but would it be meaningful? Let's say that the current year's prototype name list looks like this:
    Antílope (Spanish)
    Babouin (French)
    Cahoun (Czech)
    Diamond (English)
    Flosse (German)
    If you were to sort this list using ordinal rules you'd end up with:
    Antílope
    Babouin
    Diamond
    Flosse
    Cahoun
    This sort is no good because the entry for "C" appears at the bottom of the list after "F". This is because the Czech entry for the letter "C" makes use of a diacritic (accent mark). The ordinal string comparison does a byte-by-byte comparison of the code points that make up each character in the string, and the code point for the "C" with the diacritic mark is higher than any letter without a diacritic mark, which pushes that entry to the bottom of the sorted list. The CEO wants each country to be able to create prototype names in their native language, which means we need to allow for names that might begin with letters that have diacritics, so ordinal sorting kills the meaningfulness of the list. As it turns out, this situation is actually well-suited for the invariant culture comparison.
The invariant culture accounts for linguistically relevant factors like the use of diacritics but will provide a consistent sort across all machines that perform the sort. Now that we've walked through this example, the following line from the "Best Practices For Using Strings in the .NET Framework" article makes a lot more sense: "One of the few reasons to use StringComparison.InvariantCulture for comparison is to persist ordered data for a cross-culturally identical display." That line describes the prototype name example perfectly: we need a way to persist ordered data for a cross-culturally identical display. While this example is 100% made-up, I think it illustrates that there are indeed real-world situations where the invariant culture comparison is useful.
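    To make the behavior concrete, here is a minimal C# sketch (not from the original article) that sorts the prototype list with both comparers. It assumes the Czech entry is spelled "Čahoun" with a háček, since the discussion of diacritics implies the leading "C" is accented and the accent appears to have been lost in transcription.
    using System;
    using System.Collections.Generic;

    class PrototypeNameSort
    {
        static void Main()
        {
            // Prototype names from the example above ("Čahoun" spelling assumed).
            var names = new List<string> { "Antílope", "Babouin", "Čahoun", "Diamond", "Flosse" };

            // Ordinal sort: raw code-point order pushes "Čahoun" below "Flosse".
            var ordinal = new List<string>(names);
            ordinal.Sort(StringComparer.Ordinal);
            Console.WriteLine(string.Join(", ", ordinal.ToArray()));

            // Invariant-culture sort: linguistically aware, and identical on every machine.
            var invariant = new List<string>(names);
            invariant.Sort(StringComparer.InvariantCulture);
            Console.WriteLine(string.Join(", ", invariant.ToArray()));
        }
    }
    Running this prints the two orderings discussed above; swapping in StringComparer.CurrentCulture instead would make the result depend on the machine's configured culture, which is exactly what the CEO wants to avoid.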

    Read the article

  • Changing the default installation path to a newly installed hard disk

    - by mgj
    Hi, I am currently working on a dual-booted PC. I am using Windows XP and Ubuntu 10.04 Lucid Lynx, released in April 2010. The partition allocated to Ubuntu that I am making use of is almost exhausted. The current disk usage on the PC for the Ubuntu OS looks like this:
    bodhgaya@pc146724-desktop:~$ df -h
    Filesystem Size Used Avail Use% Mounted on
    /dev/sda2 8.6G 8.0G 113M 99% /
    none 998M 268K 998M 1% /dev
    none 1002M 580K 1002M 1% /dev/shm
    none 1002M 100K 1002M 1% /var/run
    none 1002M 0 1002M 0% /var/lock
    none 1002M 0 1002M 0% /lib/init/rw
    /dev/sda1 25G 16G 9.8G 62% /media/C
    /dev/sdb1 37G 214M 35G 1% /media/ubuntulinuxstore
    bodhgaya@pc146724-desktop:~$ cd /tmp
    I am trying to mount a 40GB new hard disk (/dev/sdb1 - given below) along with my existing Ubuntu system to overcome the hard disk space issues. I referred to the following tutorial to mount a new hard disk onto the system: http://www.smorgasbord.net/how-to-in...untu-linux%20/ I was able to successfully mount this hard disk for the Ubuntu OS. I have this new hard disk set up in the /media/ubuntulinuxstore directory. The current partition layout on my system looks like this:
    bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$ sudo fdisk -l
    [sudo] password for bodhgaya:
    Disk /dev/sda: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0x446eceb5
    Device Boot Start End Blocks Id System
    /dev/sda1 * 2 3264 26210047+ 7 HPFS/NTFS
    /dev/sda2 3265 4385 9004432+ 83 Linux
    /dev/sda3 4386 4863 3839535 82 Linux swap / Solaris
    Disk /dev/sdb: 40.0 GB, 40000000000 bytes
    255 heads, 63 sectors/track, 4863 cylinders
    Units = cylinders of 16065 * 512 = 8225280 bytes
    Sector size (logical/physical): 512 bytes / 512 bytes
    I/O size (minimum/optimal): 512 bytes / 512 bytes
    Disk identifier: 0xfa8afa8a
    Device Boot Start End Blocks Id System
    /dev/sdb1 1 4862 39053983+ 7 HPFS/NTFS
    bodhgaya@pc146724-desktop:/media/ubuntulinuxstore$
    Now, I have a concern about the "location" where new software will be installed. Generally, software is installed via the terminal, and by default a fixed path is used for where the post-installation setup files can be found (I am talking in the context of the drive). This is like the typical case of Windows, where software by default is installed on the C: drive. These days people customize their installations to a drive which they find apt to serve their purpose (generally based on the availability of hard disk space). I am trying to figure out how to customize the same for Ubuntu. As we all know, most software is installed via commands given from the terminal. My roadblock is how to redirect the default path, used by the terminal when files get installed, to this new hard disk. If done, this will help me overcome the space constraints I am currently facing with the partition on which my Ubuntu is initially installed. It would also save me the time of formatting my system and reinstalling Ubuntu and other software all over again. Please help me with this; your suggestions are much appreciated.

    Read the article

  • Networking in VirtualBox

    - by Fat Bloke
    Networking in VirtualBox is extremely powerful, but can also be a bit daunting, so here's a quick overview of the different ways you can set up networking in VirtualBox, with a few pointers as to which configurations should be used and when. VirtualBox allows you to configure up to 8 virtual NICs (Network Interface Controllers) for each guest vm (although only 4 are exposed in the GUI) and for each of these NICs you can configure:
    Which virtualized NIC-type is exposed to the Guest. Examples include: Intel PRO/1000 MT Server (82545EM), AMD PCNet FAST III (Am79C973, the default) or a Paravirtualized network adapter (virtio-net).
    How the NIC operates with respect to your Host's physical networking. The main modes are: Network Address Translation (NAT), Bridged networking, Internal networking, Host-only networking, and NAT with Port-forwarding.
    The choice of NIC-type comes down to whether the guest has drivers for that NIC. VirtualBox suggests a NIC based on the guest OS-type that you specify during creation of the vm, and you rarely need to modify this. But the choice of networking mode depends on how you want to use your vm (client or server) and whether you want other machines on your network to see it. So let's look at each mode in a bit more detail... Network Address Translation (NAT) This is the default mode for new vm's and works great in most situations when the Guest is a "client" type of vm (i.e. most network connections are outbound). Here's how it works: When the guest OS boots, it typically uses DHCP to get an IP address. VirtualBox will field this DHCP request and tell the guest OS its assigned IP address and the gateway address for routing outbound connections. In this mode, every vm is assigned the same IP address (10.0.2.15) because each vm thinks it is on its own isolated network. And when it sends its traffic via the gateway (10.0.2.2), VirtualBox rewrites the packets to make them appear as though they originated from the Host, rather than the Guest (running inside the Host). This means that the Guest will work even as the Host moves from network to network (e.g. laptop moving between locations), and from wireless to wired connections too. However, how does another computer initiate a connection into a Guest? e.g. connecting to a web server running in the Guest. This is not (normally) possible using NAT mode as there is no route into the Guest OS. So for vm's running servers we need a different networking mode... Bridged Networking Bridged Networking is used when you want your vm to be a full network citizen, i.e. to be an equal to your host machine on the network. In this mode, a virtual NIC is "bridged" to a physical NIC on your host, like this: The effect of this is that each VM has access to the physical network in the same way as your host. It can access any service on the network such as external DHCP services, name lookup services, and routing information just as the host does. Logically, the network looks like this: The downside of this mode is that if you run many vm's you can quickly run out of IP addresses, or your network administrator gets fed up with you asking for statically assigned IP addresses. Secondly, if your host has multiple physical NICs (e.g. Wireless and Wired) you must reconfigure the bridge when your host jumps networks. Hmm, so what if you want to run servers in vm's but don't want to involve your network administrator? Maybe one of the next 2 modes is for you...
Internal Networking When you configure one or more vm's to sit on an Internal network, VirtualBox ensures that all traffic on that network stays within the host and is only visible to vm's on that virtual network. Configuration looks like this: The internal network ( in this example "intnet" ) is a totally isolated network and so is very "quiet". This is good for testing when you need a separate, clean network, and you can create sophisticated internal networks with vm's that provide their own services to the internal network. (e.g. Active Directory, DHCP, etc). Note that not even the Host is a member of the internal network, but this mode allows vm's to function even when the Host is not connected to a network (e.g. on a plane). Note that in this mode, VirtualBox provides no "convenience" services such as DHCP, so your machines must be statically configured or one of the vm's needs to provide a DHCP/Name service. Multiple internal networks are possible and you can configure vm's to have multiple NICs to sit across internal and other network modes and thereby provide routes if needed. But all this sounds tricky. What if you want an Internal Network that the host participates on with VirtualBox providing IP addresses to the Guests? Ah, then for this, you might want to consider Host-only Networking... Host-only Networking Host-only Networking is like Internal Networking in that you indicate which network the Guest sits on, in this case, "vboxnet0": All vm's sitting on this "vboxnet0" network will see each other, and additionally, the host can see these vm's too. However, other external machines cannot see Guests on this network, hence the name "Host-only". Logically, the network looks like this: This looks very similar to Internal Networking but the host is now on "vboxnet0" and can provide DHCP services. To configure how a Host-only network behaves, look in the VirtualBox Manager...Preferences...Network dialog: Port-Forwarding with NAT Networking Now you may think that we've provided enough modes here to handle every eventuality but here's just one more... What if you cart around a mobile-demo or dev environment on, say, a laptop and you have one or more vm's that you need other machines to connect into? And you are continually hopping onto different (customer?) networks. In this scenario: NAT - won't work because external machines need to connect in. Bridged - possibly an option, but does your customer want you eating IP addresses and can your software cope with changing networks? Internal - we need the vm(s) to be visible on the network, so this is no good. Host-only - same problem as above, we want external machines to connect in to the vm's. Enter Port-forwarding to save the day! Configure your vm's to use NAT networking; Add Port Forwarding rules; External machines connect to "host":"port number" and connections are forwarded by VirtualBox to the guest:port number specified. For example, if your vm runs a web server on port 80, you could set up rules like this:  ...which reads: "any connections on port 8080 on the Host will be forwarded onto this vm's port 80".  This provides a mobile demo system which won't need re-configuring every time you open your laptop lid. Summary VirtualBox has a very powerful set of options allowing you to set up almost any configuration your heart desires. For more information, check out the VirtualBox User Manual on Virtual Networking. -FB 

    Read the article

  • Documenting C# Library using GhostDoc and SandCastle

    - by sreejukg
    Documentation is an essential part of any IT project, especially when you are creating reusable components that will be used by other developers (such as class libraries). Without documentation, re-using a class library is almost impossible. Just think of coding .NET applications without the MSDN documentation (Oops, I can't think of it). Normally developers, who know the bits and pieces of their classes, see it as boring work to write the details again to generate the documentation. Also, the amount of work needed to create documentation and keep it up to date as the code changes makes the manual creation of documentation tedious, if not impossible. So what is the effective solution? Let me divide this into two steps: 1. Generate comments for your code while you are writing the code. 2. Create a documentation file using these comments. Now I am going to examine these processes.
    Step 1: Generate XML Comments automatically
    Most developers write comments for their code. The best thing is that the comments will be entered during the development process. Additionally, comments give a good reference to the code and make your code more manageable/readable. Later these comments can be converted into documentation, along with your source code, by identifying properties and methods. I found an add-in for Visual Studio, GhostDoc, that automatically generates XML documentation comments for C#. The add-in is available in the Visual Studio Gallery at MSDN. You can download it from the url http://visualstudiogallery.msdn.microsoft.com/en-us/46A20578-F0D5-4B1E-B55D-F001A6345748. I downloaded the free version from the above url. There is a professional version (you need to pay some $ for this) available that gives you some more features, but I found that the free version itself suits my requirements. The installation process is straightforward. A couple of clicks will do the work for you. The best thing with GhostDoc is that it supports multiple versions of Visual Studio, such as 2005, 2008 and 2010. After installing GhostDoc, when you start Visual Studio, the GhostDoc configuration dialog will appear. The first screen asks you to assign a hot key; pressing this hot key will insert the comment into your code file with the structure required by GhostDoc. Click Assign to go to the next step, where you configure the rules for generating the documentation from the code file. Click Create to start creating the rules. Click the Finish button to close this wizard. Now you have performed the necessary configuration required by GhostDoc. In the Visual Studio Tools menu you can now find GhostDoc, which gives you some options. Now let us examine how GhostDoc generates comments for a method. I have written the below code in my code-behind file.
    public Char GetChar(string str, int pos)
    {
        return str[pos];
    }
    Now I need to generate the comments for this function. Select the function and enter the hot key assigned during the configuration. GhostDoc will generate the comments as follows.
    /// <summary>
    /// Gets the char.
    /// </summary>
    /// <param name="str">The STR.</param>
    /// <param name="pos">The pos.</param>
    /// <returns></returns>
    public Char GetChar(string str, int pos)
    {
        return str[pos];
    }
    So this is a very handy tool that helps developers write comments easily. You can generate the XML documentation file separately while compiling the project. This will be done by the C# compiler. You can enable the XML documentation creation option (checkbox) under Project properties -> Build tab.
Now when you compile, the XML file will be created under the bin folder.
    Step 2: Generate the documentation from the XML file
    Now you have generated the XML documentation file. Sandcastle is the tool from Microsoft that generates MSDN-style documentation from the compiler-produced XML file. The project is available in CodePlex: http://sandcastle.codeplex.com/. Download and install Sandcastle on your computer. Sandcastle is a command line tool that doesn't have a rich GUI. If you want to automate the documentation generation, you will definitely be using the command line tools. Since I want to generate the documentation from the XML file generated in the previous step, I was expecting a GUI where I can see the options. There is a GUI available for Sandcastle called Sandcastle Help File Builder. See the link to the project in CodePlex: http://www.codeplex.com/wikipage?ProjectName=SHFB. You need to install Sandcastle and then the Sandcastle Help File Builder. From here I assume that you have installed both Sandcastle and Sandcastle Help File Builder successfully. Once you have installed the Help File Builder, it will be available in your All Programs list. Clicking on the Sandcastle Help File Builder GUI will launch the application. First you need to create a project. Click on File -> New Project. The New Project dialog will appear. Choose a folder to store your project file, give a name for your documentation project and click the Save button. Now you will see your project properties. From the Project Explorer, right click on Documentation Sources and click on the Add Documentation Source link. A documentation source is a file such as an assembly or a Visual Studio solution or project from which information will be extracted to produce API documentation. From the Add Documentation Source dialog, I have selected the XML file generated by my project. Once you add the XML file to the project, you will see the dll file automatically added by the Help File Builder. Now click on the Build button and the application will generate the help file. The Build window gives the result of each step. Once the process completes successfully, you will have the following output in the Build window. Now navigate to your help project (I have selected the folder My Documents\Documentation); inside the Help folder you can find the chm file. Opening the chm file will give you MSDN-like documentation. Documentation is an important part of the development life cycle. Sandcastle with GhostDoc makes this process easier, so that developers can implement documentation in their projects with simple-to-use steps.
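    The text GhostDoc generates is only a starting point; you would normally edit it into something meaningful before running it through Sandcastle. As a purely illustrative sketch (not something GhostDoc produces automatically), the same method might end up documented like this, using standard XML documentation tags:
    /// <summary>
    /// Returns the character at the specified position in a string.
    /// </summary>
    /// <param name="str">The string to read the character from.</param>
    /// <param name="pos">The zero-based position of the character to return.</param>
    /// <returns>The character found at <paramref name="pos"/>.</returns>
    /// <exception cref="System.IndexOutOfRangeException">
    /// Thrown when <paramref name="pos"/> is negative or beyond the end of <paramref name="str"/>.
    /// </exception>
    /// <example>
    /// <code>
    /// char c = GetChar("hello", 1); // returns 'e'
    /// </code>
    /// </example>
    public Char GetChar(string str, int pos)
    {
        return str[pos];
    }
    Comments written at this level of detail flow straight through the compiler-produced XML file into the Sandcastle output, so the generated chm reads like hand-written reference documentation.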

    Read the article

  • Rights Expiry Options in IRM 11g

    - by martin.abrahams
    Among the many enhancements in IRM 11g, we have introduced a couple of new rights expiry options that may be applied to any role. These options were supported in previous versions, but fell into the "advanced configuration" category. In 11g, the options can be applied simply by selecting a check-box in the properties of a role, as shown by the rather extreme example below, where the role allows access for just two minutes after they are sealed. The new options are: To define a role that expires automatically some period after it is assigned To define a role that evaluates expiry relative to the time that each document is sealed These options supplement the familiar options to allow open-ended access (limited by offline access and the ever-present option to revoke rights at any time) and the option to define time windows with specific start dates and end dates. The value of these options is easiest to illustrate with some publishing examples: You might define a role with a one year expiry to be assigned to users who purchase a one year subscription. For each individual user, the year would be calculated from the time that the role was assigned to them. You might define a role that allows documents to be accessed only for 24 hours from the time that they are published - perhaps as a preview mechanism designed to tempt users to sign up for a full subscription. Upon payment of a full fee, users can simply be reassigned a role that gives them greater access to exactly the same documents. In a corporate environment, you might use such roles for fixed term contractors or for workflows that involve information with a short lifespan, or perhaps as part of a compliance process that requires rights to be formally re-approved at intervals. Being role-based, the time constraints apply to any number of documents - including documents that have not yet been created. For example, a user with a one year subscription would have access to all documents published in the relevant classification during the year without any further configuration. Crucially, unlike other solutions, it is not the documents that expire, but the rights of particular users. Whereas some solutions make documents completely inaccessible for all users after expiry, Oracle IRM can allow some users to continue using documents while other users lose access. Equally crucially, a user whose rights have expired can always be granted fresh rights at any time - for example, because they renew their subscription or because a manager confirms that they still need the rights as part of a corporate compliance process. By applying expiry to rights rather than to documents, Oracle IRM avoids the risk of locking an organization out of its own information.
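    For illustration only, here is a small C# sketch of the two expiry calculations described above. It is not the Oracle IRM API, and the dates are made up, but it shows the difference between expiring relative to role assignment and expiring relative to the time a document was sealed.
    using System;

    class RightsExpirySketch
    {
        static void Main()
        {
            DateTime now = DateTime.UtcNow;

            // Expiry relative to when the role was assigned, e.g. a one-year subscription.
            DateTime roleAssignedAt = new DateTime(2011, 1, 10, 0, 0, 0, DateTimeKind.Utc);   // hypothetical date
            bool subscriptionExpired = now > roleAssignedAt.AddYears(1);

            // Expiry relative to when each document was sealed, e.g. a 24-hour preview window.
            DateTime documentSealedAt = new DateTime(2011, 1, 12, 9, 30, 0, DateTimeKind.Utc); // hypothetical date
            bool previewExpired = now > documentSealedAt.AddHours(24);

            Console.WriteLine("Subscription expired: {0}; preview expired: {1}",
                subscriptionExpired, previewExpired);
        }
    }
    Either way, the check is against the user's rights rather than the document itself, which is why rights can be renewed or revoked at any time without touching the sealed content.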

    Read the article

  • European Interoperability Framework - a new beginning?

    - by trond-arne.undheim
    The most controversial document in the history of the European Commission's IT policy is out. EIF is here, wrapped in the Communication "Towards interoperability for European public services", and including the new feature European Interoperability Strategy (EIS), arguably a higher strategic take on the same topic. Leaving EIS aside for a moment, the EIF controversy has been around IPR, defining open standards and about the proper terminology around standardization deliverables. Today, as the document finally emerges, what is the verdict? First of all, to be fair to those among you who do not spend your lives in the intricate labyrinths of Commission IT policy documents on interoperability, let's define what we are talking about. According to the Communication: "An interoperability framework is an agreed approach to interoperability for organisations that want to collaborate to provide joint delivery of public services. Within its scope of applicability, it specifies common elements such as vocabulary, concepts, principles, policies, guidelines, recommendations, standards, specifications and practices." The Good - EIF reconfirms that "The Digital Agenda can only take off if interoperability based on standards and open platforms is ensured" and also confirms that "The positive effect of open specifications is also demonstrated by the Internet ecosystem." - EIF takes a productive and pragmatic stance on openness: "In the context of the EIF, openness is the willingness of persons, organisations or other members of a community of interest to share knowledge and stimulate debate within that community, the ultimate goal being to advance knowledge and the use of this knowledge to solve problems" (p.11). "If the openness principle is applied in full: - All stakeholders have the same possibility of contributing to the development of the specification and public review is part of the decision-making process; - The specification is available for everybody to study; - Intellectual property rights related to the specification are licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software" (p. 26). - EIF is a formal Commission document. The former EIF 1.0 was a semi-formal deliverable from the PEGSCO, a working group of Member State representatives. - EIF tackles interoperability head-on and takes a clear stance: "Recommendation 22. When establishing European public services, public administrations should prefer open specifications, taking due account of the coverage of functional needs, maturity and market support." - The Commission will continue to support the National Interoperability Framework Observatory (NIFO), reconfirming the importance of coordinating such approaches across borders. - The Commission will align its internal interoperability strategy with the EIS through the eCommission initiative. - One cannot stress the importance of using open standards enough, whether in the context of open source or non-open source software. The EIF seems to have picked up on this fact: What does the EIF says about the relation between open specifications and open source software? The EIF introduces, as one of the characteristics of an open specification, the requirement that IPRs related to the specification have to be licensed on FRAND terms or on a royalty-free basis in a way that allows implementation in both proprietary and open source software. 
In this way, companies working under various business models can compete on an equal footing when providing solutions to public administrations, while administrations that implement the standard in their own software (software that they own) can share such software with others under an open source licence if they so decide. - EIF is now among the centerpieces of the Digital Agenda (even though this demands extensive inter-agency coordination in the Commission): "The EIS and the EIF will be maintained under the ISA Programme and kept in line with the results of other relevant Digital Agenda actions on interoperability and standards such as the ones on the reform of rules on implementation of ICT standards in Europe to allow use of certain ICT fora and consortia standards, on issuing guidelines on essential intellectual property rights and licensing conditions in standard-setting, including for ex-ante disclosure, and on providing guidance on the link between ICT standardisation and public procurement to help public authorities to use standards to promote efficiency and reduce lock-in." (Communication, p.7) All in all, quite a few good things have happened to the document in the two years it has been on the shelf or was being re-written, depending on your perspective, in any case, awaiting the storms to calm. The Bad - While a certain pragmatism is required, and governments cannot migrate to full openness overnight, EIF gives a bit too much room for governments not to apply the openness principle in full. Plenty of reasons are given, which should maybe have been put as challenges to be overcome: "However, public administrations may decide to use less open specifications, if open specifications do not exist or do not meet functional interoperability needs. In all cases, specifications should be mature and sufficiently supported by the market, except if used in the context of creating innovative solutions". - EIF does not use the internationally established terminology: open standards. Rather, the EIF introduces the notion of "formalised specification". How do "formalised specifications" relate to "standards"? According to the FAQ provided: The word "standard" has a specific meaning in Europe as defined by Directive 98/34/EC. Only technical specifications approved by a recognised standardisation body can be called a standard. Many ICT systems rely on the use of specifications developed by other organisations such as a forum or consortium. The EIF introduces the notion of "formalised specification", which is either a standard pursuant to Directive 98/34/EC or a specification established by ICT fora and consortia. The term "open specification" used in the EIF, on the one hand, avoids terminological confusion with the Directive and, on the other, states the main features that comply with the basic principle of openness laid down in the EIF for European Public Services. Well, this may be somewhat true, but in reality, Europe is 30 years behind in terminology. Unless the European Standardization Reform gets completed in the next few months, most Member States will likely conclude that they will go on referencing and using standards beyond those created by the three European endorsed monopolists of standardization, CEN, CENELEC and ETSI. Who can afford to begin following the strict Brussels rules for what they can call open standards when, in reality, standards stemming from global standardization organizations, so-called fora/consortia, dominate in the IT industry? What exactly is EIF saying?
Does it encourage Member States to go on using non-ESO standards as long as they call it something else? I guess I am all for it, although it is a bit cumbersome, no? Why was there so much interest around the EIF? The FAQ attempts to explain: Some Member States have begun to adopt policies to achieve interoperability for their public services. These actions have had a significant impact on the ecosystem built around the provision of such services, e.g. providers of ICT goods and services, standardisation bodies, industry fora and consortia, etc... The Commission identified a clear need for action at European level to ensure that actions by individual Member States would not create new electronic barriers that would hinder the development of interoperable European public services. As a result, all stakeholders involved in the delivery of electronic public services in Europe have expressed their opinions on how to increase interoperability for public services provided by the different public administrations in Europe. Well, it does not take two years to read 50 consultation documents, and the EU Standardization Reform is not yet completed, so, more pragmatically, you finally had to release the document. Ok, let's leave some of that aside because the document is out and some people are happy (and others definitely not). The Verdict Considering the controversy, the delays, the lobbying, and the interests at stake both in the EU, in Member States and among vendors large and small, this document is pretty impressive. As with a good wine that has not yet come to full maturity, let's say that it seems to be coming in in the 85-88/100 range, but only a more fine-grained analysis, enjoyment in good company, and ultimately, implementation, will tell. The European Commission has today adopted a significant interoperability initiative to encourage public administrations across the EU to maximise the social and economic potential of information and communication technologies. Today, we should rally around this achievement. Tomorrow, let's sit down and figure out what it means for the future.

    Read the article

  • Granular Clipboard Control in Oracle IRM

    - by martin.abrahams
    One of the main leak prevention controls that customers are looking for is clipboard control. After all, there is little point in controlling access to a document if authorised users can simply make unprotected copies by use of the cut and paste mechanism. Oddly, for such a fundamental requirement, many solutions only offer very simplistic clipboard control - and require the customer to make an awkward choice between usability and security. In many cases, clipboard control is simply an ON-OFF option. By turning the clipboard OFF, you disable one of the most valuable edit functions known to man. Try working for any length of time without copying and pasting, and you'll soon appreciate how valuable that function is. Worse, some solutions disable the clipboard completely - not just for the protected document but for all of the various applications you have open at the time. Normal service is only resumed when you close the protected document. In this way, policy enforcement bleeds out of the particular assets you need to protect and interferes with the entire user experience. On the other hand, turning the clipboard ON satisfies a fundamental usability requirement - but also makes it really easy for users to create unprotected copies of sensitive information, maliciously or otherwise. All they need to do is paste into another document. If creating unprotected copies is this simple, you have to question how much you are really gaining by applying protection at all. You may not be allowed to edit, forward, or print the protected asset, but all you need to do is create a copy and work with that instead. And that activity would not be tracked in any way. So, a simple ON-OFF control creates a real tension between usability and security. If you are only using IRM on a small scale, perhaps security can outweigh usability - the business can put up with the restriction if it only applies to a handful of important documents. But try extending protection to large numbers of documents and large user communities, and the restriction rapidly becomes really unwelcome. I am aware of one solution that takes a different tack. Rather than disable the clipboard, pasting is always permitted, but protection is automatically applied to any document that you paste into. At first glance, this sounds great - protection travels with the content. However, at any scale this model may not be so appealing once you've had to deal with support calls from users who have accidentally applied protection to documents that really don't need it - which would be all too easily done. This may help control leakage, but it also pollutes the system with documents that have policies applied with no obvious rhyme or reason, and it can seriously inconvenience the business by making non-sensitive documents difficult to access. And what policy applies if you paste some protected content into an already protected document? Which policy applies? There are no prizes for guessing that Oracle IRM takes a rather different approach. The Oracle IRM Approach Oracle IRM offers a spectrum of clipboard controls between the extremes of ON and OFF, and it leverages the classification-based rights model to give granular control that satisfies both security and usability needs. Firstly, we take it for granted that if you have EDIT rights, of course you can use the clipboard within a given document. Why would we force you to retype a piece of content that you want to move from HERE... to HERE...? 
If the pasted content remains in the same document, it is equally well protected whether it be at the beginning, middle, or end - or all three. So, the first point is that Oracle IRM always enables the clipboard if you have the right to edit the file. Secondly, whether we enable or disable the clipboard, we only affect the protected document. That is, you can continue to use the clipboard in the usual way for unprotected documents and applications regardless of whether the clipboard is enabled or disabled for the protected document(s). And if you have multiple protected documents open, each may have the clipboard enabled or disabled independently, according to whether you have Edit rights for each. So, even for the simplest cases - the ON-OFF cases - Oracle IRM adds value by containing the effect to the protected documents rather than to the whole desktop environment. Now to the granular options between ON and OFF. Thanks to our classification model, we can define rights that enable pasting between documents in the same classification - ie. between documents that are protected by the same policy. So, if you are working on this month's financial report and you want to pull some data from last month's report, you can simply cut and paste between the two documents. The two documents are classified the same way, subject to the same policy, so the content is equally safe in both documents. However, if you try to paste the same data into an unprotected document or a document in a different classification, you can be prevented. Thus, the control balances legitimate user requirements to allow pasting with legitimate information security concerns to keep data protected. We can take this further. You may have the right to paste between related classifications of document. So, the CFO might want to copy some financial data into a board document, where the two documents are sealed to different classifications. The CFO's rights may well allow this, as it is a reasonable thing for a CFO to want to do. But policy might prevent the CFO from copying the same data into a classification that is accessible to external parties. The above option, to copy between classifications, may be for specific classifications or open-ended. That is, your rights might enable you to go from A to B but not to C, or you might be allowed to paste to any classification subject to your EDIT rights. As for so many features of Oracle IRM, our classification-based rights model makes this type of granular control really easy to manage - you simply define that pasting is permitted between classifications A and B, but omit C. Or you might define that pasting is permitted between all classifications, but not to unprotected locations. The classification model enables millions of documents to be controlled by a few such rules. Finally, you MIGHT have the option to paste anywhere - such that unprotected copies may be created. This is rare, but a legitimate configuration for some users, some use cases, and some classifications - but not something that you have to permit simply because the alternative is too restrictive. As always, these rights are defined in user roles - so different users are subject to different clipboard controls as required in different classifications. So, where most solutions offer just two clipboard options - ON-OFF or ON-but-encrypt-everything-you-touch - Oracle IRM offers real granularity that leverages our classification model. 
Indeed, I believe it is the lack of a classification model that makes such granularity impractical for other IRM solutions, because the matrix of rules for controlling pasting would be impossible to manage - there are so many documents to consider, and more are being created all the time.
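    To see why the classification model keeps this manageable, here is a purely illustrative C# sketch (not Oracle IRM code). A handful of rules keyed by classification covers every document sealed to those classifications, present and future; the classification names are invented for the example, loosely following the CFO scenario above.
    using System;
    using System.Collections.Generic;

    class PasteRuleSketch
    {
        // Each rule: content from the source classification may be pasted into
        // documents sealed to any of the listed destination classifications.
        static readonly Dictionary<string, HashSet<string>> pasteRules =
            new Dictionary<string, HashSet<string>>
            {
                { "Monthly Financials", new HashSet<string> { "Monthly Financials", "Board Documents" } },
                { "Board Documents",    new HashSet<string> { "Board Documents" } }
            };

        static bool CanPaste(string sourceClassification, string targetClassification)
        {
            HashSet<string> allowed;
            return pasteRules.TryGetValue(sourceClassification, out allowed)
                   && allowed.Contains(targetClassification);
        }

        static void Main()
        {
            Console.WriteLine(CanPaste("Monthly Financials", "Board Documents"));    // True
            Console.WriteLine(CanPaste("Monthly Financials", "Partner Documents"));  // False - no rule allows it
        }
    }
    The point is that the rules are expressed against classifications, not against individual documents, so the matrix stays small no matter how many documents are created.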

    Read the article

< Previous Page | 109 110 111 112 113 114 115 116 117 118 119 120  | Next Page >